Kinetic Monte Carlo simulations of organic ferroelectrics

Ferroelectrics find broad applications, e.g. in non-volatile memories, but the switching kinetics in real, disordered materials are still incompletely understood. Here, we develop an electrostatic model to study ferroelectric switching using 3D Monte Carlo simulations. We apply this model to the prototypical small molecular ferroelectric trialkylbenzene-1,3,5-tricarboxamide (BTA) and find good agreement between the Monte Carlo simulations, experiments, and molecular dynamics studies. Since the model lacks any explicit steric effects, we conclude that these are of minor importance. While the material is shown to have a frustrated antiferroelectric ground state, it behaves as a normal ferroelectric under practical conditions due to the large energy barrier for switching that prevents the material from reaching its ground state after poling. We find that field-driven polarization reversal and spontaneous depolarization have orders of magnitude different switching kinetics. For the former, which determines the coercive field and is relevant for data writing, nucleation occurs at the electrodes, whereas for the latter, which governs data retention, nucleation occurs at disorder-induced defects. As a result, by reducing the disorder in the system, the polarization retention time can be increased dramatically while the coercive field remains unchanged.

Introduction

Ferroelectric materials find application in a broad range of fields, but a full understanding of the switching and especially the depolarization kinetics on all length and time scales is still lacking. A variety of computational models have been employed to tackle this problem and study different aspects of ferroelectrics. 1 First-principles calculations based on Density Functional Theory (DFT) are computationally expensive but can give detailed insight into phase diagrams, static domain structures or ultrathin films. [2][3][4][5] Molecular dynamics (MD) simulations can predict domain features and kinetics on intermediate length and time scales. 6,7 Finally, there are Monte Carlo simulations that are mainly based on electrostatic interactions. [8][9][10] These simulations can address experimental length and time scales but are often restricted to idealized morphologies. In contrast, we develop here a 3D electrostatic model that can reproduce ferroelectric properties and kinetics on experimental time scales taking realistic, disordered morphologies as input. The field of ferroelectrics has long been dominated by inorganic materials such as barium titanate (BTO) and lead zirconate titanate (PZT), with the notable exception of the polyvinylidene fluoride polymer (PVDF) and its various copolymers. Recently, a new class of organic ferroelectrics has been explored, the small molecular liquid crystals. Whereas there have been numerous experimental works on the ferroelectric behavior of these materials, 11,12 there is only a basic understanding of the underlying processes and kinetics. As a prototype small molecular ferroelectric, we focus here on trialkylbenzene-1,3,5-tricarboxamide (BTA). Although BTA has received extensive interest for its self-assembly properties, 13 it has only recently been experimentally proven to be ferroelectric. [14][15][16] Most previous theoretical work on this material has thus been focused on the self-assembly properties and the dipole moment of single stacks. [17][18][19][20] Recently, Zehe et al.
used a simple 2D Ising model to investigate the geometrical frustration between the hexagonally packed columns and the different roles of nearest and next-nearest neighbor interactions on the nature of the ground state. 21 Here we develop an electrostatic model that takes the full 3D morphology of BTA into account and reproduces experimentally observed ferroelectric properties using kinetic Monte Carlo simulations. We examine polarization hysteresis loops and retention, and we can rationalize the obtained parameter dependencies in the framework of thermally activated nucleation limited switching. We find an antiferroelectric ground state that in practice is not reached due to high activation energies leading to extremely slow depolarization kinetics. Structural disorder is found to be a critical parameter for polarization retention. The results show good agreement with experiments, and not only provide detailed insight into the mechanism of polarization switching in organic ferroelectrics, but also concrete guidelines to further improve the performance of organic, and possibly inorganic, ferroelectric devices.

Model

The molecular structure of BTA is shown in Fig. 1. It consists of a benzene core to which three chains are attached, each composed of a dipolar amide group and a flexible alkyl tail whose length can vary. Driven by π–π stacking interactions between the benzene cores and the formation of hydrogen bonds between the amide groups, these discotic molecules self-organize into supramolecular columnar structures (Fig. 1(c)). In each column, the dipolar amide groups form a triple helix, which results in a macrodipole that is oriented along the column axis. In the liquid crystalline phase, the columns organize in a hexagonal packing (Fig. 1(d)). Due to the flexibility of the alkyl tails, the side chains have enough mobility to allow rotation of the dipoles under an applied electric field. This flips the macrodipoles of the columns, and thereby the polarization of the material. In a recent paper we established experimentally that the bistable polarization of BTA reflects true ferroelectricity. 14 The ferroelectricity in BTA is thus caused by the collective behavior of the dipolar amide groups. We therefore focus on these dipoles and their interactions. The dipoles have fixed positions given by the structure of the BTA molecule and the morphology of the system. We assume that the permanent dipoles $\vec{\mu}_0$ only have two possible directions, up and down, which differ only in the z-component of their dipole moment. On top of these permanent dipoles we have to take into account induced dipoles. These are the result of the electron clouds that get shifted by the net local electric field $\vec{E}_\mathrm{loc}$. They are directed along the local field: $\vec{\mu}_\mathrm{ind} = \alpha \vec{E}_\mathrm{loc}$, with $\alpha$ the linear polarizability. The total dipole is the sum of the permanent and induced dipoles, $\vec{\mu} = \vec{\mu}_0 + \vec{\mu}_\mathrm{ind}$. To determine the flipping probability of a dipole, we calculate the energy difference between the initial and final state based on the dipole-dipole interactions between all dipoles within a certain cutoff radius, as well as the interaction with the externally applied field. Interactions outside of the cut-off radius are taken into account using the reaction field method.
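To make the bookkeeping behind the flipping rates concrete, the following sketch shows how one kinetic Monte Carlo step could be organized for such a two-state dipole model. It is a minimal illustration under stated assumptions, not the authors' actual code: the SI unit handling, the Coulomb prefactor with ε_r = 2 and the 1 THz attempt frequency anticipate values quoted later in the text, the Metropolis-style rate choice is one simple option, and the reaction-field correction for interactions beyond the cutoff is omitted.

```python
import numpy as np

# Minimal sketch of one kinetic Monte Carlo step for the two-state dipole model.
# All names (pos, mu, K_E, NU0) are illustrative, not taken from the paper's code.
K_E = 1.0 / (4 * np.pi * 8.854e-12 * 2.0)   # Coulomb prefactor with eps_r = 2 (assumed)
NU0, KT = 1e12, 4.14e-21                     # attempt frequency (Hz), k_B T near 300 K (J)

def pair_energy(mu_i, mu_j, r_vec):
    """Dipole-dipole interaction energy in SI units."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return K_E * (np.dot(mu_i, mu_j) - 3 * np.dot(mu_i, rhat) * np.dot(mu_j, rhat)) / r**3

def flip_energy(i, pos, mu, E_ext, cutoff):
    """Energy change for inverting the z-component of dipole i (the 'z-flip' mode)."""
    mu_new = mu[i] * np.array([1.0, 1.0, -1.0])
    dE = -np.dot(mu_new - mu[i], E_ext)                  # field term, U = -mu . E
    for j in range(len(mu)):
        r_vec = pos[j] - pos[i]
        if j != i and np.linalg.norm(r_vec) < cutoff:
            dE += pair_energy(mu_new, mu[j], r_vec) - pair_energy(mu[i], mu[j], r_vec)
    return dE

def kmc_step(pos, mu, E_ext, cutoff, rng):
    """Pick one flip with probability proportional to its rate; return (site, waiting time)."""
    dE = np.array([flip_energy(i, pos, mu, E_ext, cutoff) for i in range(len(mu))])
    rates = NU0 * np.exp(-np.clip(dE, 0.0, None) / KT)  # Metropolis-style: uphill flips activated
    total = rates.sum()
    i = rng.choice(len(mu), p=rates / total)
    mu[i][2] *= -1.0                                     # perform the z-flip
    return i, -np.log(rng.random()) / total              # Gillespie-style time increment
```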
22 Hence, the energy difference between the two possible polarization states of each dipole depends on the orientation of all other dipoles within the cutoff and has to be updated after each flipping event. We do not consider any rotational energy barrier between the two states as doing so would only add an offset to all energies involved and as such be equivalent to a change in the flipping rate prefactor. This prefactor determines the time scale of the simulations and changing it only results in absolute time/frequency differences in the results. The flipping rates are the input for a kinetic Monte Carlo simulation (kMC), which allows us to observe the dynamical behavior of the complete system and its response to externally applied fields. More details on all energy calculations and the simulation algorithm can be found in the ESI. † Being a liquid crystalline material, the system is subject to positional disorder, see Fig. 1(d). Based on previous experimental work, 14,23,24 we implement this disorder by introducing defects that represent a break in the hydrogen-bonded triple helix. This divides a column into subcolumns, each with their own translational offset, rotational orientation, and handedness. We start with a fixed amount of disorder, but in the section ''The effect of disorder'' we will vary the amount of disorder and investigate how this influences the ferroelectric properties. Further details about the disorder parameters are given in Table S1 and Fig. S3 (ESI †). We would like to note that this disorder is the only free parameter in the simulations. All other parameters are fixed and taken from experiment or DFT calculations, as discussed below and in the ESI. †

Fig. 1 Morphology of the BTA system. The BTA molecule (a) consists of a benzene core, three dipolar amide groups, and a flexible alkyl tail. The molecule is represented schematically in (b), and stacks into columns forming a triple hydrogen-bonded helix (c). In the liquid crystal phase these columns form a quasi-2D hexagonal lattice, which is sandwiched between electrodes as shown in (d). Disorder is introduced by defects (red crosses) that divide the columns into subcolumns.

Hysteresis loops

The main characteristic of a ferroelectric material is its polarization hysteresis loop as a function of the applied electric field. An example of a simulated hysteresis loop is shown in Fig. 2(a), with a shape that is typical for ferroelectrics. It should be noted that the loop is corrected for a linear background, as is done for the experimental loop in Fig. 2(a), as discussed in the ESI. † We will discuss three features of the hysteresis loops: (saturation) polarization, shape and coercive field. First, the polarization of the system is simply the dipole density and is thus determined by the position, orientation and magnitude of all dipoles. Since our morphology is fixed, the polarization is governed by the permanent dipole μ and the polarizability α. We determine these parameters using the results of DFT calculations on BTA columns. 17,20,21 These calculations have shown that when forming columns, BTA molecules exhibit a cooperativity effect, meaning that the dipole moment per molecule increases when more molecules are added to the column. Typically, the dipole moment per molecule will increase from ~7 D to ~12 D upon column elongation from 2 to 48 BTA units. We thus have a permanent dipole of 7 D per molecule, and an induced dipole of 5 D.
Taking μ₀ = 4 D per amide group and α = 1 eÅ² V⁻¹ gives a good approximation with a permanent dipole moment of 7.7 D per molecule when isolated and a total dipole of 11.6 D per molecule when integrated in an infinite stack. Note that μ₀ is the total dipole moment per amide, and not the axial component, which is why the permanent dipole moment per molecule is less than 3 × 4 D = 12 D. In terms of polarization this gives a total polarization P_s = 48 mC m⁻² for BTA-C6, which is slightly lower than the polarization found experimentally in Fig. 2(b). This corresponds to our earlier observation, where we found a higher than expected polarization which we attributed to an enhanced interaction between columns. 23 For simplicity we will from here on only discuss the normalized polarization P/P_s. Second, there is a noticeable difference in shape of the hysteresis loops obtained by the simulation using the default parameters and the experiment in Fig. 2(a). The experimental loops are sharp and nearly rectangular, whereas the simulated loops are more slanted. To investigate possible causes for this discrepancy, Fig. 2(b) also shows the hysteresis loops obtained from a simulation without disorder, and one where the BTA columns have an enhanced separation distance. The effect of disorder and large column separations on the hysteresis loops has been investigated previously, 25 and will be more extensively discussed further on in the text. For now, it suffices to conclude that the effect of disorder is too small to explain the discrepancy in slopes between experiment and simulations. In contrast, the simulation with the increased column separation does show a sharp increase in the slope of the loop and has a general shape in agreement with the experiment. For the enhanced column separation there is essentially no interaction between columns. This suggests that in experiments, even with a small column separation, the interaction between columns is limited and we overestimate this interaction in the simulations. This contradicts our earlier suggestion that an enhanced interaction between columns is responsible for higher than expected polarizations. 23 The reason for this overestimation could lie in the assumption of an isotropic dielectric medium with ε_r = 2 that surrounds the dipoles. In a real device this is obviously not the case, and the columns could be more strongly screened by for example the alkyl chains or regions of amorphous material. Third, we analyze the coercive field, which is here defined as the field where the polarization is zero. Comparing the two loops in Fig. 2(a), the simulated coercive field is about four times higher than the experimental one; note the different x-axes for simulated and measured curves. This is often the case with simulations, where one usually obtains an intrinsic coercive field several orders of magnitude above the experimental value. 1,26 The polarization switching in experiments is fully extrinsic and based on nucleation near defects, charged impurities, and/or interfaces. 27 Evidently our model does not fully capture the complex dynamics of a real sample, even though it does include defects, disorder, and electrodes. A clear nucleation-limited behavior is seen (see ESI † video), with slow nucleation and subsequent fast growth of the switched domain, which should thereby produce an extrinsic coercive field.
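As a sanity check on the quoted saturation polarization, the dipole density can be estimated directly from the total dipole per molecule and the volume per molecule in the hexagonal columnar packing. The lattice constants in the snippet below are assumed, order-of-magnitude values typical for BTA-type columns and are not taken from this paper, so the result should only be read as confirming the order of magnitude of P_s.

```python
# Back-of-the-envelope check of the saturation polarization P_s = dipole density.
# The lattice constants below are assumptions for illustration (roughly an
# intercolumnar spacing and a pi-stacking distance for BTA-type columns), not
# values taken from this paper.
DEBYE = 3.336e-30                 # C m per Debye
mu_per_molecule = 11.6 * DEBYE    # total (permanent + induced) dipole in an infinite stack
a = 1.7e-9                        # hexagonal lattice constant, m (assumed)
c = 0.35e-9                       # stacking distance along the column, m (assumed)
volume_per_molecule = (3 ** 0.5 / 2) * a ** 2 * c
P_s = mu_per_molecule / volume_per_molecule
print(f"P_s ~ {P_s * 1e3:.0f} mC/m^2")   # ~44 mC/m^2, same order as the quoted 48 mC/m^2
```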
However, considering the limited thickness of the simulation box, which is favorable for intrinsic switching, [28][29][30] the simulated switching is likely still partly intrinsic. Simulations with an increased box size (40 nm vs. 10 nm) yielded, on the other hand, only a modest (~5%) reduction of the coercive field. The coercive field depends on the field sweep speed and temperature. We can describe this dependency using the theory of thermally activated nucleation limited switching (TA-NLS) developed by Vopsaroiu et al., 31 which gives for the coercive field an expression of the form

$E_c = \dfrac{W_b - k_B T \ln(\nu_0 t)}{P_s V^*}$  (1)

where $W_b = w_b V^*$ is the flipping energy barrier, $k_B T$ the thermal energy, $\nu_0$ an attempt frequency, $P_s$ the saturation polarization, and $V^*$ the nucleation volume. The waiting time t is assumed to be inversely proportional to the field sweep frequency f. Previously we have shown that this TA-NLS theory provides a good description of the experimental switching kinetics in BTA. [23][24][25] We verify the applicability of the TA-NLS model here by simulating hysteresis loops and determining the coercive field as a function of frequency and temperature. The results are shown in Fig. 3, together with the fit to the TA-NLS model. The attempt frequency is fixed at the input (phonon) frequency of 1 THz. The energy barrier w_b = 0.18 eV nm⁻³ agrees with experimental values. The nucleation volume is 7.0 nm³, which is around 8 molecules and roughly corresponds to the average size of a subcolumn between defects.

Retention

An important but often overlooked property of ferroelectrics is their polarization retention. Especially in organic ferroelectrics the retention times are often poor, which precludes practical applications. Several driving forces responsible for polarization loss have been discussed, such as the depolarization field caused by dead interface layers or imperfect screening by the electrodes. 24 The depolarization mechanism was previously argued to be R-relaxation, which is a collective reversal of the amide dipole moments in ferroelectric domains. 15,24 Here, we investigate the retention by starting with a fully poled system and letting the polarization decay over time without an externally applied field. The resulting depolarization curve is generally described by a stretched exponential function:

$P(t) = P_0 \exp\!\left[-(t/\tau)^{\beta}\right]$  (2)

with $P_0$ the initial polarization, $\beta$ the stretching exponent, and $\tau$ the retention time. Fitting the simulated depolarization curves in Fig. S5 (ESI †) to eqn (2) gives the retention times shown in Fig. 4. A good agreement between simulations and experiment is seen when directly comparing the retention times. It should be noted that fitting the depolarization curves to eqn (2) is not trivial. Especially at the low temperatures that correspond to experimental conditions, where there is little polarization decay, the obtained retention times and stretch parameters can vary wildly. We therefore choose to fix the stretch parameter β to 0.13, an average value which gives decent fits for all temperatures (see Fig. S5, ESI †). For experimental depolarization curves the stretch parameter is usually found to be around 0.5. 24 The reasons for this difference are unclear at present but might relate to the absence of dead layers in the simulation. As was done for the coercive field, we use the TA-NLS model to describe the depolarization by thermal activation over an energy barrier $W_b$. 31 The retention time $\tau$ is then given by

$\tau = \nu_0^{-1} \exp\!\left(W_b / k_B T\right)$  (3)

Fitting the results to eqn (3) gives ν₀ = 0.38 MHz and W_b = 0.75 eV.
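For readers who want to reproduce the orders of magnitude, the snippet below evaluates the two TA-NLS expressions with the parameters quoted above. Note that eqn (1) is used in the reconstructed form given above and the waiting time is crudely taken as t = 1/f, so the numbers should be read as an illustration rather than as the paper's exact fit.

```python
import numpy as np

# Order-of-magnitude evaluation of the TA-NLS expressions with the quoted fit parameters.
kB = 8.617e-5                    # eV/K
Ps = 0.048                       # C/m^2, saturation polarization quoted for BTA-C6
V_star = 7.0e-27                 # m^3, nucleation volume (7.0 nm^3)
wb = 0.18 * 1.602e-19 / 1e-27    # J/m^3, energy barrier density 0.18 eV/nm^3
nu0 = 1e12                       # Hz, attempt frequency used for the hysteresis fits

def coercive_field(T, f):
    """Eqn (1): field at which the waiting time 1/f equals the switching time."""
    kBT_J = kB * T * 1.602e-19
    return (wb * V_star - kBT_J * np.log(nu0 / f)) / (Ps * V_star)

def retention_time(T, nu0_ret=0.38e6, Wb_ret=0.75):
    """Eqn (3): Arrhenius retention time with the depolarization fit parameters."""
    return np.exp(Wb_ret / (kB * T)) / nu0_ret

print(f"E_c(300 K, 250 Hz) ~ {coercive_field(300, 250) / 1e9:.2f} GV/m")
print(f"tau(300 K) ~ {retention_time(300):.1e} s")   # ~1e7 s, i.e. of the order of months
```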
This depolarization activation energy is similar to what is found experimentally. 23,24 However, the attempt frequency differs by orders of magnitude from the input attempt frequency ν₀ that was recovered in the analysis of the switching kinetics. A similar deviation between the attempt frequencies found in analyzing switching and depolarization kinetics is also observed in experiments. 23 We interpret this to indicate that for a stable nucleus to form, a single dipole flip is not enough. Instead, multiple flips need to occur simultaneously, leading to a reduced attempt frequency. This will be further discussed below in the section on the effect of disorder.

Flipping modes and the 2:1 state

So far, we have assumed that when a dipole flips, only its z-component changes. However, another mode of flipping is possible as well, corresponding to a full inversion of the dipole vector. Both flipping modes are illustrated in Fig. 5(a) and will be called z-flip and full flip from here on. Fig. 5(a) shows that the z-flip mode changes the helicity of the triple helix, whereas the full flip mode does not. Flipping of the helicity upon polarization reversal has consequences when investigating switching on BTA substituted with enantiomerically pure chains, which will be discussed in more detail in a future work. We can artificially restrict the system to one of the modes and simulate the hysteresis loops. We find that the coercive field of the full flip mode is ~4 times higher than that of the z-flip mode, see Fig. 5(b). The z-flip mode is thus energetically favored and should be the one responsible for dipole flipping in experiments. We can also analyze the energetics of the full flip mode by determining the coercive field as a function of temperature and frequency, as was done in Fig. 3 for the z-flip mode. We find (see Fig. S6, ESI †) an energy barrier of 0.78 eV nm⁻³ and a nucleation volume of 3 dipoles, which are significantly higher and lower, respectively, than for the z-flip. The high energy barrier is caused by the unfavorable head-to-head interaction that occurs between neighbors when a single dipole is fully flipped. The smaller nucleation volume arises because in the full flip mode only one helix will switch at a time, whereas in the z-flip mode all three helices must flip nearly simultaneously. The hysteresis loop of the full flip mode as obtained with kMC is not perfectly square and shows shoulders at ±2.5 GV m⁻¹. This shoulder corresponds to an intermediate state where in each column the dipoles in one helix point in the opposite direction to the other two helices, as shown in Fig. 5(a) and Fig. S7 (ESI †). This state is incompatible with the z-flip mode, because there switching occurs by breaking the whole triple helix structure molecule by molecule, instead of just reversing one helix at a time. The shoulder is therefore not observed in the z-flip hysteresis loop. The fact that it is not observed in experiments either is further evidence that in reality, flipping occurs through the z-flip mode. This is a refinement of our previous conclusion where we tacitly assumed that R-relaxation, which is the collective reversal of a domain that is responsible for the polarization loss, would correspond to the full flip mode. 15,24 The stability of this so-called 2:1 state has been investigated previously with DFT and MD, and it was found that for zero applied field this state is energetically more favorable than the 3:0 state with all dipoles pointing in one direction.
19,20,32 This is mainly due to the electrostatic interaction that favors antiparallel alignment of the three dipole helices. The fact that the 2:1 state is observed as an intermediate in the simulated hysteresis loops confirms that it indeed corresponds to a (local) energy minimum. A more detailed analysis of the energetics of this state can be found in the ESI. † The fact that the z-flip mode is the dominant flipping mode has been demonstrated previously through MD simulations by Bejagam et al. 18 In these simulations, both modes of flipping are allowed. The authors start with a BTA column polarized in one direction and apply an electric field to reverse the polarization. They found that upon polarization reversal, the helicity of the triple helix also reverses, corresponding to the z-flip mode. This behavior was attributed to a combination of electrostatic and steric effects. 33 Our current results show that the z-flip mode is favored even when steric effects are ignored. We have investigated the flipping process with further MD simulations (see Fig. S10 and ESI † for a detailed description and analysis of the MD simulations). We again find that the z-flip mode is the dominant mode, although the differences with the full flip mode are small. In the full flip mode, investigated here by examining transitions from the 2:1 state, a variety of intermediate states is possible due to thermal fluctuations. Indeed, when we apply a low electric field of 0.28 GV m⁻¹ on the 2:1 state, we observe small fluctuations, i.e. one of the BTA residues in the stack twists in the xy-plane resulting in an interchange of the values of two dihedral angles while keeping the same helicity, since this electric field is not high enough to induce the full flip in the time of our simulation. When a higher field of 0.36 GV m⁻¹ is applied on the same 2:1 stack, both M- and P-helicities are observed. Some intermediate states do not even keep the favorable H-bonding network, and the H-bonding helix is disrupted for a short time. With increasing value of the electric field, the self-assembled stack is fully switched from 2:1 to 0:3. On the other hand, when we apply an electric field of 0.22 GV m⁻¹ on a fully polarized 3:0 state, the dihedral angles flip through a value of 0° and consequently the helicity changes, without going through any intermediate states. This z-flip is therefore energetically more favorable since it requires a lower electric field than the full flip.

The effect of disorder

In our previous experimental work we have observed that the morphology and more specifically the degree of disorder can have a major influence on the ferroelectric properties of BTA. 23,25 Therefore we will now study the effects of disorder on the simulated hysteresis loops and retention. We control the disorder in the system by changing the average length of the subcolumns between defects. At each defect there is a translational shift and randomization of the rotational angle, as discussed before and shown in Fig. S3 (ESI †). We consider four different cases: high, medium, low and no disorder, corresponding to a mean subcolumn length of 7, 15, 20, and infinite molecules, respectively. Note that all results reported above were obtained for the system with medium disorder. Comparing the hysteresis loops for high, medium and no disorder in Fig. 6(a) shows that there is only a very minor decrease in the coercive field upon increasing the disorder. This holds for the whole range of frequencies and temperatures as shown in Fig.
S11 (ESI †). The change in shape of the loops is more significant, with more slanted loops for higher disorders. To explain this, we consider the hysteresis loop of our system as the sum of the response of the subcolumns. This is the idea behind the Preisach theory, which considers a ferroelectric to be a collection of perfect hysterons. 34 In this case, each subcolumn can be seen as such a hysteron with a square hysteresis loop and a well-defined coercive field. Due to the disorder in the system, there is a distribution in these coercive fields, which causes the total loop to become slanted. The higher the disorder, the broader the distribution in coercive fields, and the more slanted the loop becomes. A similar effect occurs when the distance between columns is increased as in Fig. 2(b), because there will no longer be any interaction between columns, as previously shown. 25 In contrast to the influence on the hysteresis loops, the influence of disorder on the polarization retention is significant. Fig. 6(b) shows the retention times as determined by the procedure described earlier. The retention is increased by several orders of magnitude upon decreasing the disorder. In the case of no disorder no polarization loss is observed at all at lower temperatures. The difference between the disorder dependence of the coercive field and retention stems from differences in the nucleation mechanism. By visual inspection of the simulation results we identified the typical nuclei for both cases as shown in Fig. 6(c and d). We find that for field-driven polarization reversal, nucleation almost exclusively occurs at the electrodes, and a nucleus of about three dipoles is required before a subcolumn fully switches. For spontaneous polarization reversal on the other hand, nucleation occurs mostly at defects in the bulk of the material. The nucleus is also slightly larger, which supports our earlier speculation that the reduced attempt frequency for depolarization is caused by a larger nucleus, involving a higher-order coincidence. Further inspection of the energetics of the nucleation processes reveals the reason for these different locations of the nuclei. Nucleation at an electrode is generally favored, but unstable without an applied electric field. In the case of the spontaneous reversal, nucleation therefore has to occur at defects, where it is stable without applied field. A more detailed explanation can be found in the ESI. † The disorder has thus little influence on the hysteresis loops as nucleation anyhow occurs at the electrodes, whereas it heavily affects the retention where nucleation occurs at defects. This observation provides a way to increase performance of (organic) ferroelectric devices. For application purposes a high retention time is desired, whereas the coercive field should still be low enough to allow reasonable operating voltages. In typical (inorganic) ferroelectrics the retention time and coercive field are usually coupled; when one increases, the other increases as well, as both scale with the energy barrier for switching. 23 Our results show that the additional degrees of freedom offered by supramolecular ferroelectrics allow tailoring of the disorder. On the one hand, improving processing and changing e.g. side chains to promote stacking might be the key to get high retention times while maintaining reasonable coercive fields. 23 On the other hand, increasing disorder might facilitate devices with very fast response times that can be operated at high frequencies. 
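The slanting mechanism invoked above, in which square hysteron loops with a distribution of subcolumn coercive fields add up to a slanted total loop, can be illustrated with a few lines of code. This is a generic Preisach-style toy model with an assumed Gaussian spread of coercive fields; it is not the simulation used in the paper.

```python
import numpy as np

# Toy Preisach-style picture: the total loop is the average of square hysteron loops
# whose coercive fields are spread by disorder. Spread and field range are illustrative.
def hysteron_loop(E_sweep, Ec):
    """Square loop of a single hysteron: switches up at +Ec, down at -Ec."""
    p, out = -1.0, []
    for E in E_sweep:
        if E >= Ec:
            p = 1.0
        elif E <= -Ec:
            p = -1.0
        out.append(p)
    return np.array(out)

def total_loop(E_sweep, Ec_mean=1.0, Ec_spread=0.0, n=500, seed=0):
    """Average polarization of n hysterons with normally distributed coercive fields."""
    rng = np.random.default_rng(seed)
    Ecs = np.abs(rng.normal(Ec_mean, Ec_spread, n))
    return np.mean([hysteron_loop(E_sweep, Ec) for Ec in Ecs], axis=0)

E = np.concatenate([np.linspace(-2, 2, 200), np.linspace(2, -2, 200)])
sharp = total_loop(E, Ec_spread=0.0)     # no disorder: nearly rectangular loop
slanted = total_loop(E, Ec_spread=0.3)   # broad Ec distribution: slanted loop
```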
Fig. 6 The effect of disorder on (a) the hysteresis loop and (b) the retention time. The hysteresis loops were simulated for 250 Hz and 300 K. Retention times were obtained by fitting the depolarization curves to eqn (2) with a fixed stretching parameter β = 0.2. No polarization loss was observed after 1 ms for the no disorder case below 600 K. Solid lines are a guide to the eye. The two nucleation mechanisms at an electrode (c) and defect (d) correspond to the field-driven and spontaneous polarization reversal, respectively.

Fig. 7 The ground state domain structure obtained after full depolarization of a system without disorder. To elucidate the domain structure, a periodic repetition of the original grid (indicated in red) is shown.

Ground state

We can investigate the ground state of our system by letting a simulation without applied field run until an equilibrium is reached. Details on these simulations are presented in the ESI. † When no disorder is present, we find that the ground state is a mixture of up and down fully polarized columns with zero net polarization. We can thus represent the domain structure as the 2D top view in Fig. 7. A stripe-like domain structure is observed, which is typical for a frustrated antiferroelectric. The electrostatic interaction between the macrodipoles of two neighboring columns tends to align them anti-parallel. Due to the geometric frustration the ground state is highly degenerate, and complex domain structures can be formed. We can quantitatively characterize the domain structure by looking at the correlation coefficients, which are defined as the dot product of a dipole and the mean of its (next) nearest neighbors (eqn S6, ESI †). The correlation coefficients for the domain structure in Fig. 7 are shown in Fig. S13 (ESI †). We find that indeed nearest neighbors tend to be antiparallel, and consequently next nearest neighbors have a slight tendency to be parallel. When disorder is introduced into the system, columns are no longer necessarily fully polarized, as shown in Fig. S14 (ESI †). Within a column, subcolumns can have different polarizations, resulting in partially polarized columns and a more complicated three-dimensional domain structure. The correlation coefficients are now shifted towards zero (no correlation), indicating a decrease in the tendency of (next nearest) neighbors to align antiparallel (parallel) and thus a more disordered domain structure. A similar antiferroelectric domain structure was previously suggested for BTA crystals based on X-ray diffraction experiments. 21,35 Zehe et al. studied several BTA compounds with different side-chains to investigate the interplay between electrostatic and steric interactions. They found that compounds with bulky side-chains, and thus larger steric hindrance, can form mesoscale domains with a net polarization. Compounds with smaller side-chains, such as the BTA-C6 investigated here, only formed non-polarized domain structures as in Fig. 7 due to the dominating antiferroelectric electrostatic interaction. The authors support their conclusions with a simple 2D Ising model based on two interaction constants for the nearest neighbor and next nearest neighbor interaction, where the latter are associated with steric interactions and can cause preferential parallel alignment when of correct sign and magnitude.
However, the relation of the interaction constants to actual materials remains somewhat unclear since it seems unlikely that steric interactions play a significant role for anything but the nearest neighbor interactions. It should be noted that our model only accounts for electrostatic interactions and ignores steric effects beyond the attemptto-flip frequency n 0 . Nevertheless, we obtain a good agreement with experiments. From this we can conclude that steric effects are of minor importance in the case of BTA-C6, in agreement with the conclusion from Zehe et al. Steric effects could play a role for other polar columnar liquid crystals of bulkier molecules, 36 where the ground state will still be antiferroelectric, but where the steric effects might allow mesoscale polar domains. Even though the ground state of BTA is thus antiferroelectric, this has little relevance for its practical applications as a ferroelectric. As we have shown here and in experiments previously, it is possible to polarize the material completely by applying an electric field. It is then kinetically frozen in this state due to the high activation energy for switching. Depending on the disorder, retention times of several months and more can be obtained. 23,37 The material will thus 'never' reach its ground state once it has been polarized. Conclusions In summary, we have developed an electrostatic model that is used as basis for 3D kinetic Monte Carlo simulations to describe switching kinetics in ferroelectrics. We found good agreement between simulations and experiments for a prototype molecular organic ferroelectric. Since the model does not explicitly include steric effects, this leads to the conclusion that these must be of minor importance for this material. Both hysteresis loops and depolarization curves could be simulated for a large range of temperatures and timescales. We investigated different flipping modes and found that the results of our model agree with those from molecular dynamics simulations. The theory of thermally activated nucleation limited switching was used to analyze all results and gave an energy barrier for switching and depolarization of around 1 eV. Even though the ground state of the system is found to be antiferroelectric, this state is under practical conditions never reached due to the slow kinetics associated with this high energy barrier. Finally, we found that nucleation occurs differently in the case of spontaneous polarization reversal (depolarization) compared to field-driven reversal in a hysteresis loop. During depolarization nucleation occurs at defects caused by disorder, while during a hysteresis loops it occurs at the electrodes. By reducing the disorder, the retention time can thus be dramatically increased while the coercive field remains unchanged. This provides a new pathway for the rational design and optimization of ferroelectric devices, specifically for memory applications. Although these results are obtained for a specific material, all conclusions are applicable to the whole class of columnar organic ferroelectrics. More generally, the model itself could be adapted to study other ferroelectric systems, such as BTO or PVDF. 8,38 For inorganic ferroelectrics however, one should take into account an elastic term that penalizes domain wall formation, which will increase the computational complexity. 
Since the model works for any fixed morphology, including irregular ones, systems with extended disorder, such as dipolar glasses, could also be investigated, although any increase in the degrees of freedom will come at the cost of increased computation times. Finally, the insight into the difference between field-driven and spontaneous reversal kinetics, brought about by differences in the rate-limiting nucleation site, can be expected to be relevant for disordered ferroelectrics in general.

Conflicts of interest

The authors declare no competing financial interests.
The Use of a Real-Time COVID-19 Standalone Device in an Emergency Department of a Tertiary Hospital in Singapore: A Pilot Observational Study

This study describes the implementation and utility of a standalone device designed, developed, and 3D-printed by PwC Singapore and Southeast Asia Consulting as a response to Corona Virus Disease 2019 (COVID-19), in the Emergency Department (ED) of the National University Hospital in Singapore. Over a 2-week period, all staff used the devices for the duration of their shifts, with the device additionally tagged to patients who were swabbed on suspicion of or surveillance for COVID-19 in the subsequent two weeks. Additional control hardware was placed in the ED to analyze (1) time-intervals of greatest interaction, (2) clusters of close physical distance among staff, (3) areas with high traffic, and (4) potential use of a rapid contact tracing capability. Time-day trends indicated the greatest interaction time-intervals during the beginning of the day, with Monday hosting the greatest average daily interactions across the first two weeks. Social cluster trends indicated the greatest average daily interactions between nurses–nurses during Phase 1, and patients–patients during Phase 2. User-location trends revealed the greatest average daily interaction counts at the intermediate care areas, isolation outdoor tent, pantry, and isolation holding units relative to other areas. Individual-level visualization and contact tracing capabilities were not utilized as nobody contracted COVID-19 during either phase. While congregation in intermediate and resuscitation areas is unavoidable within the ED context, the findings of this study were acted upon, improving social distancing within the pantry and between healthcare groups. This real-time solution addresses multiple privacy concerns while rapidly facilitating contact tracing.

Introduction

The coronavirus disease (COVID-19) pandemic has afflicted over 40 million people worldwide, leading to more than 1.1 million deaths as of October 2020 [1]. While Singapore was recently named by Bloomberg as "the world's best place to be during COVID," [2] with the case fatality being one of the lowest in the world (below 0.05%), likely contributed to by the disproportionately high number of younger migrant workers who were infected (about 95% of total cases), vigilance is still essential to prevent widespread community transmission, especially to the vulnerable population. As there is a delay between infection and isolation, since the incubation time of the virus can be up to two weeks, coupled with the asymptomatic nature of many patients (who are still able to transmit the virus), emerging evidence internationally has highlighted early contact tracing and effective social distancing as imperative to reduce the risk of the spread of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [3][4][5].
Setting and Participants

The pilot study was conducted in the ED of Singapore's NUH from 1 June 2020 to 12 June 2020 (Phase 1) and from 29 June 2020 to 10 July 2020 (Phase 2). Participants involved included on-duty clinicians, nurses, and patient service associates within the department. Phase 2 of the study included ED patients. However, no personal data or patient identifiers were captured for the study. This study was thus exempted from formal institutional review board review as no identifiers were collected or stored. The ED floor layout was studied to determine the best strategic locations for placement of hardware to maintain optimal wireless signals between devices. Briefly, the ED comprised resuscitation areas (capacity of 6 standard resuscitation trolleys and 3 negative pressure resuscitation rooms), four distinct intermediate care areas (total capacity of 44 monitored trolleys), an ambulatory area (3 triage counters, 5 main consultation rooms and 2 separate seated waiting areas), isolation zones (capacity of 15 monitored trolleys, 13 isolation rooms, 20 makeshift outdoor isolation cubicles and 20 temporary holding cubicles), and a staff pantry.

Study Design and Hardware

The study was designed to run for four weeks in total; Phases 1 and 2 lasted for two weeks each and ran sequentially but with a two-week break in between. The workflow processes of Phase 1 were reviewed; improvisations were made, and the revisions were tested out in Phase 2. A total of 10 location beacons were placed at strategic locations within the ED. These location beacons did not track the exact geographical location of the areas within the ED but instead were used as markers to identify the relative distance or location of tagged individuals (Figure 1). The internal components of the location beacons include an nRF52810 SoC supporting Bluetooth Low Energy (BLE), powered by a single CR2032 battery. A total of 75 personal beacons for individual use were prepared for healthcare workers and patients. The handling of the beacons for individual use is further highlighted in the workflow below. Beacon hardware features included in-built Bluetooth, Wi-Fi, 4 MB Flash Memory, a 400 mAh @ 3.7 V battery and an LCD screen.
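The paper does not spell out how the beacons convert BLE signals into the relative distances analyzed later; a common approach is to map received signal strength (RSSI) to distance with a log-distance path-loss model. The sketch below illustrates that idea only: the reference RSSI at 1 m, the path-loss exponent, and the function names are assumptions for illustration, not details of the Contra hardware.

```python
# Illustrative log-distance path-loss model for estimating relative distance from
# BLE RSSI. TX_POWER_1M and PATH_LOSS_EXPONENT are assumed calibration values,
# not parameters of the beacons used in this study.
TX_POWER_1M = -59.0        # measured RSSI (dBm) at 1 m, assumed
PATH_LOSS_EXPONENT = 2.0   # ~2 in free space, larger indoors, assumed

def estimate_distance(rssi_dbm: float) -> float:
    """Rough distance estimate (metres) from a single RSSI reading."""
    return 10 ** ((TX_POWER_1M - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def within_contact_threshold(rssi_dbm: float, threshold_m: float = 2.0) -> bool:
    """Apply a 2 m proximity filter, as used in the analysis, to one reading."""
    return estimate_distance(rssi_dbm) <= threshold_m

print(estimate_distance(-65.0))   # ~2.0 m with the assumed calibration
```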
Additional routers (TP-Link Archer MR400 AC1200 Wireless 4G Router, ShenZhen, China) and repeaters (TP-Link 860re Range Extender, ShenZhen, China) were installed at various locations within the ED. These helped to capture data emitted from the location and personal beacons (Figure 2) and reflected the data in real time onto the contact tracing portal and dashboard. The data were continually uploaded onto the server and analyzed to determine the interaction time and distance between individuals and the formation of clusters and areas of higher population density within the ED.

Administrative Workflow

In Phase 1, a designated administrative staff member oversaw the distribution of the beacons at the start of each shift. Each beacon with its unique beacon number tag was pre-matched with each healthcare worker and subsequently distributed accordingly prior to the start of their shift. There was no linkage between the beacons and the individual's mobile phone or any other device. At the back end, the unique beacon was tagged electronically to each of these healthcare workers' randomly generated proxy ID on the contact tracing portal. At the end of the shift, the staff sanitized and returned the beacon to the designated administrative staff, who untagged the beacon from the portal. The devices were then charged in a remote location at the end of the day to be ready for use the following day. Due to the limited number of beacons provided for the pilot study, Phase 1 was conducted on all day shifts during weekdays. Phase 2 was conducted predominantly in the isolation zones of the ED where patients who presented with signs and symptoms suspicious of COVID-19 were isolated. This Phase was carried out on both day and evening shifts on weekdays. Phase 2 was intended to begin two weeks after the conclusion of Phase 1 to allow modifications and enhancements to be made, especially related to workflows and incorporation of patient movement data. A review of methods to tag and untag individuals to personal beacons was also conducted. After untagging, these beacons were also returned for charging, prior to their next use.

Tagging of COVID-19 Suspects

This additional workflow was part of Phase 2. Patients who required a COVID-19 swab after initial assessment by the doctors were identified for the study. COVID-19 suspects were issued a beacon to be kept physically with them for the duration of their stay within the ED. Each of these patients was given a unique beacon identification number, which was tagged onto the last four digits of their identity document.
Only the last four digits of each individual's identity document were captured as data on the dashboard; in the event that a patient were to be identified as COVID-19 positive, we would be able to match their actual identity with these last four digits and extract the relevant data needed to contact trace the other potentially affected individuals.

Data Analysis

All data were captured in real time onto the Dashboard. All count, time, and distance measurements were summarized using means. The data were subsequently summarized and displayed in Microsoft Power BI (Microsoft Corp, Redmond, WA, USA).

Phase 1

There was an average of 5582 daily interactions throughout the 10 days of Phase 1, at an average interaction distance of 1.31 m, when a filter of 2 m [15] was applied (Figure 3). No staff members contracted COVID-19 during Phase 1. As a result, there was no scenario in which contact tracing was required. However, the ability to perform rapid contact tracing was made available to the contact tracing team should the need arise.
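As a concrete illustration of the kind of filtering behind the numbers reported next (the 2 m proximity filter, and the 1 m / more-than-1-minute social-cluster filter), the sketch below shows how logged pairwise interaction records could be aggregated. The record format and field names are hypothetical; they are not the actual schema of the Contra portal.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record format for one logged pairwise interaction; not the actual
# schema used by the Contra dashboard.
@dataclass
class Interaction:
    role_a: str         # e.g. "nurse", "doctor", "admin", "patient"
    role_b: str
    hour: int           # hour of day, 0-23
    distance_m: float   # estimated separation
    duration_min: float

def interactions_by_hour(log, max_distance=2.0):
    """Count interactions per hour of day after applying the 2 m proximity filter."""
    return Counter(i.hour for i in log if i.distance_m <= max_distance)

def cluster_counts(log, max_distance=1.0, min_duration=1.0):
    """Social-cluster counts by role pair (within 1 m for more than 1 minute)."""
    pairs = (tuple(sorted((i.role_a, i.role_b)))
             for i in log
             if i.distance_m <= max_distance and i.duration_min > min_duration)
    return Counter(pairs)

log = [Interaction("nurse", "nurse", 9, 0.8, 2.5),
       Interaction("doctor", "nurse", 10, 1.6, 0.5)]
print(interactions_by_hour(log))   # Counter({9: 1, 10: 1})
print(cluster_counts(log))         # Counter({('nurse', 'nurse'): 1})
```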
Based on the user-user interactions within a 2-m distance, the greatest time-periods of interaction among all staff occurred at 9:00 a.m. to 10:59 a.m. across the 10 days of Phase 1, with an average of 1444.8 interactions per day. The average daily interactions were additionally calculated by multiplying the average daily interactions per user with the number of users who were actively using the device on a given day. The greatest average daily interactions occurred on Mondays with a total of 6989 interactions across Phase 1, and the fewest on Thursdays with a total of 4111 interactions across Phase 1. Among these interactions, social clusters were examined to determine physical distancing compliance among hospital workers by filtering for user-user interactions that occurred within 1 m [16], for a duration of more than 1 min during Phase 1. User-user interactions were compared individually through the interaction category on the dashboard, displaying the largest average daily interactions occurring between nurses-nurses (640 interactions) over Phase 1 (Figure 4). Fewer average interactions were found between doctors-doctors (388 average daily interactions), doctors-nurses (161 average daily interactions), admin-admin (118 average daily interactions) and doctors-admin (47 average daily interactions) over the course of Phase 1. Taking advantage of the fixed location beacons, we were able to determine the various levels of interaction among staff within specific areas of the ED.
More specifically, user-location trends were analyzed to determine the locations that hosted greater crowding (average interaction distance), as well as greater interaction counts (average daily interactions), within the ED.

Phase 2

In Phase 2, we recognized that having an additional administrative staff member to oversee the workflow of pre-assigning the beacons, then tagging beacons to the matched healthcare workers' proxy ID onto the portal electronically, would not be a feasible long-term solution; hence it was improvised into a self-tagging system using a QR code generated for the personal beacons as well as for each individual staff member. A QR code carrying each staff member's unique proxy ID had been distributed prior to the start of Phase 2. At the beginning of each shift, the healthcare worker was given a personal beacon with its unique QR code and proceeded to sign into the portal using their own QR code, then scanned the beacon's QR code to tag themselves electronically to the beacon. At the end of the shift, the same process untagged the individual from the beacon. There was an average of 530 daily interactions throughout the 10 days of Phase 2, at an average interaction distance of 2 m, when a filter of 2 m [15] was applied (Figure 6). No staff members or patients contracted COVID-19 during Phase 2 either. Thus, there was no scenario in which contact tracing was required in Phase 2.
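A minimal sketch of the self-tagging bookkeeping described above: scanning a staff QR code and a beacon QR code associates the beacon with the staff member's proxy ID for the shift, and repeating the scan at the end of the shift removes the association. The in-memory dictionary and function names are illustrative assumptions, not the portal's actual implementation.

```python
# Illustrative tag/untag logic for the QR-code self-tagging workflow; the real
# portal presumably persists this server-side rather than in a dictionary.
active_tags: dict[str, str] = {}   # beacon_id -> proxy_id

def scan(proxy_id: str, beacon_id: str) -> str:
    """Tag the beacon to the proxy ID at the start of a shift, untag at the end."""
    if active_tags.get(beacon_id) == proxy_id:
        del active_tags[beacon_id]       # same pair scanned again: end of shift
        return "untagged"
    active_tags[beacon_id] = proxy_id    # start of shift
    return "tagged"

print(scan("proxy-042", "beacon-17"))    # tagged
print(scan("proxy-042", "beacon-17"))    # untagged
```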
Based on the user-user interactions within a 2-m distance during Phase 2, the greatest time-periods of interaction among all staff over the 10-day period occurred between 4:00 p.m. and 4:59 p.m., with an average of 37.8 interactions per day of Phase 2. The average daily interactions were not computed for Phase 2 due to logistical issues that resulted in a lack of hardware usage and accompanying data gaps. Among these interactions, social clusters were further examined within Phase 2 as well, in order to understand physical distancing compliance among hospital workers as well as their respective patients. User-user interactions that occurred within 1 m [16] of each other were filtered and compared by interaction category (Figure 7), uniquely identifying the greatest average daily interactions between patients-patients (82 average daily interactions) and doctors-doctors (31 average daily interactions). There were fewer average daily interactions found between doctors-patients (12), admin-admin (10) and doctors-admin (10), nurses-nurses (7), patients-admin (6), and doctors-nurses (3) within the ED. Taking advantage of the fixed location beacons once again, we were able to determine user-location trends among both staff and patients within Phase 2 of the ED (Figure 8). Intermediate care area 2 was uniquely identified once again for having one of the largest average daily interaction counts over the 10-day period (86.8), other than the Isolation holding unit which hosted 173.8 average daily interactions, relative to all other locations listed within the ED.

COVIDSafe in Australia and TraceTogether in Singapore require Bluetooth energy consumption. While these Bluetooth signals allow for the capturing of contacts within the vicinity without any recall biases, they force the application to continuously run in the foreground in order to work effectively. This hardware solution circumvents this issue by relying on stand-alone beacons instead of the limitations of the phone itself. Furthermore, where the elderly, who often are not technology-savvy, form a substantial proportion of presentations to the ED, the device itself is user friendly, which is critical within this population, as it requires no additional steps to activate it or power it off. The device automatically switches on after it has been tagged to the end-user and does not require any manual navigation on the user's end. In addition to being technologically user-friendly, the hardware is small and simplistic by design (Figure 9, Appendix 2), easily carried in the pockets or scrubs of healthcare workers or clipped on a ring and worn on a lanyard (Figure 10, Appendix 1). Hence, the device is hassle-free: easy to sanitize and charge before and after use.
Dashboard for Contact Tracing

While no patient or staff member contracted COVID-19 in either Phase 1 or 2, the dashboard can be activated to allow for individual-level visualization and contact tracing in the event of a positive case (Figure S1). Contact tracing via the Contra platform can be visualized and performed by the following four steps:

Step 1: Notify: Staff or patients who tested COVID-19 positive must notify the relevant authorities within the hospital. Staff can also notify on behalf of patients after their testing results are received.

Step 2: Flag: After confirmation that a particular staff member/patient has tested positive, the Contra dashboard administrator logs into the Contra dashboard. He/she then selects the appropriate filters (contact distance, contact time, days), searches for the user, and clicks on 'Flag as Infected'.

Step 3: Assess: After the person has been flagged, the system calculates a risk level for people in contact, based on their contact duration and distance with the infected person. At this point, the administrator selects the infected person in the dashboard and clicks on 'View Cluster Info' for information on all other people at risk.

Step 4: Communicate: The administrator and other relevant hospital departments can then reach out (outside the Contra platform) to people at risk, providing them with the necessary advisory and next steps.

Principal Findings

This hardware solution uniquely captures and visualizes cluster, count, time, and distance measurements in a user-friendly manner through the Contra dashboard without any privacy or security concerns. The key findings were the time-day trends of interaction, the social cluster trends within a 1-m distance, and the location data that displayed the areas of high traffic within the ED throughout Phases 1 and 2. Time-day trend findings were key in identifying when frequent and lengthy close contact was occurring among individuals on a day-to-day basis. The findings from Phase 1 suggested that the greatest amount of daily interaction time over the 10-day period among all staff members typically occurred in the mornings (9:00 a.m.-10:59 a.m.), with Monday hosting the greatest interaction counts relative to any other weekday. Phase 2 time-day trends, which included hardware usage by patients as well as staff within the ED, uniquely highlighted the evenings (4:00 p.m.-4:59 p.m.) as hosting the greatest counts of human traffic. While no day-by-day analysis was computed for Phase 2 due to a lack of hardware usage that created gaps within the data, Figure 6 suggests that Fridays were most crowded within the ED, with an emphasis on evening times, when patients were most likely being discharged. However, the information gathered ultimately lacks data on the weekends and night shifts, which resulted in an underestimate of the recorded number of healthcare staff on shift and of ED attendance; hence, it is important to note that the Phase 2 findings may not be entirely representative of the times and days of high traffic throughout the study. Based on the social cluster trends displaying physical distancing non-compliance (<1 m), the largest average daily interactions seemed to occur between nurses-nurses and doctors-doctors in Phase 1, and patients-patients in Phase 2. While social cluster information was not actionable within the context of the study, as no staff member or patient contracted COVID-19 during either Phase 1 or 2, these trends are critical in the event that contact tracing is required within the ED.
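Step 3 above describes a risk assessment based on contact duration and distance. The sketch below illustrates one way such a tiering and the 'View Cluster Info' listing could be computed; the thresholds, tier names, and record format are illustrative assumptions, not the Contra platform's actual rules or MOH guidance.

```python
from datetime import timedelta
from typing import Dict, List

def risk_level(total_contact: timedelta, min_distance_m: float) -> str:
    """Assign a coarse risk tier from cumulative contact time and closest distance."""
    if total_contact >= timedelta(minutes=15) and min_distance_m <= 1.0:
        return "high"
    if total_contact >= timedelta(minutes=5) and min_distance_m <= 2.0:
        return "medium"
    return "low"

def cluster_info(flagged_id: str, contacts: List[Dict]) -> List[Dict]:
    """List everyone who interacted with the flagged user, ranked by risk tier."""
    order = {"high": 0, "medium": 1, "low": 2}
    at_risk = []
    for c in contacts:
        if flagged_id in (c["a"], c["b"]):
            other = c["b"] if c["a"] == flagged_id else c["a"]
            at_risk.append({"user": other,
                            "risk": risk_level(c["duration"], c["distance_m"])})
    return sorted(at_risk, key=lambda r: order[r["risk"]])

# Example with hypothetical contact records for a flagged patient "P07".
log = [
    {"a": "P07", "b": "D01", "duration": timedelta(minutes=20), "distance_m": 0.8},
    {"a": "N03", "b": "P07", "duration": timedelta(minutes=6),  "distance_m": 1.5},
]
print(cluster_info("P07", log))  # D01 is high risk, N03 is medium risk
```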
In the event that an individual becomes positive, the Contra dashboard would allow the user to rapidly identify social clusters and points of contact within the ED, easily alerting and notifying the necessary individuals. Furthermore, it should be noted that the dashboard (Figure 8) allows various parameters to be adjusted (e.g., contact distance of 50 cm, 1 m, or 2 m, and contact time) to meet any change in the Ministry of Health (MOH) guidelines. Finally, the location interaction feature identified the highest traffic locations, uniquely identifying the intermediate areas (both 1 and 2), the outdoor isolation zone, and the pantry as having the highest recorded number of interactions per day during Phase 1, with Intermediate area 2 (and the Isolation holding area) additionally highlighted as high-traffic regions within Phase 2. However, it is important to consider that this information seems appropriate given the natural structure of the ED; both doctors and nurses frequently need to congregate around intermediate areas in order to work together and attend to patients on a normal day-to-day shift. For instance, when a sick patient presents to a given resuscitation area, many doctors and nurses are required to be in close proximity to properly attend to the patient for a prolonged period of time. Hence, social distancing may not be entirely possible due to the nature of the work itself, emphasizing a need for strict compliance with mask wearing, hand hygiene, and personal protective equipment among healthcare workers within the ED, which may circumvent the issue of proximity.

Implications and Effected Change

While some of the findings could not be acted upon given the nature of ED work that necessitates proximity (resuscitation and intermediate areas), the findings were contextualized and considered, resulting in some amendments to improve social distancing within the ED. For example, as the pantry was identified as high risk, mitigating steps were taken to limit access to a maximum of 8 people at any given time. Based on the social cluster trends coupled with the aforementioned location data explored on the dashboard, the ED additionally dedicated new areas for doctors and nurses within the ED that promoted dining separately and limited the number of individuals gathering in a closed and confined space. This implementation further facilitated the separation of social interaction between different groups of health workers (doctors, nurses, admin). Finally, due to the proximity information gathered on the Contra dashboard, transparent plexiglass was installed within the ED to separate diners at each table from one another, with tables spaced a bare minimum of 2 m apart, to circumvent the issue of proximity in regions where PPE and masks may be removed.

Strengths

This observational study highlighted and addressed multiple concerns that earlier contact tracing devices and applications had faced within an outbreak setting. For instance, the hardware solution resolved many of the personal data collection and privacy concerns around GPS location trackers. The standalone hardware device requires no linkage to any other mobile device and carries no personal user data onto the portal. Additionally, it does not collect any GPS location data on the user to track the user's every move, eliminating some key user privacy concerns.
Additionally, multiple contact tracing applications available on the phone, such as COVIDSafe in Australia and TraceTogether in Singapore, require Bluetooth energy consumption. While these Bluetooth signals allow for the capturing of contacts within the vicinity without any recall biases, they force the application to continuously run in the foreground in order to work effectively. This hardware solution circumvents the issue by relying on stand-alone beacons rather than being constrained by the limitations of the phone itself. Furthermore, as the elderly, who often are not technology-savvy, form a substantial proportion of presentations to the ED, it is critical that the device itself is user friendly: it requires no additional steps to activate it or power it off. The device automatically switches on after it has been tagged to the end-user and does not require any manual navigation on the user's end. In addition to being technologically user-friendly, the hardware is small and simplistic by design (Figure 8, Appendix A Figure A2), easily carried in the pockets or scrubs of healthcare workers or clipped on a ring and worn on a lanyard (Figure 9, Appendix A Figure A1). Hence, the device is hassle-free: easy to sanitize and charge before and after use. Lastly, from a cost perspective, the total solution cost is under 10 dollars per user per month, inclusive of all software and hardware, if mass deployed at scale.

Limitations

There were several limitations of the study, which served as learning points for the team.

1. Battery/Charging Issues

The battery life of the device lasted up to a maximum of 10 h; effective data would have been lost if these devices were used past 10 h. In addition, it was not possible to charge the device while it was being used. Enhancements to the battery and the use of low-energy solutions may be able to prolong the battery life of the devices.

2. Lack of Prompts

The lack of visual, audio, or tactile notifications from the device reduced the ability to prompt the individual carrying it to be aware of a potential breach of social distancing measures. Future iterations could consider adding useful vibratory, light, and sound features to enable the prevention of prolonged close contact, rather than needing to remedy the situation later.
3. Device Tagging and Untagging Issues

During Phase 1 of the pilot study, the handling of the devices (issuing at the start of each shift, collection at the end of the shift, accounting for each device, and recharging the devices) was outsourced to an administrative staff member. To ensure that the process of issuing the devices personally to a large number of staff at the start of each shift was not time-consuming, the device allocation was planned out beforehand by the administrative staff and then tagged on the backend onto the portal. This put a strain on the administrative staff to run the process. Specific improvements were thus made to address the administrative strain in Phase 2. The introduction of the QR code method for signing in, self-tagging, and untagging of the devices improved the end-user experience and ensured that more accurate data were reflected in real time as well.

4. Compliance Issues

Inability to Manage the Devices during Peak Hours

In Phase 1, the additional steps required for doctors to manually tag the patients onto the portal following a COVID-19 swab test were relatively time-consuming during peak periods, when the patient load was high and there were more urgent issues to attend to while on clinical duty. This could have led to a loss of movement and contact data, as the doctors were unable to practically comply with entering the details onto the portal and issuing the device to the patient. However, this was resolved within Phase 2, as previously described.

End-User Behaviour and Attitudes

The self-service method of assigning and revoking the devices (Phase 2) led to multiple occasions whereby the end-users either forgot to tag themselves despite being handed the device at the start of the shift or forgot to untag the devices before returning them. This resulted in inaccurate real-time data being collected and represented. However, we were able to filter by end-time to allow such inaccuracies to be removed from the summary display.

Recommendations

If the device were to be mass produced, there could potentially be an option for every staff member to be responsible for their own device, bringing it to work every shift and self-assigning or revoking it when necessary. In addition, the device could also have better battery life, with simple power on/off buttons and indicators for when a device is faulty and requires troubleshooting. There could also be an alarm indicator for distance alerts to remind users to be wary of physical distancing, or to alert when the battery is running low or is fully charged. The bigger challenge remains in bringing about behavioral change amongst end-users with regard to using the device, especially for the tagging of patients suspected of COVID-19. If a practical and fuss-free solution can be rolled out for the execution of the patient tagging process, perhaps it would improve the compliance rate of the end-users involved. Issuing and tagging the device during the initial patient administration process may be a feasible solution. In a hectic work environment such as the ED, there are a myriad of situations in which congregation is inevitable, such as when more attention is required for sicker patients and when medical interventions are time sensitive. In these cases, social proximity is necessary and unavoidable. Hence, the role of the beacon in prompting physical distancing may be less appropriate here, yet it may still be a good reminder for individuals interacting in time periods after their respective shifts.
Future Enhancements

(1) Battery Life

Future enhancements could consider streamlining the end-to-end assignment of devices further, as well as improving the battery life of the hardware itself, to obtain more accurate data within the ED. While Phase 2 introduced the process of self-tagging instead of manual administrative delegation, which simplified the process and improved the accuracy of the data, there is still more potential to streamline the method of device delegation further to the point of having no issues at all. This could be done by improving the recharging process of the hardware, which is manual at the present moment and involves extra time for admin staff in Phase 1 and individual users in Phase 2. In addition, extending the limited battery life of the hardware would remove several of the anomalies in the data with respect to the times and clusters of interaction discussed previously.

(2) Visual, Audio, and Tactile Cues

Future enhancements to the hardware solution itself could incorporate visual, audio, or tactile cues to notify and alert individuals directly of possible social distancing violations. For example, if an individual is within 1 m of another individual, the hardware could take the extra step of notifying the individuals in a discreet manner so that social distancing compliance may be achieved instead of merely tracked in the event of a COVID-positive case. However, as mentioned previously, proximity may be unavoidable in intermediate care and resuscitation areas, and these cues may be a source of distraction from patient care. Hence, future enhancements could consider "activating" the cues only in specific high-traffic areas where proximity is not clinically required, such as the pantry within the ED.

Conclusions

The study was key in evaluating the effectiveness of this "real-time" standalone solution in a real-world outbreak setting. In addition to resolving numerous issues experienced with alternative solutions, such as data privacy and security and Bluetooth discrepancies, the key features of the hardware include a contact tracer platform, real-time time-day trends, and location trends within the ED. Despite some technical, usability, and logistical issues, the study suggests that this hardware solution is a robust and effective location tracking, contact tracing, and safe distancing tool within the ED setting that must be explored further. As no patients or staff that were tracked were identified as COVID positive throughout the study, there was no case in which the contact tracer platform needed to be used for contact tracing: identifying healthcare workers or other patients who were in proximity to, or spent a long time interacting with, a given positive patient. While the platform could therefore not be exercised in the context of this study, it may be a useful and efficient contact tracing tool, and it proved to be useful and was clinically validated in a separate study on foreign workers in a dormitory setting in Singapore that is in the process of being reported [17].
2022-03-23T15:21:34.517Z
2022-03-21T00:00:00.000
{ "year": 2022, "sha1": "3fab3afdd272fd30b8dd2da15ed52e576358811b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-8112/2/3/30/pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8b90eb209863f7c2132fad575cc0aa1ed2a3c4e1", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
86101857
pes2o/s2orc
v3-fos-license
Effects of high-temperature–short-time (HTST) drying process on proteolysis, lipid oxidation and sensory attributes of Chinese dry-cured chicken

ABSTRACT

The objective of this study was to accelerate the drying process of Chinese dry-cured chicken using high temperatures. Salted chicken samples were treated with different high-temperature–short-time (HTST) combinations (50°C/27 h, 55°C/25 h, 60°C/23 h, 65°C/21 h). The effects of the various high temperatures on proteolysis, free amino acids (FAAs), thiobarbituric acid reactive substances (TBARS), and sensory attributes were analyzed. The results revealed that high temperature accelerated lipid oxidation, protein oxidation, and proteolysis without producing undesirable flavors. By using the HTST process, the Warner–Bratzler shear force (WBSF) values were significantly increased (P < 0.05), and the scores for color, aroma, and taste were enhanced. The best sensory attributes were obtained with the 55°C/25 h-treated samples. Moreover, samples treated with 55°C/25 h had the highest TBARS values (1.32 mg MDA/kg) and total FAA content (4693.2 mg/kg muscle). Therefore, the use of high temperature is an effective way to accelerate the dry-curing process and improve the sensory qualities of Chinese dry-cured chicken.

Introduction

Dry-cured chicken is a traditional dry-cured meat product made in southeast China and is famous for its unique cured flavor. The traditional method for making dry-cured chicken involves natural maturation, which is climate limited and time consuming. Increasing the drying temperature is a potential way of shortening the process of producing dry-cured meat products (Arnau, Serra, Comaposada, Gou, & Garriga, 2007) and has been successfully applied to shorten the drying period of Jinhua ham (J. Zhang, Jin, Wang, & Zhang, 2011). The Chinese sausage industry has also widely used high-temperature dehydration procedures (50-55°C) to accelerate the process (Feng et al., 2014; Sun, Cui, Zhao, Zhao, & Yang, 2011; L. Zhang, Lin, Leng, Huang, & Zhou, 2013). However, increasing the temperature may significantly influence the chemical and biochemical reactions during the drying period, leading to changes in the sensory attributes or deterioration of the finished products. Thus, understanding the effect of high temperature on the physical and chemical characteristics of the product is significant for manufacturers seeking to improve the profit margin and food quality of Chinese dry-cured chicken. During the drying period, muscle proteins and lipids are hydrolyzed mainly by endogenous enzymes, resulting in increased amounts of peptides, free amino acids (FAAs), and free fatty acids (Toldrá, Flores, & Sanz, 1997). These products constitute the main dry-cured flavor substances and may continue to react with one another or be hydrolyzed to produce volatiles that contribute to the unique aroma of the final product (Barbieri et al., 1992). The effect of temperature during the drying period has been described in previous studies (Gou, Morales, Serra, Guàrdia, & Arnau, 2008; Rubio-Celorio, Garcia-Gil, Gou, Arnau, & Fulladosa, 2015; Sánchez-Molinero & Arnau, 2014). Indeed, temperature is an essential factor because it affects the action of endogenous muscle peptidases, which play an important role in proteolysis (Mora et al., 2015). Martin et al. (1998) observed that the drying temperature determines the levels and the types of compounds released via protein breakdown during the dry curing of Iberian hams.
These changes in protein composition contribute to the texture and to the sensory and nutritional quality of meat products (Visessanguan, Benjakul, Riebroy, & Thepkasikul, 2004). However, high temperature is an important factor accelerating lipid oxidation, which also affects the formation of taste and odor compounds and is the main reason for off-flavor, rancidity, or textural modification of dry-cured meat products (Broncano, Petrón, Parra, & Timón, 2009; Harkouss et al., 2015). The positive effect of high-temperature ripening on the lipolysis and lipid oxidation of Jinhua ham has previously been reported (J. Zhang et al., 2011). However, to the best of our knowledge, few studies have investigated the use of the HTST process for producing dry-cured chicken or its effects on the product's proteolysis and sensory qualities. In this study, the HTST drying process was studied as an alternative method to accelerate the processing of Chinese dry-cured chicken. The purpose of this work was to study the influence of HTST on the proteolysis, lipid oxidation, and sensory attributes of dry-cured chicken compared with the traditional drying method, to determine the feasibility of using high temperature to reduce the production time of Chinese dry-cured chicken, and to optimize the process parameters.

Materials

Chinese native three-yellow-chickens were uniformly slaughtered according to the guidelines of the Animal Experimental Special Committee of Nanjing Agricultural University (NAU), which governs the use of experimental animals. Twenty chicken breasts were collected and subjected to trimming, cleaning, and freezing at −20°C prior to use.

Dry-cured chicken preparation and sampling

After thawing at 0~4°C for 8 h, the chicken breasts were immersed in precooled curing water at 0~4°C for 20 h (curing formulation: 300 g of salt in 3 kg of water). The salted breasts were hung in a preheated oven (KBF 115pgm, Binder, Germany). Control samples were treated with one of the traditional methods used by a local factory: drying for 7 days at 15°C with 70% relative humidity. HTST-treated samples were treated with various temperature-time combinations to achieve the same moisture content as the control: 50°C/27 h, 55°C/25 h, 60°C/23 h, and 65°C/21 h (RH = 70%). Immediately after the drying process, the samples were cooled to room temperature for 1 h and vacuum-packaged (DC-800, Promarks Inc., USA) in plastic vacuum packaging bags. Four samples from each treatment were randomly selected for the evaluation of the moisture content and water activity. Eight samples from each treatment were randomly selected and kept at 4°C, of which four were used for the Warner-Bratzler shear force (WBSF) analysis and four were used for the sensory evaluation. All of the WBSF and sensory evaluations were performed the day after the samples were produced. Four samples from each treatment were randomly selected for the analysis of their chemical properties. The remaining samples were kept at −20°C for further use. The entire production procedure was replicated three times at different time points.

Determination of moisture content and water activity

Moisture content was determined according to the method specified in ISO 1442 (1997). The samples were dehydrated in an oven (DHG-903385-III, Shanghai CIMO Medical Instrument Manufacturing Co., Ltd., Shanghai, China) at 105°C to a constant weight. Water activity (aw) was detected at 25°C using a water activity meter (LabMaster-aw, Novasina AG, Switzerland).
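As a worked illustration of the gravimetric moisture determination above (drying to constant weight at 105°C), the wet-basis moisture content follows directly from the mass loss. The sketch below uses hypothetical sample masses; the exact weighing procedure is as specified in the cited standard.

```python
def moisture_content_percent(wet_mass_g: float, dry_mass_g: float) -> float:
    """Wet-basis moisture content (%) from the sample mass before and after oven drying."""
    if dry_mass_g > wet_mass_g:
        raise ValueError("dry mass cannot exceed wet mass")
    return 100.0 * (wet_mass_g - dry_mass_g) / wet_mass_g

# e.g. a 10.00 g sample weighing 5.45 g after drying to constant weight
print(round(moisture_content_percent(10.00, 5.45), 1))  # 45.5 (% moisture)
```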
Instrumental texture analysis

Tenderness was evaluated via WBSF analysis according to Jose M. Lorenzo, Bermúdez, et al. (2015) with slight modifications. The samples were placed in vacuum bags (unsealed) and heated to 70°C in a water bath (72°C). After being chilled at 4°C for 8 h, four 25 mm × 10 mm × 10 mm (height × width × length) cores were removed from each sample parallel to the muscle fiber direction. Each core was cut vertically in the direction of the muscle fibers using a Warner-Bratzler shear blade. The WBSF data were obtained using a texture analyzer (TA-XTplus, Stable Microsystems, UK).

Determination of thiobarbituric acid-reactive substances (TBARS)

The thiobarbituric acid-reactive substances (TBARS) concentration was determined according to Salih, Smith, Price, and Dawson (1987) with slight modifications. A 5 g minced sample was homogenized with 25 mL of cold (4°C) extraction solution containing 20% perchloric acid, 20 mL of distilled water, and 0.50 mL of butylated hydroxytoluene (BHT) in a Virtis homogenizer at 10,000 rpm for 1 min. The homogenate was centrifuged at 2000 g for 10 min at 4°C. The blended sample was filtered into a 50-mL Erlenmeyer flask. The filtrate was adjusted to 50 mL with distilled water, and 2 mL of the filtrate was added to 2 mL of 0.02 M TBA. Test tubes were heated in a thermostatically controlled water bath for 30 min at 95°C to develop the malonaldehyde-TBA complex and then cooled for 5 min with cold tap water. The absorbance was determined at 532 nm using a multifunctional microplate reader (Model Spectral Max M2e, MD, USA) against a blank containing 2 mL of 10% perchloric acid and 2 mL of 0.02 M TBA reagent. The TBARS concentration was calculated from a standard curve prepared in triplicate using solutions of 1,1,3,3-tetraethoxypropane (TEP). The results were expressed as mg malonaldehyde (MDA) equivalents per kg of meat sample.

Protein carbonyls

Protein carbonyl content was evaluated according to the method described by L. Zhang et al. (2013). Carbonyl groups were reacted with 2,4-dinitrophenylhydrazine (DNPH) to form protein hydrazones, which were detected by measuring the absorbance at 370 nm in a spectrophotometer (UV-2450, SHIMADZU, Japan). Protein concentrations were calculated using a standard BSA assay by measuring the absorbance at 280 nm. The content of carbonyl groups was expressed as nmol carbonyl/mg protein using an extinction coefficient of 21.0 mM−1 cm−1.

Proteolysis

The protein composition was fractionated according to the method described by Sun et al. (2011) with slight modifications. First, 5 g minced samples were homogenized with 50 mL of phosphate buffer A (15.6 mM Na2HPO4 and 3.5 mM KH2PO4, pH 7.5) at 8000 rpm for 1 min in an ice bath. The homogenate was centrifuged at 5000 g for 15 min at 4°C. The extraction was repeated twice. The supernatants, which contained the water-soluble proteins, were combined. Then the remaining pellet was homogenized with 50 mL of phosphate buffer B (0.45 M KCl, 15.6 mM Na2HPO4 and 3.5 mM KH2PO4, pH 7.5) at 8000 rpm for 1 min in an ice bath and centrifuged at 5000 g for 15 min at 4°C. The extraction with phosphate buffer B was repeated twice, and the supernatants were combined to obtain the salt-soluble proteins. The concentrations of water-soluble and salt-soluble proteins were determined with a BCA Protein Assay Kit (Pierce, USA).
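The carbonyl quantification described above is a direct Beer-Lambert calculation with the quoted extinction coefficient (21.0 mM−1 cm−1). A minimal sketch, assuming a 1 cm cuvette pathlength; the example absorbance and protein concentration are hypothetical.

```python
def carbonyl_nmol_per_mg(a370: float, protein_mg_per_ml: float,
                         pathlength_cm: float = 1.0,
                         epsilon_mM_cm: float = 21.0) -> float:
    """Carbonyl content (nmol carbonyl / mg protein) from the hydrazone absorbance
    at 370 nm, using the Beer-Lambert law with the quoted extinction coefficient."""
    conc_mM = a370 / (epsilon_mM_cm * pathlength_cm)  # mmol/L, i.e. umol/mL
    conc_nmol_per_ml = conc_mM * 1000.0               # nmol/mL
    return conc_nmol_per_ml / protein_mg_per_ml

# e.g. A370 = 0.084 with 2.0 mg/mL protein -> about 2.0 nmol carbonyl per mg protein
print(round(carbonyl_nmol_per_mg(0.084, 2.0), 2))
```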
The samples were then mixed with treatment buffer (125 mmol/L Tris, 40 g/L sodium dodecyl sulfate (SDS), and 250 g/L glycerol), heated at 50°C for 20 min, and then stored at −80°C for subsequent sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) according to the method described by Etlinger, Zak, and Fischman (1976). The gels were scanned with a GT-800F scanner (EPSON), and then the densities of the targeted bands were analyzed with Quantity One software (Bio-Rad).

Free amino acids

Free amino acids (FAAs) were analyzed according to the procedures described by Aro et al. (2010).

Sensory evaluation

Sensory evaluation was performed by an experienced sensory panel composed of 17 members of the National Centre of Meat Quality and Safety Control. The samples subjected to each treatment were cooked in boiling water for 30 min and then cooled to room temperature. The chicken breasts were then sliced into pieces with thicknesses of approximately 5 mm and placed on separate white ceramic plates. Each treatment was identified with a random three-digit code. The panelists were instructed to gargle between evaluations to reduce any effects of other samples. All of the tasting sessions were conducted at the same time of each test day in a quiet room with a mixture of natural and fluorescent light, and with no interactions between panelists. Each panelist was asked to evaluate the sensory attributes of the chicken samples of all five treatments, including the color, aroma, taste, and texture. A 9-point hedonic scale was applied (Lim, 2011): 1, dislike extremely; 2, dislike very much; 3, dislike moderately; 4, dislike slightly; 5, neither like nor dislike; 6, like slightly; 7, like moderately; 8, like very much; and 9, like extremely.

Statistical analysis

The entire experiment was replicated three times at different times, and a completely randomized design was used. All of the data from the three replicates were analyzed using Excel 2007 (Microsoft, Washington) and SPSS software (SPSS Inc., Chicago, IL, USA). Differences among individual means were compared by Duncan's multiple range test. Effects were considered significant at P < 0.05.

Moisture content and water activity (aw) analysis

Increasing the temperature did not significantly affect (P > 0.05) the moisture content or aw of the dry-cured chicken (Table 1). Moisture is an important factor in the standardization of the product; it not only affects the final appearance and juiciness but also has great economic importance to the industry. Generally, a higher temperature increases the effective water diffusivity and facilitates the migration of water (Sánchez-Molinero & Arnau, 2014). Therefore, the drying time of each HTST group was adjusted to standardize the moisture of the samples to that of the control. The value of aw indicates the unbound and free water that is available to support the chemical and biological reactions in a system, especially the growth of microorganisms (Feng et al., 2014). No significant differences were found between the aw values, indicating that HTST did not decrease the storage properties of the dry-cured chicken. This finding was consistent with previously reported results for dry-cured hams (Costa-Corredor, Serra, Arnau, & Gou, 2009).
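The treatment comparisons reported in the results sections rest on the multiple-comparison procedure described in the statistical analysis above (Duncan's multiple range test, run in SPSS). As an illustration only, the sketch below runs a one-way ANOVA across five treatment groups and follows it with Tukey's HSD, a related but different post-hoc test, since Duncan's test is not available in scipy/statsmodels; the measurement values are invented and not taken from the study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical TBARS-style measurements, three replicates per treatment.
data = {
    "15LT": [0.90, 0.95, 0.92],
    "50HT": [1.10, 1.05, 1.12],
    "55HT": [1.30, 1.35, 1.31],
    "60HT": [1.20, 1.18, 1.22],
    "65HT": [0.98, 1.02, 0.99],
}

# One-way ANOVA across the five treatments (significance taken at P < 0.05).
f_stat, p_value = stats.f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons; Tukey's HSD stands in for Duncan's multiple range test.
values = np.concatenate(list(data.values()))
groups = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```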
Tenderness determined by WBSF

Elevating the drying temperature significantly increased (P < 0.05) the WBSF values in the HTST-treated samples (Table 1). The WBSF values of the control group were significantly higher than those of 50HT (P < 0.05) but significantly lower than those of 65HT (P < 0.05), and there was no significant difference between the control and 55HT~60HT (P > 0.05). WBSF is related to tenderness, which is one of the most important sensory characteristics of meat products (Cai, Chen, Wan, & Zhao, 2011). Huang, Huang, Xu, and Zhou (2011) observed a decrease in tenderness when the temperature was increased from 40°C to 60°C, which was consistent with our study. Increased temperature may strengthen the myofibrillar protein networks or cause protein denaturation, leading to toughness (Bouton, Harris, & Shorthose, 1982). However, an appropriate high temperature may also contribute to tenderness by solubilizing the collagen and connective tissue (Bouton et al., 1982), which may explain the lower WBSF value in 50HT. Moreover, Christensen, Purslow, and Larsen (2000) observed an increase in tenderness over the same temperature interval and attributed it to collagen solubility, which was not consistent with our results. This was most likely because of the relatively low collagen content in the longissimus dorsi of chicken breast. Excessive tenderness or toughness may have a negative effect on the texture of the final product (Ishiwatari, Fukuoka, & Sakai, 2013). The relationship between the WBSF values and the texture of Chinese dry-cured chicken was analyzed in the following sensory evaluation.

Lipid oxidation determined by TBARS

Higher drying temperatures significantly influenced (P < 0.05) the TBARS values of the dry-cured chicken (Table 2). The results showed that the TBARS values increased between 50HT and 55HT and then gradually decreased between 55HT and 65HT. Yun, Shahidi, Rubin, and Diosady (1987) reported that lipid oxidation depends on the thermal processing temperature. Therefore, the initial increasing phase may be caused by the higher temperature, which was consistent with other studies (Broncano et al., 2009; Wang et al., 2013; Wenjiao, Yongkui, Yunchuan, Junxiu, & Yuwen, 2014). However, aldehydes are unstable and can be directly degraded into volatile compounds (Ventanas, Estévez, Delgado, & Ruiz, 2007) or interact with other groups of proteins (Jin et al., 2012), leading to the formation of fluorescent Schiff bases (Harkouss et al., 2015). The formation of such products prevents the reaction of aldehydes with TBA, explaining the subsequent decrease from 55HT to 65HT observed in our study. A similar decrease in TBARS was previously reported by Roldan, Antequera, Armenteros, and Ruiz (2014) in lambs. Several reports have shown that lipid oxidation plays an important role in the development of the typical dry-cured flavor (Barbieri et al., 1992; Ruiz, García, Muriel, Andrés, & Ventanas, 2002). There was no significant difference between the control group and 65HT (P > 0.05), indicating that HTST could achieve the same or even higher levels of lipid oxidation. However, excessive lipid oxidation may result in off-flavor in dry-cured meat products (Böttcher, Steinhäuser, & Drusch, 2015). In our study, the TBARS values were 0.92-1.32 mg MDA/kg sample, which was higher than the values obtained by Feng et al. (2014) in Chinese sausage and by Cilla, Martínez, Beltrán, and Roncalés (2006) in dry-cured ham, but lower than the threshold value for off-flavor (2 mg MDA/kg) reported by Wenjiao et al. (2014). The relationship between TBARS and the flavor of dry-cured chicken is further analyzed in the following sensory evaluation.
Protein oxidation determined by protein carbonyls

The results of the HTST groups were significantly higher (P < 0.05) than those of the control (Table 2), but increasing the temperature from 50HT to 65HT produced no significant effect (P > 0.05). Roldan et al. (2014) reported that protein carbonyls reached similar final values regardless of the heating temperature, which was consistent with our study. Moreover, the higher results in the HTST groups demonstrated the accelerating effect of high temperature on the protein oxidation rate, which was consistent with a previous study on lamb loins that were heated for 24 h at temperatures ranging from 60 to 80°C (Roldan et al., 2014). High temperature is known to enhance protein carbonylation because of several effects, such as the release of free catalytic iron and the formation and cleavage of hydroperoxides (Estévez, 2011). Protein oxidation during the ripening of meat products has also been suggested to be involved in the formation of Strecker aldehydes, which contribute to the dry-cured flavor (Toldra, 1998). The protein oxidation results detected here exceeded those recorded for Chinese-style sausage dried at 55°C for 48 h (L. Zhang et al., 2013). This may be due to the different oxidative stabilities of the varying protein compositions and the resistance of muscle fibers to thermal treatment (Ma, Ledward, Zamri, Frazier, & Zhou, 2007). Indeed, the formation of protein carbonyls from particular amino acid side chains contributes to the impairment of the myofibrillar protein conformation (Estévez, 2011).

Proteolysis

Electrophoretic analysis of the water-soluble proteins (Figure 1A and Table 3) and salt-soluble proteins (Figure 1B and Table 4) revealed a significant difference (P < 0.05) between the HTST groups and the control. Water-soluble protein bands with molecular weights of 110~230 kDa and 35~50 kDa decreased in the HTST groups, whereas those with a molecular weight of 58 kDa displayed a higher density in 50HT and 55HT but a decreased density in 60HT and 65HT. In addition, those with a smaller molecular weight of 16~17 kDa increased slightly as the temperature increased. In summary, the HTST process contributes to the degradation of water-soluble proteins with high molecular weights, whereas proteins with smaller molecular weights persisted in 50HT and 55HT. Salt-soluble proteins had greater thermal stability than water-soluble proteins (Figure 1B). Protein bands with high molecular weights of 55~230 kDa gradually decreased from 50HT to 65HT. Bands at 43~44 kDa and 35 kDa displayed higher densities in 55HT, whereas those at 18~20 kDa increased slightly as the temperature increased in the HTST groups, which is consistent with the water-soluble protein results. Proteolysis has an important effect on the texture, the taste, and, indirectly, the aroma development of dry-cured meat products (Toldra, 1998). Harkouss, Safa, Gatellier, Lebert, and Mirade (2014) reported that the rates of proteolysis are increased 3 or 4 times when the temperature is increased. In this study, high temperature resulted in the degradation of proteins with high molecular weights, which was consistent with previous research on Chinese sausage under similar drying conditions (Feng et al., 2014). The results demonstrated the positive effect of high temperature on proteolysis, which is related to the strong activity of muscle proteases in a certain temperature range (50-55°C) (Flores et al., 2006).
The additional protease activity caused by high temperature may induce two phenomena: (1) both water-soluble and salt-soluble proteins with high molecular weights are degraded, and (2) more native stromal proteins are most likely hydrolyzed into smaller peptides (Molina & Toldra, 1992) and FAAs that can directly contribute to the flavor of dry-cured meat products (Cordoba et al., 1994).

FAA content

Total FAA content was significantly (P < 0.05) affected by varying the temperature (Table 5). Samples treated with 55HT had a significantly (P < 0.05) higher total FAA content (4692.7 mg/kg muscle) and a higher concentration of each individual amino acid, except for tyrosine and taurine, which showed higher contents in 50HT. In addition, the total FAA content in 65HT was significantly (P < 0.05) lower than in the other groups. These differences were most likely attributable to the different activities of the proteolytic enzymes at different temperatures. The results of this study were generally lower than those reported in dry-cured ham (Martín, Antequera, Ventanas, Benítez-Donoso, & Córdoba, 2001; Virgili, Saccani, Gabba, Tanzi, & Soresi Bordini, 2007) and lacón (Lorenzo, Fonseca, Gómez, & Domínguez, 2015). The conversion of peptides into FAAs occurs during the last step of the proteolytic process involved in ripening and is mainly produced by cathepsins, calpains, and aminopeptidases (Toldrá et al., 1997). The activities of these enzymes are dependent on the temperature during the drying process (Toldrá et al., 1997; Zhao et al., 2005). Martín et al. (2001) observed that the drying temperature stimulated the proteolytic activity of cathepsin D and of exopeptidases of both muscle and microbial origin in Iberian ham, leading to the release of amino acids. Moreover, Zhao et al. (2005) reported that the activities of cathepsin B and cathepsin L during the processing of Jinhua ham increased as the temperature increased. In addition, Toldrá, Rico, and Flores (1992) concluded that high temperature (65 and 69°C for 15 min) inactivated both cathepsin B and arginyl hydrolyzing activities and reduced the cathepsin H, leucyl, and tyrosyl hydrolyzing activities (<4%), which could explain the low FAA content in 65HT in our study. At the end of the drying process, the major FAA was alanine, followed by lysine, glutamine, leucine, and arginine. The same FAAs have also been reported as the most abundant in dry-cured meat products such as Iberian ham (Cordoba et al., 1994; Martín et al., 2001), Parma ham (Sforza et al., 2001), dry-cured cecina (Lorenzo, Fonseca, et al., 2015), and lacón (Garrido, Domínguez, Lorenzo, Franco, & Carballo, 2012). The results also showed that the FAA with the lowest level at the end of the drying process was taurine, which was consistent with the results for lacón (Garrido et al., 2012). FAAs have been reported as precursors of sour, sweet, and bitter tastes in dry-cured ham (Cordoba et al., 1994). Some amino acids were correlated with specific ham tastes, such as glutamic acid with saltiness; alanine with sweetness; arginine, valine, and histidine with bitterness; tyrosine and lysine with aged taste; and leucine with acid taste (Careri et al., 1993). The combination of all of the FAAs contributes to the characteristic taste of dry-cured ham (Bermúdez, Franco, Carballo, Sentandreu, & Lorenzo, 2014). The higher FAA content in 55HT might make an important contribution to improving the flavor of dry-cured chicken. Besides, the FAAs may also act as flavor precursors in the generation of volatile aroma compounds.
Table 3. Relative values of water-soluble protein bands of Chinese dry-cured chicken treated with various methods. Tabla 3. Valores relativos de las bandas proteínicas solubles en agua del pollo curado chino tratado con diferentes métodos.

Band              15LT              50HT              55HT              60HT              65HT
(label missing)   0.29 ± 0.035 c    0.14 ± 0.020 a    0.20 ± 0.020 b    0.28 ± 0.007 c    0.15 ± 0.018 a
16 kDa            0.39 ± 0.024 a    0.43 ± 0.033 a    0.61 ± 0.025 b    0.78 ± 0.033 c    0.81 ± 0.045 c
(Only these two rows of the table body are recoverable from the extracted text; the column order follows the treatment list in the footnote.)

The relative value of the protein bands was calculated as the density of the targeted band under the different treatment conditions over the density of a reference band of 250 kDa in the marker, to avoid errors between different repetitions. HTST treatment: high-temperature-short-time drying treatment. 15LT: 15°C/7 d control; 50HT: 50°C/27 h HTST treatment; 55HT: 55°C/25 h HTST treatment; 60HT: 60°C/23 h HTST treatment; 65HT: 65°C/21 h HTST treatment. Means in the same row with different superscripts show a significant difference between treatments at P < 0.05.

Sensory evaluation

No undesirable flavors or tastes were observed by the panelists in the sensory evaluation of the samples (Table 6). For the color scores, a significant difference (P < 0.05) was found between the HTST groups and the control. The samples in the HTST groups were significantly more appreciated by the panelists for their light and fresh colors. Samples treated with 50HT, 55HT, and 60HT showed significantly higher scores for aroma (P < 0.05) than 65HT and the control. In addition, the 50HT and 55HT groups showed higher scores for taste. Thus, as the drying temperature increased, the scores for color, aroma, and taste followed a similar trend. The highest scores for texture were observed in 55HT and 60HT, whereas excessive toughness was indicated by the lower texture scores in 65HT. The sensory quality of the dry-cured meat product was affected by the biochemical reactions during the drying process. The scores revealed a significant effect of high temperature on color, aroma, taste, and texture, which may result from the accelerated lipid oxidation and additional proteolysis. Color is an important trait in food quality and is considered to be an indicator of meat freshness and doneness for consumers (Huang et al., 2011). The results showed that the samples treated with the HTST process obtained a higher color score because of an impression of brightness. An increase in brightness was also found in a study of dry-cured ham treated with the HTST process (Sánchez-Molinero & Arnau, 2014). For the aroma and taste scores, many researchers have reported that the variety of small peptides and FAAs produced in dry-cured meat products contributes to the aroma characteristics (Virgili et al., 2007) and taste properties and acts as water-soluble flavor precursors (Koutsidis et al., 2008). In addition, Careri et al. (1993) found that hams with the highest acceptability scores had high levels of free tyrosine and lysine. In our study, samples with a higher FAA content obtained a higher score for taste, which confirmed the correlation between FAAs and the taste of dry-cured meat products. Finally, texture is rated by consumers as the most important quality characteristic of meat (Shackelford et al., 2001). The scores for texture revealed that high temperature made no significant difference between 55HT, 60HT, and the control, but did result in remarkable toughness in 65HT, as indicated by its lower texture score. According to the results of the sensory evaluation, the best HTST treatment parameters should be 55HT (55°C/25 h).
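The Table 3 footnote above defines the relative band value as the target-band density divided by the density of the 250 kDa marker band on the same gel. A minimal sketch, with arbitrary, purely illustrative densitometry units:

```python
def relative_band_value(band_density: float, marker_250kda_density: float) -> float:
    """Relative band value: target-band density normalised to the 250 kDa marker band."""
    return band_density / marker_250kda_density

# e.g. a 16 kDa band at 3120 density units against a 250 kDa marker band at 8000 units
print(round(relative_band_value(3120.0, 8000.0), 2))  # 0.39
```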
Conclusion

By decreasing the drying time to less than 25 h, the HTST process effectively accelerated proteolysis, as reflected by the decrease in the large-molecular-weight protein bands of both the water-soluble and salt-soluble proteins. Tenderness was significantly (P < 0.05) affected by the temperature, as indicated by the increased WBSF values. The oxidation of proteins and lipids was accelerated by the HTST drying treatment, while no undesirable flavors or tastes were observed by the sensory panel. Samples treated at 55HT (55°C/25 h) exhibited the highest contents of total FAAs and of most of the individual FAAs. The color, aroma, and taste scores in the HTST groups were significantly (P < 0.05) higher than in the control. The best sensory attributes were observed in the 55°C/25 h-treated samples. In conclusion, applying high-temperature drying conditions is a novel method with great potential to accelerate the manufacture of Chinese dry-cured chicken and improve its sensory properties.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This research was funded by the Jiangsu science and technology support project [BE2014304].
2019-03-30T13:13:32.348Z
2016-01-14T00:00:00.000
{ "year": 2016, "sha1": "751f8c7227909552bc3c17928469bdd817d8eeb2", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19476337.2015.1124291?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "ace7a9008202265e3f2536607fd7b56d4cac5023", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
262063662
pes2o/s2orc
v3-fos-license
Modeling Myotonic Dystrophy Type 2 Using Drosophila melanogaster

Myotonic dystrophy 2 (DM2) is a genetic multi-systemic disease primarily affecting skeletal muscle. It is caused by a (CCTG)n expansion in intron 1 of the CNBP gene, which encodes a zinc finger protein. DM2 disease has been successfully modeled in Drosophila melanogaster, allowing the identification and validation of new pathogenic mechanisms and potential therapeutic strategies. Here, we describe the principal tools used in Drosophila to study and dissect molecular pathways related to muscular dystrophies and summarize the main findings in DM2 pathogenesis based on DM2 Drosophila models. We also illustrate how Drosophila may be successfully used to generate a tractable animal model to identify novel genes able to affect and/or modify the pathogenic pathway and to discover new potential drugs.

Introduction

Myotonic dystrophy type 2 (DM2, OMIM 602668) is a multi-systemic autosomal dominant disease that displays a wide spectrum of clinical manifestations, including proximal myotonia, degeneration of muscle fibers, cataracts, defective cardiac conduction, insulin resistance, and other endocrine disorders [1,2]. The genetic basis for DM2 is an unstable CCTG repeat on chromosome 3q21, in the first intron of the cellular nucleic acid-binding protein (CNBP) gene, also named ZNF9 (zinc finger protein 9; [3]). The cause of the unstable expansion is unknown; however, it is clear that the expanded DM2 alleles are strongly variable, with significant increases in length over time [4]. The size of the (CCTG)n repeat is below 30 repeats in normal individuals, whereas in DM2 patients it is between 75 and 11,000 CCTG repeats [3,4]. The typical onset of DM2 is in adulthood, with variable manifestations such as early-onset cataracts (before 50 years of age), various grip myotonias, thigh muscle stiffness, muscle pain, and weakness in the flexors of the fingers. These complaints often appear between 20 and 50 years of age [2].

The muscular defects observed in DM2 represent the predominant manifestation of the disease and encompass muscle weakness, myotonia (an inability of muscles to relax after contraction), and muscle atrophy over time. The muscles primarily affected by the condition tend to be those closer to the body (proximal muscles) [5]. The severity and the specific muscle groups involved can vary across individuals [5]. Furthermore, being a multi-system disorder, DM2 can affect organs and systems beyond the muscular system. The central nervous system involvement in DM2 has been the subject of extensive investigation in recent years, although several questions remain unanswered. There is evidence of neurodegeneration in DM2, especially in certain brain regions. Individuals with DM2 have reported cognitive impairments, including problems with attention, memory, and executive functions. These cognitive issues underscore central nervous system involvement and imply a neurodegenerative aspect of the disease [6]. While most of these insights have originated from clinical observations, comprehensive studies examining the nervous system's contribution to pathology and the interplay between neuronal and muscular degeneration have not yet been carried out.

To characterize the molecular mechanisms underlying DM2 pathogenesis, different vertebrate and invertebrate animal models have been successfully generated. Interestingly, Drosophila has emerged as a very reliable model for studies on DM2, since the observed phenotype is highly reminiscent of the human disease.
In this review, we will describe the principal tools used in Drosophila to study and dissect molecular pathways related to muscular dystrophies and summarize the main findings on DM2 pathogenesis based on DM2 Drosophila models. Finally, we will illustrate how Drosophila may be successfully used to generate a tractable animal model to identify novel genes able to affect and/or modify the pathogenic pathway and to discover new potential drugs.

DM2 Pathogenesis

The pathogenic mechanism of DM2 is still not fully understood. There are three main hypotheses of how the CCTG repeat expansion results in the disease's manifestation (Figure 1).

CNBP Protein Loss of Function

According to some studies [7-12], the CCTG expansion localized in the first intron of CNBP affects its expression in cis by forming dsDNA secondary structures that alter transcription [12] or by inducing nuclear sequestration of the expanded transcripts [7], leading to haploinsufficiency; indeed, mice carrying homozygous or heterozygous deletion of the CNBP allele develop clinical manifestations strongly reminiscent of DM2 myopathy [11]. Similarly, the silencing of CNBP in Drosophila muscle tissues causes severe locomotor defects that can be fully rescued by reconstitution with either Drosophila CNBP or its human counterpart [13]. However, while some studies reported that CNBP protein levels are significantly reduced in muscle of DM2 patients, other works failed to observe such a reduction [7-11,14], most likely as a consequence of the limited sample sizes and the variability of the disease.
CNBP is a highly conserved ssDNA-binding protein [15] involved in the control of transcription, by binding to ssDNA and unfolding G-quadruplex DNAs (G4-DNAs) in the nucleus, or of translation, by binding to mRNA and unfolding G4-related structures in the cytosol [8,16-20]. Thus, CNBP protein deficiency can also affect CNBP targets, correlating with the pathogenesis of DM2. In line with this view, we have recently demonstrated that CNBP is involved in polyamine biosynthesis by regulating the translation of ornithine decarboxylase (ODC; [13]), a key regulator of the metabolism of polyamines.

Toxic Gain of Function of mRNA from Expanded Repeats

The CCTG expansion can be transcribed bidirectionally, resulting in the generation of both a sense and an antisense transcript [21,22]. The accumulation of these transcripts can give rise to a toxic expanded RNA that has been proposed to have three main gain-of-function pathological mechanisms: (1) formation of toxic repeated RNA foci; (2) splicing defects related to defective functions of RNA-binding proteins, such as the muscleblind-like proteins (MBNL1-3) and CUG-binding protein 1 (CUG-BP1) [23,24]; and (3) a recently discovered retention of the long intron 1 in the CNBP mRNA. Retention of intron 1 has been found in different DM2 patient-derived cells, suggesting that CCUG expansions can have an inhibitory effect on CNBP pre-mRNA splicing by altering the RNA structure and/or the access of splicing factors to intronic regulatory regions [25].

Tetrapeptide-Repeat Protein (TPR)-Mediated Toxicity

The intronic CCUG expansion in the CNBP mRNA can undergo non-canonical Repeat-Associated Non-AUG (RAN) translation [21,26,27], producing two different tetrapeptide repeat proteins (TPRs: LPAC and QAGR) that disrupt cellular homeostasis [21]. The two TPRs are produced by the bidirectional translation of the CCUG expansion, producing the LPAC tetrapeptide (leucine-proline-alanine-cysteine) in the sense direction and the QAGR tetrapeptide (glutamine-alanine-glycine-arginine) in the antisense direction. Both LPAC and QAGR have been found to accumulate in brain biopsies from DM2 patients and seem to be responsible for at least some of the neurological features in people affected by myotonic dystrophy type 2 [21,28].

Each of these three potential mechanisms of toxicity is likely to contribute to disease initiation and progression; however, it is unclear to what extent each of them contributes to the development and the clinical manifestations of the disease and how they interact with each other or whether they act synergistically [22]. It has recently been proposed that in the early stage of the disease the main DM2 pathogenic mechanisms are CNBP haploinsufficiency and the RNA toxic gain of function, while later the toxic mRNAs are transported to the cytoplasm, where RAN translation occurs, leading to the production of toxic peptides and to a worsening of the phenotype [22].

In addition to DM2, there is another form of myotonic dystrophy, DM type 1 (DM1, Steinert's disease, MIM 160900), caused by an expansion of CTG repeats in the 3′ untranslated region of the DM protein kinase (DMPK) gene [29]. DM1 and DM2 display several similarities in clinical features, although DM2 lacks a congenital or early-onset form.
The finding that these two distinct mutations cause largely similar clinical syndromes has highlighted that they share similar molecular mechanisms [30]. However, additional pathogenic mechanisms like changes in gene expression, microRNA, epigenetic modifications, protein translation, and metabolism may contribute to disease pathology and clarify the phenotypic differences between these two types of myotonic dystrophies [1].

Drosophila melanogaster as a Tool to Study Neuromuscular Disorders

Drosophila melanogaster is a powerful animal model that can be used for genetic studies of human diseases. Fruit flies share around 75% of human disease-related genes [31,32], a similarity that makes Drosophila an excellent in vivo model system capable of revealing novel mechanistic insights into human disorders, providing the foundation for translational research and the development of therapeutic strategies [33,34]. In recent years, Drosophila has emerged as an excellent model organism for human neurodegenerative and neuromuscular disorders [35]. The fruit fly offers multiple advantages for the investigation of the molecular mechanisms of this kind of disease. In particular, the large progeny and the short life cycle allow for rapid study of the effects of genetic mutations on the neuromuscular system over the course of life [33-35]. Several biological assays have been developed for analyzing the possible role of genetic and/or chemical modifiers in the pathogenesis of the diseases (Figure 2).
In order to determine movement defects in fly models for neuromuscular diseases such as DM2, it is possible to evaluate individual locomotor capabilities in wandering larvae or adult flies. One of the most common and convenient bioassays is the analysis of larval peristalsis. A peristaltic wave is a muscle contraction that propagates along the animal body and involves the simultaneous contraction of the left and right side of each segment, allowing larval movement [38]. To analyze the motility of Drosophila larvae, it is possible to quantify different parameters such as the number of larval peristaltic waves performed in 1 min, the distance covered by each larva in the time unit (speed of larval locomotion), and the duration of the peristaltic wave (Figure 2A; [13,39]). To assess movement capabilities in Drosophila adults, a widely used locomotion assay is the climbing assay, in which locomotion performance can be assessed using the fly negative geotactic response. In this test, an equal number of males or females of the desired ages is placed into a conical tube, flies are tapped down to the bottom of the tube, and their subsequent climbing activity is quantified as the percentage of flies reaching the top of the tube in 10 s (Figure 2B) [40,41]. Other parameters that can also be evaluated to measure locomotor capabilities in adult flies are the distance covered by each fly in the time unit (fly speed) or the decrease in locomotor performance on repetition of the test (fatigue) [42,43]. The outcomes derived from these tests are also regarded as indicators of muscle functionality.
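The locomotor read-outs described above are simple counts and ratios. The following is a purely illustrative sketch, not code from any of the cited studies; the function names and example values are hypothetical.

```python
# Illustrative helpers for the larval crawling and adult climbing read-outs.
def crawling_speed(distance_mm: float, interval_s: float = 60.0) -> float:
    """Larval crawling speed as distance covered per time unit (mm/s)."""
    return distance_mm / interval_s

def peristaltic_rate(wave_count: int, interval_s: float = 60.0) -> float:
    """Number of peristaltic waves per minute of observation."""
    return wave_count * 60.0 / interval_s

def climbing_index(n_flies_at_top: int, n_flies_total: int) -> float:
    """Negative-geotaxis score: % of flies reaching the top of the tube in 10 s."""
    return 100.0 * n_flies_at_top / n_flies_total

if __name__ == "__main__":
    print(crawling_speed(distance_mm=45.0))                       # 0.75 mm/s
    print(peristaltic_rate(wave_count=38))                        # 38 waves/min
    print(climbing_index(n_flies_at_top=14, n_flies_total=20))    # 70.0 %
```

Such scores are typically averaged across replicate groups of larvae or flies before genotypes are compared statistically.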
The decline in locomotor function is also a prominent feature of aging, and it is evident that aging progressively modifies the physiological balance of the organism, increasing the susceptibility to neuromuscular degenerative diseases [44,45].However, how aging interconnects with disease-causing genes is not well known.Mutation in disease-related genes in Drosophila can also affect the lifespan and accelerate aging; therefore, it is crucial to analyze survival modifiers using the comparison of survival curves.A survival curve is a graphical representation of the proportion of a population that survives over time [46].The fruit fly is a highly advantageous model organism for studying the mechanisms of aging due to its relatively short lifespan, cost-effective breeding, and large number of progeny [33,34].To measure longevity in Drosophila and to generate a survival curve, groups of flies are maintained under tightly controlled environmental conditions, such as temperature, humidity, and light cycle, and their survival is monitored over time.The survival of the flies is assessed by counting the number of living and dead flies at regular intervals.The data collected from these observations are plotted on a graph where the shape of the curve provides insights into the aging process and whether mutation in specific disease genes affects the lifespan (Figure 2C) [13,35,47,48]. Neurodegeneration can also be easily analyzed in Drosophila eyes.The compound eye of the fruit fly is composed of about 800 repeating subunits called ommatidia, each of which consists of an ordered hexagonal array of 8 photoreceptor neurons, so precise that it is often referred to as a "neurocrystalline lattice" [49][50][51].This rigid organization allows us to exactly evaluate the effect of altered gene expression and mutated proteins on the external morphology of the eye and to detect slight alterations in ommatidia geometry due to cellular degeneration ([35] and references therein).Notably, the eye ommatidia array is disrupted when toxic proteins are expressed during development, allowing, for example, the use of an eye roughness assessment to identify modifiers of RAN-translated peptide toxicity [52,53].Although eye degeneration is not a prominent feature of DM2, the external eye also offers a rapid readout for genetic screens of genes possibly involved in neuromuscular disorders, as the degenerative eye can show disruption of the ommatidial structure, reduced size, and loss of pigmentation, which can easily be viewed using a dissecting microscope.Of note, the morphology of the eye can be dramatically disrupted without compromising the overall health of the fly (Figure 2D).Using eye-specific GAL4 drivers (GMR-GAL4), both disease genes or candidate modifiers can be expressed specifically in the eye, and the effects of highly toxic genes or proteins can be assessed in adult flies without lethality concerns. 
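For the lifespan assay described above, the survival curve and a between-genotype comparison can be produced with standard survival-analysis tooling. The sketch below is illustrative only: it assumes the third-party Python package lifelines (one of several options), and the death days, censoring flags and the "CNBP knockdown" label are hypothetical.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Day of death for each fly; 1 = death observed. Flies still alive at the end
# of the experiment would instead be right-censored with a 0 flag.
control_days = np.array([62, 65, 70, 71, 75, 78, 80])
mutant_days = np.array([40, 44, 47, 51, 53, 58, 60])
control_observed = np.ones_like(control_days)
mutant_observed = np.ones_like(mutant_days)

# Kaplan-Meier survival curves for the two groups.
kmf = KaplanMeierFitter()
kmf.fit(control_days, event_observed=control_observed, label="control")
ax = kmf.plot_survival_function()
kmf.fit(mutant_days, event_observed=mutant_observed, label="CNBP knockdown")
kmf.plot_survival_function(ax=ax)

# Log-rank test for a difference between the two survival curves.
result = logrank_test(control_days, mutant_days,
                      event_observed_A=control_observed,
                      event_observed_B=mutant_observed)
print(result.p_value)
```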
Once a new gene is identified as an eye neurodegeneration modifier, it is crucial to subsequently evaluate its function in other tissues that might be more characteristically affected in neuromuscular disorders, such as the brain or muscle.The GAL4/upstream activating sequence (UAS) system is a highly potent tool for precise gene expression.It relies on the properties of the yeast GAL4 transcription factor to activate the transcription of targeted genes by binding to UAS cis-regulatory sites.Drosophila strains have been genetically modified to incorporate both components, providing a wide array of combinations.This system is versatile and can be utilized for both gene silencing or expression in specific tissues or developmental stages [54]. Neuromuscular diseases, including myotonic dystrophies, are often characterized by muscular defects, including muscle atrophy and myotonia; thus, it is essential to analyze muscle structure and physiology [55].In this regard, the muscle fillet of Drosophila larvae is a commonly used tissue for studying muscle development and function.The larval muscle fillet can be stained using a variety of techniques to visualize muscle structure and specific markers.For example, fluorescent dyes, such as rhodamine phalloidin, can be used to label muscular actin filaments, while antibodies against specific muscle proteins can be used to identify cell types or structural components [36,56].Confocal microscopy is then used to capture high-resolution images of the muscle fillet, allowing the analysis of muscle structure and function at the cellular and subcellular levels (Figure 2E) [36].At the functional level, neurophysiological techniques involving the neuromuscular junction (NMJ) can provide insights into the communication between motor neurons and muscles.The NMJ is a specialized synapse connecting a motor neuron to a muscle fiber, leading to muscle contraction.Parameters, such as the number and branching pattern of neuronal connections to the muscle, are often analyzed to evaluate neuromuscular defects in Drosophila larval fillets [57].In addition, electromyography (EMG), a technique based on recording the electrical activity of muscles in response to nerve stimulation [57], can be used to investigate the NMJ activity.To the best of our knowledge, such techniques have not been utilized yet in Drosophila models of DM2. Defects in muscle development and function can also be evaluated in adult flies by analyzing adult flight muscles.To this end, dorsoventral sections of resin-embedded adult thoraces can be analyzed to measure the area of Indirect Flight Muscles (IFM) and evaluate morphological defects (Figure 2F) [37,58]. 
CNBP Protein Downregulation Haploinsufficiency of the CNBP gene, consequent to the nuclear sequestration and/or altered processing of expanded pre-mRNAs, has been proposed to play an important role in the pathogenesis of DM2.Mice carrying a heterozygous deletion of the CNBP allele show a phenotype strongly reminiscent of DM2: myotonia, increased fiber type variability, cataracts, and cardiac abnormalities [11,59].Studies on muscle tissues or myoblasts from DM2 patients provided controversial results regarding CNBP haploinsufficiency, possibly related to differences in the experimental design; some studies found normal CNBP RNA and protein levels in muscle tissues [60,61], while recent findings documented reduced levels and altered splicing of CNBP RNA, with corresponding low protein levels in muscle tissues but not in cell cultures [7,11].Another study showed decreased levels of CNBP protein but not RNA in DM2 muscle cell cultures, suggesting that the pathological expansion could affect the processing, the nuclear export, or the translation of the mutated RNA [62]. In line with this, ablation of CNBP from Drosophila muscle tissues has recently been shown to cause severe locomotor defects, which can be fully recovered by reconstitution with Drosophila CNBP or its human counterpart [13].The CNBP-dependent locomotor phenotype in Drosophila is linked to the ability of CNBP to control polyamine content by regulating the translation of ornithine decarboxylase (ODC; [13]).ODC is a key regulator of the metabolism of polyamines (putrescine, spermine, and spermidine), small intracellular polycations that control essential cellular functions, such as cell growth, viability, replication, translation, differentiation, and autophagy [63][64][65][66].Because of their critical role, the intracellular concentration of polyamines is tightly regulated; thus, CNBP loss of function has a strong impact on the processes related to these molecules.Of note, muscle biopsies obtained from DM2 patients showed reduced levels of both CNBP and its translational target ODC compared to healthy individuals, as in the DM2 fly model.Consistently, the content of the ODC metabolite putrescine was also significantly reduced in DM2 patients, indicating that polyamine synthesis might indeed be downregulated in the human disease context [13]. Remarkably, it was observed that polyamine feeding rescues the locomotor defects in the dystrophic fly model, suggesting a potential novel therapeutic avenue to treat DM2 patients.These findings highlight how Drosophila represents an excellent model to study the DM2 pathogenic mechanisms related to CNBP loss of function, and to identify possible new therapeutic strategies. Toxic Gain of Function of RNAs-Bi-Directional Antisense Transcription The expansion of the CCUG repeat in intron 1 of CNBP results in the synthesis of a long pre-mRNA.This toxic mRNA triggers a gain-of-function mechanism that elicits the formation of nuclear foci; the sequestration of splicing factors, such as MBLN, with consequent splicing defects; and the retention of CNBP intron 1 [25,67]. 
In order to investigate the pathogenic mechanism of CCUG repeat expansions in an animal model of DM2, flies expressing pure, uninterrupted CCUG repeat expansions, ranging from 16 to 720 repeats in length, have been generated [68].Transgenic expression of the expanded CCUG repeats with an eye-specific driver GMR-GAL4 leads to abnormal pigmentation and a rough eye surface, indicative of disruption of the ommatidial structure and neurodegeneration.The severity of the phenotype was dependent on the length of the CCTG repeat.Similarly, the specific expression of CCUG-expanded RNA in muscle using the How 24B -GAL4 driver leads to the formation of toxic ribonuclear foci in the cytoplasm of muscle cells.These results indicate that this DM2 fly model recapitulates key features of human DM2, including RNA repeated-induced toxicity, ribonuclear foci formation, and changes in alternative splicing dependent on MBNL [68].Interestingly, the levels of CNBP protein are not mutated in these flies, suggesting that CNBP haploinsufficiency is not related to the sole quadruplet expansion but rather to the genetic mutation occurring in the proper context of the human gene [13]. Moreover, the expression of (CCUG) 106 repeats in the Drosophila eye has been shown to trigger a strong apoptotic response [69].Inhibition of apoptosis through chemical compounds rescued the retinal disruption phenotype, underlying the power of this DM2 Drosophila model as a tool for drug screening.Indeed, in a recent study, 3140 small-molecule drugs from FDA-approved libraries were screened through lethality and locomotion phenotypes using the DM2 Drosophila model expressing (CCTG) 720 repeats in the muscle.Ten effective drugs that improved both the survival and locomotor activity of DM2 flies have been identified, uncovering potential drug targets that may mitigate the progression of the disease [36]. A common feature of both DM1 and DM2 is the ability of CUG-and CCUG-expanded RNA, respectively, to form secondary structures and sequester RNA-binding proteins forming nuclear foci [70].CCUG repeats tend to bind MNBL with higher affinity than CUG and to form bigger foci.However, DM2 patients generally experience a milder phenotype than DM1 patients. To explore this paradox and address divergent aspects of pathology in DM1 and DM2, novel Drosophila models expressing the respective CUG-and CCUG-expanded RNA in skeletal and cardiac muscle (using the muscle-specific driver myosin heavy chain Mhc-Gal4 or the cardiac-specific driver GMH5-Gal4), have been generated and evaluated [58].The expression of either CUG or CCUG-expanded repeats has been shown to sequester MBLN in ribonuclear foci in both muscle and cardiac tissue and that, as a consequence, MBNL-dependent splicing was altered.Interestingly, the expression of autophagy-related genes (Atg4, Atg7, Atg8, Atg9, Atg14) has been found to increase in the muscular and cardiac tissues of both DM1 and DM2 model flies [58].Physiologically, expression of CUG-or CCUG-expanded RNA in the muscles caused muscle degenera-tion with consequent reduced muscle area, diminished survival, and decreased locomotor performance [58].The two DM1 and DM2 fly models represent excellent animal models to investigate the clinical differences between these two human diseases, to increase knowledge about their pathogenesis, and to improve the development of new treatments. 
The important role of MBNL1 in both DM1 and DM2 pathogenesis is also supported by the evidence that cardiac overexpression of Mbnl, the Drosophila MBNL1 ortholog, is sufficient to rescue the heart dysfunctions and the reduced survival observed in the DM1 and DM2 fly models [37].Interestingly, it has also been found that the CCUG repeated RNA is bound by rbFox1, an RNA-binding protein involved in the regulation of different phases of RNA physiology [71][72][73].Differently from MBNL, rbFox1 preferentially associates with the CCUG repeats and not with CUG repeats and is sequestered in ribonuclear foci.Overexpression of rbFox1 has been shown to rescue both the muscular atrophy and locomotion ability of flies bearing the CCUG repeat expansion, demonstrating the importance and specific role of this protein in the pathogenesis of DM2. RAN Translation-Protein Toxicity The use of Drosophila melanogaster as a model organism has also been instrumental in studying the toxicity of repeat-associated non-AUG (RAN) proteins, which are produced by non-canonical translation of abnormal repeat expansions in various genetic disorders, including myotonic dystrophy type 2 [21,26].Through RAN translation, a protein is synthesized from a repeated nucleotide sequence that does not contain an AUG codon.The repetitive peptides are the result of RAN translation initiating at different sites within the repeat expansion, leading to the generation of different aminoacidic sequences depending on the reading frame [26]. CCTG expansions in DM2 have been shown to be bidirectionally expressed; thus, transcribed CCUG-repeated RNAs can be translated in two different tetrapeptide repeats: LPAC, leucine-proline-alanine-cysteine in the sense direction or QAGR, glutamine-alanineglycine-arginine in the antisense direction [21].These tetrapeptide products are repetitive in nature and can have aberrant biochemical behavior, leading to their accumulation inside the cells.Of note, the accumulation of these toxic RAN products has been implicated in the pathogenesis of DM2 and the associated cellular dysfunction in different tissues [21].Interestingly, LPAC and QAGR peptide-mediated toxicity seem to be independent of RNA gain of function in DM2 pathogenesis [21].In DM2 brain autopsy samples, LPAC proteins have been found in the gray matter, including neurons, astrocytes, and glia, and QAGR proteins have been found in the white matter [6,21]. The Drosophila melanogaster model of DM2-CCTG RAN-translation has not been reported yet.However, Drosophila has been successfully used in a model of amyotrophic lateral sclerosis and frontotemporal dementia to dissect the pathogenic mechanisms of the disease [74][75][76][77].This example supports Drosophila as an effective system for the study of RAN-dependent protein toxicity in neuromuscular degenerative diseases.Similar approaches could be set up to characterize the toxic contribution of RNAs and RAN tetrapeptides to the onset and progression of DM2 pathogenesis. 
Conclusions In conclusion, Drosophila melanogaster has proven to be a valuable animal model for studying myotonic dystrophy type 2 (Table 1).Although DM2 is a human-specific disorder, researchers have successfully utilized fruit flies to gain insights into the underlying mechanisms of the disease.The ability to dissect the different pathogenic mechanisms in DM2 fly models has provided evidence that both loss of function of CNBP and RNA toxic gain of function of the CCUG repeat contribute to pathology.Studies on Drosophila CNBP loss of function showed that the CNBP-dependent locomotor phenotype is linked to the ability of CNBP to control polyamine content by regulating the translation of ODC.Remarkably, polyamine feeding rescues the locomotor defects in this fly model, suggesting a potential novel therapeutic avenue for treating DM2 patients.The CCUG repeat toxicity also plays a crucial role in inducing DM2 disease through the sequestration of MBNL1 and rbFox1 factors and the formation of ribonuclear foci in muscle cells.Interestingly, it has been demonstrated that overexpression of Mbnl or rbFox1 in Drosophila is capable of rescuing both muscular atrophy and the locomotion ability of flies bearing the CCUG repeat expansion.Furthermore, a recent study identified ten effective drugs that improved both the survival and locomotor activity of the DM2 Drosophila model expressing (CCUG) 720 repeats in the muscle, uncovering potential drug targets that may mitigate the progression of the disease. Ultimately, Drosophila models have significantly accelerated the discovery of deregulated genes and pathways in DM2, including regulators of autophagy and apoptosis. The possibility to knock down the CNBP gene or to express the CCTG repeated RNA in specific fly tissues allowed us to selectively recapitulate the distinct DM2-associated molecular alterations and the corresponding phenotypes.Thus, fly DM2 models have been pivotal to discern the individual contribution of the different pathogenetic mechanisms to the onset and progression of the disease. The examples reported above demonstrate how Drosophila has been instrumental in identifying potential therapeutic targets for DM2.The ability to manipulate genes, observe phenotypic effects, and conduct large-scale genetic screenings in Drosophila has provided, and will surely continue to do so, additional valuable insights in the understanding of this complex disease that still lacks a resolutive treatment.However, while Drosophila has improved our comprehension of DM2, it is important to acknowledge that it cannot fully replicate the complexity of the human disease.Thus, further investigations using complementary model systems and clinical studies are essential for a full understanding of DM2 and the development of effective therapies. Figure 1 . 
Figure 1.Possible molecular consequences of CCTG nucleotide repeat expansion in the CNBP ge Loss of function: expansion of the repeats form dsDNA secondary structures that can el transcriptional gene silencing, resulting in partial or complete loss of the native protein encoded the CNBP gene.Transcribed repeated RNAs can also fold into complex structures that sequestered into the nucleus, resulting in haploinsufficiency.RNA toxicity: transcribed CCU repeated RNAs aberrantly interact with and sequester RNA-binding proteins, forming toxic RN foci.Protein toxicity: non-coding RNA repeats, lacking the canonical AUG translation initiati codon, undergo non-canonical repeat-associated non-AUG (RAN) translation, thus produci LPAC (sense) and QAGR (antisense) toxic tetrapeptides.Created with BioRender.com. Figure 1 . Figure 1.Possible molecular consequences of CCTG nucleotide repeat expansion in the CNBP gene.Loss of function: expansion of the repeats form dsDNA secondary structures that can elicit transcriptional gene silencing, resulting in partial or complete loss of the native protein encoded by the CNBP gene.Transcribed repeated RNAs can also fold into complex structures that are sequestered into the nucleus, resulting in haploinsufficiency.RNA toxicity: transcribed CCUG repeated RNAs aberrantly interact with and sequester RNA-binding proteins, forming toxic RNA foci.Protein toxicity: non-coding RNA repeats, lacking the canonical AUG translation initiation codon, undergo noncanonical repeat-associated non-AUG (RAN) translation, thus producing LPAC (sense) and QAGR (antisense) toxic tetrapeptides.Created with BioRender.com. Figure 2 . Figure 2. Examples of powerful assays used to assess neuromuscular degeneration and dysfunction in DM2 Drosophila models.Several behavioral tasks, such as (A) larval crawling and (B) adult climbing, allow monitoring the locomotor activity during Drosophila's life.(C) Drosophila lifespan assays are useful to follow the time course of neuromuscular degeneration and might be used as a readout for genetic screens, example of statistical significance **** p < 0,001 determined by long-rank test.(D) Disease genes can be expressed in the eye using specific GAL4 drivers to analyze neurodegeneration.The external eye offers a rapid readout, as the degenerative eye can show disruption of the stereotyped organization of ommatidia, leading to a rough eye phenotype.This easily observable phenotype enables genetic screens aimed at identifying modifiers (enhancers or suppressors) of eye alteration.(E) Analysis of larval muscles (adapted from[36]) or (F) adult flight muscles (adapted from[37]) are other important tools used for assessing defects in the development and function of the muscles associated with muscular dystrophies.When not specified, they are our original images. Figure 2 . Figure 2. 
Examples of powerful assays used to assess neuromuscular degeneration and dysfunction in DM2 Drosophila models.Several behavioral tasks, such as (A) larval crawling and (B) adult climbing, allow monitoring the locomotor activity during Drosophila's life.(C) Drosophila lifespan assays are useful to follow the time course of neuromuscular degeneration and might be used as a readout for genetic screens, example of statistical significance **** p < 0.001 determined by long-rank test.(D)Disease genes can be expressed in the eye using specific GAL4 drivers to analyze neurodegeneration.The external eye offers a rapid readout, as the degenerative eye can show disruption of the stereotyped organization of ommatidia, leading to a rough eye phenotype.This easily observable phenotype enables genetic screens aimed at identifying modifiers (enhancers or suppressors) of eye alteration.(E) Analysis of larval muscles (adapted from[36]) or (F) adult flight muscles (adapted from[37]) are other important tools used for assessing defects in the development and function of the muscles associated with muscular dystrophies.When not specified, they are our original images. Table 1 . Reference table of Drosophila melanogaster DM2 models.
2023-09-20T15:13:18.381Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "44dea8fa10ada5728fbd2534b2b6f8cfcd614abf", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/18/14182/pdf?version=1694857879", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "98ceaadff1d32d8251d6576be0a6b8971ae7a1d5", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
259311595
pes2o/s2orc
v3-fos-license
Selection of a stress-based soil compaction test to determine potential impact of machine wheel loads

The use of heavy machinery is increasing in agricultural industries, and in particular in cotton farming systems in Australia, which induces an increased risk of soil compaction and yield reduction. Hence, there is a need for a technical solution that uses available tools to measure projected soil compaction due to farm machinery traffic. The aim of this work was to compare the effects of static and dynamic loads on soil compaction. In this study, three Vertisols (soils commonly used for cotton production in Australia) were selected to examine soil compaction under a range of static and dynamic loads, using uniaxial compression equipment and a modified Proctor test, respectively. In general, soils behaved similarly under static and dynamic loads, with no significant difference between bulk density values for all moisture contents and a high index of agreement (d = 0.96, RMSE = 0.056). The results further indicate close agreement between soil compaction produced under static and dynamic loads. The uniaxial compression test (static loads) produced greater compaction compared with the modified Proctor test (dynamic loads), in particular at moisture contents less than the plastic limit condition. The variation in soil compaction between static and dynamic loads was often evident for loads ≥600 kPa, with the greatest soil compaction induced under loads ≥1200 kPa. The findings of this study confirm the suitability of a modified Proctor method as an alternative tool to assess soil compaction under a range of moisture contents and machinery loads for Vertisols.

| INTRODUCTION

Soil compaction is generally defined as a reduction in soil total porosity and an increase in bulk density (ρd) due to mechanical loads applied to surface soil during farm traffic (Chamen et al., 2015). Soil compaction poses a major constraint on soil health, limiting root penetration, crop development, water availability, and gas exchange, leading to reduced crop yields (Antille et al., 2016; Ferreira et al., 2022; Raper, 2005). It is a significant constraint for plant growth in agriculture (Roberton et al., 2021; Shaxson & Barber, 2003), with an estimated annual agricultural production loss of A$1330 million due to subsoil constraints in Australia (Orton et al., 2018; Rengasamy, 2002). In the last decades, concern about soil compaction has grown globally due to the development of modern agricultural machinery and its increased mass. This trend is particularly notable in the Australian cotton industry, where the introduction of advanced machinery has exacerbated soil compaction issues. Increased machinery mass or axle weight simultaneously increases the risk of soil compaction, in particular subsoil compaction, due to the high load at the wheel (Antille et al., 2016; Chamen et al., 2003; Keller et al., 2007). The mean axle weight of modern machinery varies depending on the type of agricultural machinery (Keller et al., 2019). Soil bearing capacity depends on soil structure, texture and moisture, with coarse-textured soils typically having higher capacity at moderate moisture levels, while moist clay soils may show poor capacity. Site-specific testing is crucial for accurate assessment in construction or foundation design (Alakukku et al., 2003).
The increased axle weight on the soil interface for modern and heavy machineries tends to exceed the bearing capacity of most soils, and farm traffic becomes a major threat to land degradation due to compaction (Batey, 2009;Schjønning et al., 2009;Techen et al., 2020).This indicates the irreversible damage caused by heavy machinery and small machinery which confirms the concerns about soil compaction (Alakukku, 1999;Chamen et al., 2003;Keller & Arvidsson, 2004).Håkansson (1990) suggests the maximum load at the soil interface should be less than 200 kPa to prevent soil compaction risks.However, the degree of compaction induced by heavy machinery may vary from one soil to another depending on the soil strength, the specifications of the traction device (i.e., tyre vs track, tyre inflation pressure and wheel load, and tyre size and type), the travel speed (loading time) and the frequency of wheeling (i.e., the number of passes) (Antille et al., 2013;Augustin et al., 2020;Bennett et al., 2015;Suzuki et al., 2013).Crop residue mulching and standing stalks can also help relieve soil compaction caused by heavy machinery (Blanco & Lal, 2023).The potential for soil compaction due to heavy machinery is reasonably soil-specific and depends on the land condition, root depth, moisture content and organic matter (Bennett et al., 2019;Correa et al., 2019;Suzuki et al., 2013). The accurate determination of potential compaction for using any machinery in a particular soil is essential and often requires specific tools and equipment in the soil engineering laboratory.Uniaxial compression equipment remains a common tool, but its accessibility is limited.Various soil compaction tests, including cone penetration tests, dynamic cone penetrometers, and static cone penetrometers, offer a more accessible means to assess soil compaction effects under static loads (Beckett et al., 2018;Chukka & Chakravarthi, 2012).These soil compaction tests play distinct roles in assessing soil conditions.Uniaxial compression tests, while providing standardised measurements, often require specialised equipment (Keller et al., 2011).Cone penetration tests, whether dynamic or static, offer expedited assessments under static loads, yet may not fully replicate field conditions (Lunne et al., 2002).The Proctor test, recognised as a widely accepted standard, demands specific equipment and controlled conditions (Kodikara et al., 2018).The Proctor test stands out for its accessibility and simplicity, enabling expedient assessments of soil compaction under both dynamic loading conditions.Each test carries its own set of advantages and limitations, and the selection process hinges on factors such as accessibility, standardisation and equipment requirements (White, 2005).The accessibility of these tools is often challenging, for instance, uniaxial compression equipment as a common tool for soil compaction determination under static loads might not always be available in many soil engineering laboratories in Australia.The Proctor test is also approved as a universal standard test for soil compaction under dynamic loads and is often available in most soil engineering laboratories.The substitution of the uniaxial test by the Proctor test (static load to dynamic load) would potentially assist soil scientists and landholders to test the soil strength against specific heavy machinery loads during sowing or harvesting traffic seasons.This further allows land managers to quickly ascertain the safe selection of traffic and the potential for soil 
compaction to occur using particular machinery. Therefore, this study aims to compare the bulk density induced by dynamic and static loads at different levels of moisture content, to test the hypothesis that the modified Proctor test is proportional to a specific uniaxial load in terms of the resulting compaction magnitude (Equation (1)):

ρDyna = ρStat, (1)

where ρDyna is the bulk density induced by dynamic loads (from the modified Proctor test) and ρStat is the bulk density induced by static loads (from uniaxial compression loads). Should the hypothesis hold, then a modified Proctor test can be utilised in place of a uniaxial compression test for soil compaction determination under projected loads and moisture contents.

Highlights
• Similar soil compaction occurs under static and dynamic loads at various soil moisture contents and applied loads.
• Maximum soil compaction occurs in soils with 15%-20% moisture content at any applied load.
• Static loads generally produce higher compaction compared with dynamic loads for all soils.

| Site description and soil sampling

The study was conducted at three sites situated in South East Queensland, Australia, near Goondiwindi for Sites 1 and 2, and in Yalangur for Site 3, all characterised by a humid subtropical climate according to the Köppen Climate Classification (Figure 1). The region features a predominantly flat terrain comprising plains and gentle undulations, with extensive agricultural land surrounding the towns. Altitude measurements for Sites 1 and 2 were recorded at 208 metres, while Site 3 was at 435 metres, with slope angles ranging from 0.09% to 0.55% (Table 1). Soil samples were collected from the surface, through the common plough depth (0-30 cm), from each site using a stratified random sampling approach, with multiple soil cores obtained using soil augers. Soils are classified as Vertisols (IUSS Working Group WRB, 2014), which are usually used for cotton production in Queensland, Australia (Table 1). The soils were air-dried and crushed with sufficient energy to break down the aggregates to pass through a 2.3 mm sieve; care was taken not to apply energy greater than required in order to maintain the physical bonds of the aggregates <2.3 mm. Published methodologies were used to determine soil particle size distribution (Gee & Bauder, 1986) using the hydrometer method, with Atterberg limits, liquid limits and plastic limits determined following standard procedures (AS 1289.3.1.1-2009 and AS 1289 3.1.1, 3.1.2). The liquid limit was determined using the Casagrande apparatus, measuring the moisture content at which soil exhibits specific flow behaviour, while the plastic limit is identified by the moisture content at which soil can be moulded into a 3 mm thread without crumbling. The soil characteristics are presented in Table 1.

FIGURE 1 Map of study sites in southeast Queensland, Australia.
TABLE 1 Particle size distribution, Atterberg limit moisture contents for the soils used, altitude (above sea level) and slope angle of the land areas from Queensland (columns: Soils, Location, Clay%, Silt%, Sand%).

| Experimental design

| Uniaxial compression

Soil bulk density was determined in a drained uniaxial compression test using the modified method of Håkansson (1990) (Figure 1). The test was modified procedurally to suit a single pass of heavy machinery and to provide a comparison to Suzuki et al.
(2013). The soil was moistened to targeted moisture contents via a fine spray bottle and left to equilibrate overnight (~16 h) in a sealed container. The applied stresses were monitored using a load cell (Anyload, 100 kN, USA), and Vishay System 5000 StrainSmart software was used to record the measured data. The soil was placed in the uniaxial cell (90 mm in diameter and 165 mm in height) (Figure 1) and dropped three times from a height of 50 mm to attain uniform packing. The soil was then loaded from small to large loads (from 200 to 3200 kPa) for 5 min and allowed to rebound for 1 min for each sequential load before the sample height and volume were determined. The deformation of the sample was then measured at five points on the surface of the sample and the average was taken. Finally, the soil was removed from the uniaxial cell, weighed, and dried at 105 °C for at least 48 h to calculate the exact moisture content. The dry reference bulk density was then calculated for the sequential static loads.

| Modified Proctor test

The modified loading of the Proctor test was achieved by altering the number of blows per layer. The testing procedure was conducted in accordance with the Australian standard for Proctor tests (AS 1289.5.1.1). In the Australian standard Proctor test, a soil specimen undergoes compaction at various moisture contents using a standard effort, typically 25 blows per layer, to determine the maximum dry bulk density and optimum moisture content, providing crucial insights into the soil compaction characteristics for engineering purposes. The number of blows was changed to match the static loads as detailed in Table 2. The test was repeated three times for each static pressure load (static pressure equivalence, kPa). The static pressure equivalence for the Proctor test is described by Raghavan and Ohu (1985) (Equation (2)), where SPE is the static pressure equivalence in kPa and ProcB represents the Proctor test blow number.

The same amount of moist soil was placed in the mould as in the uniaxial test (Figure 1). Both uniaxial and Proctor tests were conducted on the same day for each soil moisture content to avoid inconsistency arising from moisture content. The Proctor hammer was then used to compact the soil to produce various dynamic loads from 200 to 3200 kPa (Table 2). The blows were spaced evenly over the surface of the soil. Manual adjustment was made around the edge of the mould to ensure an even soil surface during the application of blows. The height of the soil was calculated, and soils were removed from the moulds, weighed and dried at 105 °C for at least 48 h. The dry reference bulk density was then calculated for the sequential dynamic loads using the Proctor test.

| Relative bulk density

The comparison between the reference bulk densities for static and dynamic loads can be denoted by the ratio of compaction, or the relative compaction, defined in this study as follows (Equation (3)):

Relative compaction (%) = (ρDyna / ρStat) × 100, (3)

where relative compaction is the percentage ratio of the soil bulk densities produced by dynamic and static loads, ρDyna is the bulk density produced by the dynamic load (Proctor test), and ρStat is the bulk density produced by the static load (uniaxial compression test).
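A minimal sketch of the bulk-density bookkeeping described in this section follows. It is not the authors' code; the sample masses, dimensions and moisture values are hypothetical. The dry reference bulk density follows from the oven-dry mass and the cylinder volume after deformation, and Equation (3) is then a simple ratio.

```python
import math

def dry_bulk_density(wet_mass_g: float, grav_moisture: float,
                     diameter_mm: float, height_mm: float) -> float:
    """Dry reference bulk density (g/cm3) of a cylindrical sample.
    grav_moisture is the gravimetric moisture content as a fraction (e.g. 0.15)."""
    dry_mass = wet_mass_g / (1.0 + grav_moisture)      # oven-dry mass
    radius_cm = diameter_mm / 2.0 / 10.0                # mm -> cm
    volume_cm3 = math.pi * radius_cm ** 2 * (height_mm / 10.0)
    return dry_mass / volume_cm3

def relative_compaction(rho_dyna: float, rho_stat: float) -> float:
    """Equation (3): ratio of Proctor (dynamic) to uniaxial (static) bulk density, in %."""
    return 100.0 * rho_dyna / rho_stat

if __name__ == "__main__":
    rho_stat = dry_bulk_density(wet_mass_g=1650.0, grav_moisture=0.15,
                                diameter_mm=90.0, height_mm=140.0)
    rho_dyna = dry_bulk_density(wet_mass_g=1620.0, grav_moisture=0.15,
                                diameter_mm=90.0, height_mm=142.0)
    print(round(rho_stat, 2), round(rho_dyna, 2),
          round(relative_compaction(rho_dyna, rho_stat), 1))
```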
| Statistical analysis

The reference bulk densities from the static and dynamic load tests were analysed using Pearson's product-moment correlation coefficient and analysis of variance. The root mean square error (RMSE), the index of agreement (d) (Willmott et al., 2012) and the coefficient of determination (R²), with predicted values fitted to the y = x line, were used to assess the level of agreement between the reference bulk densities of the static and dynamic load results. Three replicates were conducted for each laboratory measurement, and a probability level of p-value < 0.05 was accepted for assessing the model's performance using R² and the d index. Relationships between static and dynamic loads were considered very good when R² was greater than 0.7 and d was greater than 0.8.

| Static versus dynamic stresses

The relationship between soil compaction obtained from static and dynamic stresses is presented in Figure 2 and Table 3. In general, there was good agreement between soil compaction for static and dynamic loads, despite some discrepancies at higher bulk densities >1.6 g cm⁻³. This agreement was greatest for Soil 1 and Soil 3, with high coefficients of determination (R² = 0.82 and 0.88, respectively). There was also a very high index of agreement (d = 0.96) between reference bulk density values for the compression and Proctor tests (Figure 3). No significant difference was observed (p value = 0.13) between ρStat and ρDyna values, with a high coefficient of determination (R² = 0.83, RMSE = 0.056) for all soils (Figure 2). The dataset was further split into two datasets for bulk density greater than and less than 1.6 g cm⁻³. The results indicate that ρd ≤ 1.6 g cm⁻³ produced under dynamic and static loads is in closer agreement (R² = 0.8, d = 0.94 and RMSE = 0.03) compared with ρd larger than 1.6 g cm⁻³ (R² = 0.34, d = 0.83 and RMSE = 0.088) (Figure 3 and Table 3). Figure 4 demonstrates that this agreement is greater with increasing moisture content ≥19%. However, the compression test (static loads) generated greater compaction compared with the Proctor test (dynamic loads), in particular at moisture contents smaller than the plastic limit (Figure 5). Thus, static and dynamic loads are effectively equivalent in producing soil compaction at various soil moisture contents and loads.

Soil compaction values for all loads and moisture contents were further analysed to predict the bulk density under static loads from the bulk density produced by dynamic loads. Equations (4)-(6) can be used for bulk densities ≤1.6 g cm⁻³, >1.6 g cm⁻³ and all bulk density values, respectively, where ρStat is the bulk density produced by static loads (uniaxial hydraulic press), ρDyna is the bulk density produced by dynamic loads (modified Proctor method), and MC is the gravimetric moisture content in percentage.

TABLE 2 Applied loads to determine soil compaction under static and dynamic loads at different moisture contents.
FIGURE 2 Schematic diagram of the uniaxial compression tool used for determining bulk density under static loads.
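The agreement statistics used above can be reproduced from paired static/dynamic bulk densities. The sketch below is illustrative only: the paired values are hypothetical, and Willmott's index of agreement is shown in its widely used classical form, whereas the paper cites the Willmott et al. (2012) refinement.

```python
import numpy as np

def rmse(obs: np.ndarray, pred: np.ndarray) -> float:
    """Root mean square error between paired observations and predictions."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def r2_one_to_one(obs: np.ndarray, pred: np.ndarray) -> float:
    """Coefficient of determination of the predictions relative to the 1:1 line."""
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def willmott_d(obs: np.ndarray, pred: np.ndarray) -> float:
    """Classical Willmott index of agreement (0 = no agreement, 1 = perfect)."""
    o_bar = np.mean(obs)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - o_bar) + np.abs(obs - o_bar)) ** 2)
    return float(1.0 - num / den)

if __name__ == "__main__":
    rho_stat = np.array([1.32, 1.41, 1.50, 1.58, 1.66])   # uniaxial (static), g/cm3
    rho_dyna = np.array([1.30, 1.38, 1.49, 1.55, 1.60])   # Proctor (dynamic), g/cm3
    print(rmse(rho_stat, rho_dyna),
          r2_one_to_one(rho_stat, rho_dyna),
          willmott_d(rho_stat, rho_dyna))
```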
| Effect of soil moisture on stress agreement between methods

The moisture content of soil samples used in the uniaxial compression and Proctor tests to obtain the bulk density curves is shown in Figure 5. Given that the compressive behaviour of the soil is highly dependent on soil moisture, soil samples with different moisture contents reached greater bulk densities with increasing static and dynamic stress, resulting in different degrees of compaction. In general, the bulk density obtained for both methods indicates that the effect of stresses is largely dependent on the soil moisture content. Stress agreement between static and dynamic loads was dependent on the moisture content level; the agreement was relatively poor for smaller moisture contents (Figure 4). High moisture contents (≥18%) generally resulted in better agreement between static and dynamic loads.

The results showed that the greatest compaction occurred for 15% and 20% MC values, where 15% MC resulted in optimum compaction for all stresses and soils, in particular for loads greater than 1000 kPa (Figure 5). The ~15% MC (optimum MC) resulted in a significant difference in bulk density values compared with other moisture contents (p value < 0.001), whereas there was no significant difference among the other moisture contents. High moisture contents generally resulted in better agreement between static and dynamic loads (Figure 4). In wet soils (above the plastic limit moisture content), water acts as a lubricant between soil particles, resulting in relatively consistent compaction under stresses (Hamzaban et al., 2019).

| Stress selection to obtain the bulk density

The bulk density values obtained from the compression curve and Proctor tests for different levels of stress and moisture content are presented in Figure 5. There was no significant difference between bulk density values obtained from the compression curve and from applying stresses with the Proctor test (p value = 0.13, RMSE = 0.056) for all soils. The static loads generally produced greater compaction compared with dynamic loads for all soils. The statistical analysis of the obtained bulk density values indicates that there was a significant difference between stresses ≤600 kPa and ≥800 kPa (p-value < 0.001). There was also no significant difference for large stresses ≥1200 kPa, and further compaction occurred with increasing loads for both static and dynamic loads. All soils behaved similarly under static and dynamic loads, with no significant difference between reference bulk density values for all moisture contents.

TABLE 3 Bulk density (g cm⁻³) according to the applied load (kPa) in the static (compression test) and dynamic (Proctor test) tests.
FIGURE 3 Relationship between soil bulk density obtained from the compression test (static load) and Proctor test (dynamic load) under different stresses and moisture contents. The diagonal solid line represents the 1:1 line, and the dotted line is the regression fit for the observed data. Root mean square error (RMSE) is calculated relative to the 1:1 line, R² is the coefficient of determination, d is the index of agreement (Willmott et al., 2012) and the p-value is the probability that the null hypothesis is true, obtained from analysis of variance. The presented coloured lines represent average values of bulk densities obtained from both static and dynamic loads (200-3200 kPa).
The results indicated that static bulk density can be predicted from the dynamic bulk density produced by the modified Proctor test (Equations (4)-(6), Tables 4 and 5). For ρd ≤ 1.6 g cm⁻³, ρStat can be predicted with greater accuracy than for ρd > 1.6 g cm⁻³ (Figure 6).

| DISCUSSION

Soil compaction is a critical factor in various engineering and environmental applications, influencing soil properties and performance. Understanding the mechanisms and effects of compaction induced by different load types is essential for effective soil management and design. This discussion section aims to compare the effects of dynamic and static loads on soil compaction, investigating their respective influences on soil structure, pore characteristics, and mechanical behaviour. However, the actual pore water pressure, the relevant matric potential and the shearing behaviour of the soil, which define its actual strength and resilience, are not discussed. By examining the similarities and differences between these load types, valuable insights can be gained to optimise compaction techniques and mitigate potential adverse effects on soil quality and functionality.

| Mechanisms of soil compaction for static versus dynamic loads

Excessive soil compaction that results from the impact of the wheels of agricultural machines and other traffic is one of the major concerns of modern agriculture. Soil compaction is generally dependent on the soil strength and the loads applied by machinery traffic in agricultural lands. Soil strength is impacted by moisture content, soil texture, soil structure and organic matter content (Alakukku, 1999; Bennett et al., 2019; Chamen et al., 2003; Suzuki et al., 2013). The frequent passage of machinery (dynamic loads) over soil can increase bulk density and compaction risk in both topsoil and subsoil and produce less suitable physical conditions for water storage, aeration, microbial activity and seedling emergence (Assouline et al., 1997; Augustin et al., 2020; Bennett et al., 2019; Botta et al., 2006; Chamen et al., 2015; Liu et al., 2022). However, during wheeling, shearing and compaction occur simultaneously. Unlike compaction, which causes volume changes, shearing results in minor deformation while altering the soil's shape. Excessive shearing without proper drainage can lead to soil homogenisation and reduced strength (Horn & Peth, 2011; Huang et al., 2022).

FIGURE 5 Relative compaction obtained from the ratio of the reference bulk density of the Proctor test to the reference bulk density of the compression test at different levels of moisture content. STDEV is standard deviation and RMSE is root mean square error.

The results of this study confirmed that soil compaction occurs almost equally under both static and dynamic loads (p value = 0.13, RMSE = 0.056), with slightly greater compaction for static loads (single pass of heavy machinery), in particular for moisture contents less than the plastic limit. This indicates that multiple passes of light machinery and a single pass of heavy modern machinery can have a comparable, non-significantly different influence on soil compaction depending on the soil moisture content and organic matter content. Compared with static loading, cyclic loading resulted in further deformation and dilative shear behaviour in soils (Huang et al., 2022). However, Silva et al.
(2008) reported that major soil compaction is caused by the first passage of machinery, or early movement of machinery, with subsoil compaction increasing with an increasing number of passes. Previous studies also stated that there is no significant difference between ρd values induced under static and dynamic loads, with greater accuracy under static loads (Al-Radi et al., 2018; Hafez et al., 2010; Lebert et al., 1989). This study further confirms that the agreement between soil compaction produced from static and dynamic loads differs slightly depending on the degree of compaction, with greater agreement observed for ρd > 1.6 g cm⁻³. However, there was no significant difference between the two approaches for ρd < 1.6 g cm⁻³ (p value = 0.067, RMSE = 0.088). Therefore, static and dynamic loads are effectively equivalent in producing soil compaction, and soil moisture and applied loads are the major factors in its severity.

Given that the compressive behaviour of the soil is largely dependent on soil moisture, soil samples with different moisture contents result in different degrees of compaction. Soil moisture contents smaller than the plastic limit generally resulted in greater compaction, where ~14% gravimetric moisture content produced the optimum bulk density, significantly different from that at other moisture contents (p value < 0.001), for both static and dynamic loads. It can be noted that soil strength increases with increasing ρd values while it decreases with decreasing soil moisture content. Therefore, one should be prudent when using machinery on farms because moisture content varies between the seasons due to different climates. It was further found that soil compaction was much more sensitive to varying moisture content than to changing applied loads. Similar results were observed in previous studies, which advise limiting traffic to avoid compaction in wet seasons (Jamali et al., 2021; Raghavan et al., 1979; Raper, 2005).

Soil compaction occurs from static and dynamic loads induced by farm traffic, animal trampling in grazing lands and military exercises (Nawaz et al., 2013; Silva et al., 2008; Webb, 2002). The moisture condition of the soil needs to be considered along with the applied loads (i.e., axle loads) of machinery or any other activities on agricultural lands. Therefore, precautions that consider soil moisture, axle loads and the degree of compaction are necessary to avoid soil compaction and to sustain agricultural yield.

Table note: n%, number of observations as a percentage; Min and Max, the minimum and maximum values for the datasets, respectively; AED, average Euclidean distance from the line y = x; 2σED, two standard deviations of the Euclidean distance; RMSE, root mean square error of the datasets; R² is the coefficient of determination for x = y; d is the index of agreement; and p-value is the probability value of significant difference at the 95% confidence interval (α = 0.05).
TABLE 5 Statistical characteristics pertaining to Equations (4), (5) and (6) from validation data (columns: Statistic, Unit, Equation (4), Equation (5), Equation (6)). Note: R² is the explained variance of the response by the predictors; R²ADJ is the adjusted R² to compare the explanatory power of regression models; R²PRED is the predicted R²; Cp is Mallow's measure of precision; DWS is the Durbin-Watson statistic to detect the presence of autocorrelation; and PRESS is the predicted residual sum of squares. The p-value was <0.0001 for both regression analyses.
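Because the coefficients of Equations (4)-(6) are reported in the original tables rather than reproduced here, the sketch below only illustrates how a prediction of the general form ρStat = a + b·ρDyna + c·MC could be refitted from paired Proctor/uniaxial measurements; all numbers are hypothetical.

```python
import numpy as np

# Hypothetical paired measurements at matched loads and moisture contents.
rho_dyna = np.array([1.28, 1.35, 1.44, 1.52, 1.58, 1.63])  # Proctor bulk density (g/cm3)
mc       = np.array([10.0, 12.0, 15.0, 18.0, 20.0, 23.0])  # gravimetric moisture (%)
rho_stat = np.array([1.31, 1.39, 1.49, 1.55, 1.60, 1.64])  # uniaxial bulk density (g/cm3)

# Ordinary least-squares fit of rho_stat = a + b*rho_dyna + c*mc.
X = np.column_stack([np.ones_like(rho_dyna), rho_dyna, mc])
coef, *_ = np.linalg.lstsq(X, rho_stat, rcond=None)
a, b, c = coef

# Goodness of fit of the refitted relationship.
pred = X @ coef
r2 = 1.0 - np.sum((rho_stat - pred) ** 2) / np.sum((rho_stat - rho_stat.mean()) ** 2)
print(f"rho_Stat ~ {a:.3f} + {b:.3f}*rho_Dyna + {c:.4f}*MC  (R2 = {r2:.2f})")
```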
| Prediction of soil compaction under dynamic loads

This study sought to test the hypothesis that the soil compaction produced by static loads is equivalent to the soil compaction caused by dynamic loads, using bulk density as the criterion for soil compaction. In our case, soil compaction was tested under a uniaxial hydraulic press as the source of static loads and under dynamic loads simulated by a modified Proctor test approach for various moisture content levels. Results allowing the acceptance of the hypothesis were obtained (Figures 2 and 3, and Tables 3 and 4), with soil compaction in good agreement for both static and dynamic loads (R² = 0.83, d = 0.96 and RMSE = 0.056). The bulk density slightly diverged when ρd > 1.6 g cm⁻³, suggesting that compacted soil may behave differently under static and dynamic loads. This indicates that the first or early passages of machinery can cause soil compaction regardless of the type of applied load (Silva et al., 2008). However, the strength required to form further compacted soil can differ slightly for static and dynamic loads. The findings of this study provide confidence for the substitution of the hydraulic press method with the modified Proctor test, depending on the availability of these tools in soil engineering laboratories.

A range of standard compaction tests is available for determining soil compaction and its relationship with soil moisture and loads. The choice of test mainly depends on the availability of tools and the soil type. The Proctor test is one of the earliest tests, developed by Ralph Proctor in California in 1933 (Wiltshire, 2004). The Proctor test is considered a conventional method and is often available in most soil and geotechnical laboratories, while the uniaxial hydraulic press might not always be accessible. Given the results presented in this study for accepting the hypothesis, the use of the modified Proctor test can be considered an alternative method for the determination of soil compaction under different loads and moisture contents. Our data further support the prediction of bulk density for static loads using the modified Proctor method (Equation (4), R² = 0.92 and RMSE = 0.05), with greater accuracy for ρd < 1.6 g cm⁻³ (Equation (6), R² = 0.82 and RMSE = 0.028). This implies that one could undertake soil compaction determination using the modified Proctor method and obtain equivalent soil compaction for static loads.

| Management implications

The comprehensive investigation into the interaction between static and dynamic stresses on soil compaction, as outlined in this research, holds significant implications for soil and agricultural management strategies. These findings have profound consequences for the sustainability of agriculture and the effectiveness of engineering practices in this field.
The finding of a high level of concordance between soil compaction under static and dynamic loads highlights the importance of gaining a nuanced understanding of compaction dynamics (Al-Radi et al., 2018). This finding underlines the need for a holistic approach to soil management, where the choice between static and dynamic loading methodologies is not dictated solely by operational limitations but is also informed by soil bearing capacity and soil behaviour under varying loading regimes (Hafez et al., 2010). Moreover, the delineation of the influence of soil moisture content on stress agreement shows the complex relationship between environmental conditions and soil compaction dynamics (Bennett et al., 2019). The agreement between static and dynamic loads under higher moisture levels (>18%) underlines the fundamental role of moisture management in shaping soil compaction outcomes, offering a compelling rationale for the integration of sophisticated irrigation and drainage schemes in soil management practices (Augustin et al., 2020). The identification of optimal moisture content levels, such as around 15% gravimetric moisture content, transcends simple empirical observation to constitute a strategic imperative for sustainable soil and agriculture management (Jamali et al., 2021). This finding not only underlines the criticality of precision moisture control but also points to the need for adaptive management strategies that account for temporal and spatial variability in moisture levels (Raghavan et al., 1979).

Furthermore, the development of prediction models for estimating bulk density under static loads using dynamic loading tests represents a paradigm shift in soil engineering methodologies (Silva et al., 2008). By leveraging predictive analytics, land managers can transcend the limitations imposed by equipment availability, thereby ushering in a new era of accessibility and applicability in compaction-testing methodologies (Taffese & Abegaz, 2022; Webb, 2002).

The implications of this study prompt a reassessment of traditional paradigms in soil and agriculture management, encouraging a shift towards holistic, data-driven approaches that integrate dynamic loading methodologies, moisture management strategies, and predictive analytics. By embracing these insights, land managers can navigate the complex terrain of soil compaction with confidence, fostering a cooperative relationship between agricultural productivity, engineering efficacy, and environmental sustainability.

| CONCLUSION

This study was conducted to test the hypothesis that soil compaction generated by static loads is equivalent to soil compaction under dynamic loads. By examining three clay soils, we evaluated bulk density using both uniaxial hydraulic press compression and the modified Proctor method across varying moisture contents and loads. Our findings robustly confirm a high degree of concordance between bulk density values for both static and dynamic loading conditions, despite some minor discrepancies observed at elevated bulk density levels. This supports the initial hypothesis. Moreover, we developed predictive models to estimate soil compaction for static loads based on data derived from the modified Proctor method.
The results emphasise the viability of the modified Proctor test as a reliable alternative for soil compaction assessment in agricultural settings. This methodology not only enhances accessibility but also ensures the accuracy of soil compaction determinations. Overall, our study provides valuable insights for land managers and researchers, emphasising the importance of adopting an accessible approach to soil compaction assessment in agricultural contexts. These insights support informed decision making and effective soil management practices in agricultural lands.

FIGURE 4 Bulk density produced by dynamic loads (Proctor test) and bulk density produced by static loads (uniaxial test), plotted against the line y = x (red line), with the line y = −x (black line) intercepting the data at the threshold of increasing variability (y = 1.6, x = 1.6). Statistics are presented in Table

FIGURE 6 Bulk density obtained by the compression curve (static loads) and Proctor test (dynamic loads) under 200, 400, 600, 800, 1200, 1600, 2400 and 3200 kPa for (a) Soil 1, (b) Soil 2 and (c) Soil 3 at a range of moisture contents. Upper-case pronumerals represent Tukey's honest significant difference; differing pronumerals indicate significant changes in bulk density due to the change in applied loads.

TABLE 4 Descriptive statistics for the full dataset from Figure 3 of the soils used, where the bulk density (ρd) is used to split the dataset.
2023-07-03T09:16:50.603Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "d16a4829a18ca6eeba871c41da2440187ba51e65", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ejss.13501", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "fc39e1595ba0cea5792b313d3f7df91bc4a924c2", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science", "Engineering" ], "extfieldsofstudy": [] }
16885658
pes2o/s2orc
v3-fos-license
Electronic Dietary Intake Assessment (e-DIA): Comparison of a Mobile Phone Digital Entry App for Dietary Data Collection With 24-Hour Dietary Recalls Background: The electronic Dietary Intake Assessment (e-DIA), a digital entry food record mobile phone app, was developed to measure energy and nutrient intake prospectively. This can be used in monitoring population intakes or intervention studies in young adults. Objective: The objective was to assess the relative validity of e-DIA as a dietary assessment tool for energy and nutrient intakes using the 24-hour dietary recall as a reference method. Methods: University students aged 19 to 24 years recorded their food and drink intake on the e-DIA for five days consecutively and completed 24-hour dietary recalls on three random days during this 5-day study period. Mean differences in energy, macro-, and micronutrient intakes were evaluated between the methods using paired t tests or Wilcoxon signed-rank tests, and correlation coefficients were calculated on unadjusted, energy-adjusted, and deattenuated values. Bland-Altman plots and cross-classification into quartiles were used to assess agreement between the two methods. Results: Eighty participants completed the study (38% male). No significant differences were found between the two methods for mean intakes of energy or nutrients. Deattenuated correlation coefficients ranged from 0.55 to 0.79 (mean 0.68). Bland-Altman plots showed wide limits of agreement between the methods but without obvious bias. Cross-classification into same or adjacent quartiles ranged from 75% to 93% (mean 85%). Conclusions: The e-DIA shows potential as a dietary intake assessment tool at a group level with good ranking agreement for energy and all nutrients. (JMIR mHealth uHealth 2015;3(4):e98) doi: 10.2196/mhealth.4613 Introduction The collection of accurate dietary consumption data is important in the field of nutritional epidemiology in order to establish true relationships between nutrition and health status. The food record (weighed or estimated portions) is a traditional method used to record amounts and types of foods and beverages consumed prospectively, thus limiting recall bias [1,2]. However, one of the main limitations of food records is the high burden placed upon respondents to record this detailed dietary information [1,2]. For researchers, food record entries must be manually entered for analysis with food and nutrient software programs which takes significant time. Thus, improvements to methods for prospective dietary recording would be beneficial for research participants and researchers alike. With 81% of Australians regularly using a mobile phone [3], the collection of dietary intake records using a mobile phone app has the potential to be more convenient for recording entries than conventional paper-based food records [4,5]. Mobile phone apps that use image-based food records rather than digital entry of foods are also increasingly available [6][7][8][9]. A recent review by our group concluded that mobile phone use to record dietary intake was preferred by users over conventional methods and offers the potential to reduce research costs through automated coding [6]. A number of commercial mobile phone apps such as MyFitnessPal and Lose It provide a platform for users to digitally record foods and beverages consumed and have these records integrated with food composition databases to calculate nutrients [10]. Only one, Easy Diet Diary, uses an Australian database of foods. 
However, the feedback display of nutrient intakes by these apps might elicit unintended behavior changes. We aimed to purposely design a mobile phone app (the electronic Dietary Intake Assessment, e-DIA) that would allow digital recording of all foods and beverages consumed, either weighed or estimated, but provide no nutrient content feedback. The aim of this study was to compare the energy and nutrient intakes collected with e-DIA against 24-hour dietary recalls and evaluate e-DIA's potential as a dietary assessment tool in research. Study Sample Students enrolled in a study aimed at assessing university students' dietary intakes were invited to participate in this validation study. Recruitment methods for the larger study included email and poster advertisements on the university campus, which included a weblink to an online screening survey. Out of 313 students who completed the survey, 170 were eligible and 113 students were enrolled at an interview during which the study protocol was explained and written informed consent was obtained. From the enrolled students, 66 agreed to participate in the validation study and 57 completed both e-DIA and 24-hour dietary recalls from March to April 2014. To boost sample size, an additional 23 students were recruited in August 2014 by the same methods. This resulted in a final sample of 80 students (Figure 1). Inclusion criteria included being a full-time student aged 19 to 24 years, being enrolled in the second, third, or fourth year of study within the Science or Engineering departments, and owning a mobile phone. Nutrition and health science students were excluded. As an incentive to participate, all students were entered in a drawing to win an Apple iPad Mini after completion of the study. The study was conducted in agreement with the National Statement on Ethical Conduct in Human Research [11], and ethical approval was obtained from the university's Human Research Ethics Committee (2014/136). e-DIA Mobile Phone App Students downloaded the e-DIA app using an Android or iOS platform on their own mobile phone. To record intake, the user selects the meal occasion during which the food or beverage is consumed (breakfast, lunch, dinner, or other) which opens the Edit/Delete screen ( Figure 2). On this screen the user selects the Food/Drink field to search for and choose the food or drink they consumed. A search-as-you-type function which begins to show a string of options once three letters are typed was built into the app, as was a favorites function for entry of foods commonly consumed by the participant. These additional navigation functions were added after usability testing of a previous prototype of e-DIA (results unpublished). The list of foods for this search function was based on the 2007 Australian Food, Supplement, and Nutrient Database (AUSNUT 2007)-the most recent food composition database at the time this research was conducted [12]. To log foods that were not listed or could not be found in the AUSNUT 2007 database, participants were asked to enter these manually into e-DIA. The amounts of foods and beverages consumed and location of consumption were also recorded ( Figure 2). Data were uploaded to the research administrator's website each day at midnight, after which the user could no longer access or view the record. Procedure At an initial clinic appointment on the university campus, anthropometric data were collected by the study investigators. 
Height was measured to the nearest 0.5 cm, weight to the nearest 0.1 kg (without heavy clothing or shoes), and waist circumference to the nearest 0.5 cm, according to the Anthropometry Procedures Manual from the National Health and Nutrition Examination Survey (National Center for Health Statistics, US Department of Health and Human Services) [13]. Participants were instructed to complete five consecutive days of food records including three weekdays and two weekend days using e-DIA. Participants practiced selecting and entering food items and weights, and written instructions were included on how to choose foods from the database, how to enter mixed recipes, and how to estimate portion sizes when eating away from home. Participants were asked to weigh foods using the scales supplied (Salter 1066WHDR); an instruction booklet was provided. If participants were unable to weigh the foods, they were instructed to estimate portion sizes using metric cups and spoons supplied. Starting days were staggered so that all days of the week were represented across the sample. Participants were sent a text message reminder prior to each collection day which encouraged them to maintain their usual diet. As a reference measure, three 24-hour dietary recalls were collected on three random days (including weekend days) during the five-day study period. Appropriate calling times were established at the convenience of the participants. The standard 24-hour dietary recall interview multi-pass script adapted from the Five-Step Multiple-Pass Method by the US Department of Agriculture [14] was used for the 30-minute telephone interviews, and participant responses were recorded on a standardized 24-hour dietary recall form. In addition to the metric cups and spoons, a food model booklet [15] was provided to aid in the estimation of food and beverage portion sizes for the 24-hour dietary recalls. Data Coding and Cleaning All entries were checked the following day by study investigators, and participants were contacted to clarify manually entered food items and obvious inconsistencies such as gross data entry errors and skipped meals. Data collected using the e-DIA mobile web app were stored in a cloud-based database, and records were linked to food items in the AUSNUT 2007. If the nutrient composition of manually entered food items was known, study investigators added the information to the database; if unknown, investigators coded to the closest match. Food intake data from the 24-hour dietary recalls were manually entered by trained study investigators into FoodWorks 7 Premium [16], a nutrient analysis software system using the AUSNUT 2007 database [12]. Energy and nutrient intakes from the 24-hour dietary recalls and e-DIA were examined for outliers and checked against the original 24-hour dietary recall for obvious errors in data entry. Errors made by the participant in the e-DIA were left unaltered, and no outliers were removed to provide a more accurate indication of the relative validity of the e-DIA method. Vitamin and mineral supplements were excluded from analysis. Statistical Analysis Mean or median intakes of energy and nutrients from three days of 24-hour dietary recalls and five days of e-DIA were calculated and differences determined using paired t tests (normally distributed data including energy and macronutrients) or Wilcoxon signed-rank test (skewed data for alcohol and micronutrients). 
Correlations between the two methods were measured using Pearson product-moment correlation coefficients (or Spearman rank correlation coefficients for skewed data) for unadjusted, energy-adjusted and deattenuated data. Energy-adjusted nutrients were obtained by applying the residual method [2]. Deattenuated nutrient intakes corrected for within-person variation in both 24-hour dietary recalls and e-DIA were estimated using the Multiple Source Method [17]. Cross-classification and Bland-Altman plots [18] were used to assess the agreement between the 24-hour dietary recalls and e-DIA for energy and nutrients. Cross-classification examined the proportions of participants classified into the same, same or adjacent, or extreme quartiles of energy-adjusted intakes. Bland-Altman plots were presented to assess bias within the intake range. All data were analysed using SPSS Statistics version 22.0 (IBM Corp) [19] and a P value <.05 was considered statistically significant. Results A sample of 80 students (30 male) completed five days of e-DIA and three days of 24-hour dietary recalls (Figure 1). The main reason given for not participating or dropping out of the study was due to time restraints and heavy workloads. Mean body mass index (BMI) was 22.6 kg/m 2 (SD 3.8) with 63 participants (79%) in the healthy weight range (BMI 18.5-24.9), nine overweight (BMI 25.0-29.9), four obese (BMI>30.0), and three underweight (BMI<18.5). Mean waist circumference was 70 cm (SD 6.6) for females and 81 cm (SD 10.5) for males. One participant did not consent to disclosing her anthropometric data. The majority of participants lived at home with family (70%), with English being the most commonly spoken language at home (75%). Mean and median intakes of energy and nutrients reported by 24-hour dietary recall and e-DIA are shown in Table 1. Differences between energy and nutrient intakes were mostly small, and none were statistically significant. Table 2 shows the correlation coefficients between the 24-hour dietary recalls and e-DIA. All correlation coefficients were statistically significant (P<.001). Correlations for unadjusted intakes were in the range 0.50 to 0.79 (mean correlation of 0.66 for all nutrients), energy-adjusted correlations were in the range 0.40 to 0.78 (mean 0.63), and deattenuated correlations were in the range 0.55 to 0.79 (mean 0.68). The highest correlations were found for protein and saturated fats while the lowest correlation was found for polyunsaturated fats. Deattenuated correlation coefficients were generally higher than unadjusted or energy-adjusted coefficients but differences were small. Quartile cross-classification of nutrients with the 24-hour dietary recalls and e-DIA placed 75% to 93% (mean 85%) of the participants into the same or adjacent quartile, with the highest ranking agreement for fiber and the lowest for iron. Cross-classification into extreme quartiles ranged from 0% to 9% (mean 1%) with monounsaturated fatty acids (MUFA), thiamine, and iron having the greatest proportion of extreme misclassification. Bland-Altman plots illustrating the agreement between the 24-hour dietary recalls and e-DIA for energy and selected nutrient intakes are shown in Figures 3-7. For energy intake, the mean difference between e-DIA and 24-hour dietary recall was minimal (−34 kJ) but the 95% limits of agreement were wide (−4062 kJ to 4130 kJ). No systematic bias was detected with random scatter of data points. 
Similar results were found with other nutrients with small mean differences, with no obvious systematic bias but wide limits of agreement between the two methods. Principal Findings This study is the first to compare the energy and nutrient intakes using a mobile phone food diary app with 24-hour dietary recall as reference measure using an Australian food composition database. Mean intakes of energy and all nutrients were similar in both methods, with no consistently higher or lower values for either method. Correlation coefficients were moderate to strong ranging from 0.55 to 0.78. Cross-classification into quartiles revealed good agreement for energy and all nutrients. In addition Bland-Altman plots showed robust agreement between the e-DIA and 24-hour dietary recalls for energy and all nutrients, without bias and with most data points located within two standard deviations of the mean. The wide limits of agreement suggest that e-DIA is unsuitable to accurately estimate intake at an individual level. However, collectively the results suggest the potential of e-DIA as an assessment tool for dietary analysis at the population level. These findings are consistent with those of other researchers. Carter et al recently validated a mobile phone app (My Meal Mate) designed to support weight loss [20]. Mean intakes of energy, protein, carbohydrate, and fat were similar using 2-day 24-hour dietary recalls and 7-day electronic food records. Pearson correlations of 0.69 to 0.86 were found for energy and macronutrients, and Bland-Altman analysis of energy intake showed minimal bias but wide limits of agreement between the methods. Comparisons between 24-hour dietary recalls and food records collected using personal digital assistants (PDAs) also produced consistent results with no significant differences between mean intakes of energy, protein, carbohydrate, or fat [21,22]; moderate to strong Pearson correlations (1-day PDA vs 24-hour dietary recalls r=0.51-0.80, 7-day PDA vs 24-hour dietary recall r=0.72-0.85) [21]; and minimal bias as demonstrated using Bland-Altman plots [21,22]. Mobile phones are also being used for digital imaging to record food and beverage intake [8,9,[23][24][25][26][27][28][29][30]. The advantage of these over the digital entry food record is that the respondent burden is considerably reduced with only images recorded and no searching and selection of foods from display lists. With a fiducial marker or reference card, the researcher uses manual or automated methods to assign the food identity and portion size to the image before automatic nutrient analysis. Examples of the use of images with human input into the assignment of foods and quantities include the remote food photography method and the Nutricam dietary assessment method [8,[23][24][25][26]. Both have been shown to have validity in a free living situation using doubly labelled water to measure energy intake [8,24]. These methods are semiautomated and still require humans to correctly identify foods and amounts. The mobile device food record is an automated system for food identification and volume estimation and offers the recorder the opportunity to see the classifications and correct mislabelled food [9,[27][28][29][30]. Further development of the process includes increasing correct food recognition and decreasing errors in volume estimation with the automated method. 
Completely automated systems using digital images provide obvious advantages over digital recording by easing both respondent and researcher burden. Limitations and Strengths Although the use of 24-hour dietary recall was the preferred choice of reference method, it introduces several limitations to the study design. Reliance on memory is a well-documented limitation with participants likely to forget foods consumed the previous day, although the use of the multiple pass method and portion size aids are designed to minimize the impact of errors related to memory. As the 24-hour dietary recall was administered on days that the participants digitally recorded their food records into the e-DIA, there was potential for the recording process to have improved their recall of food and beverages. However, it should be noted that records were deleted from the app at midnight and recalls were conducted up to 22 hours after their deletion. As both methods relied on self-report, more objective measures of dietary intake such as biomarkers are needed to further validate the e-DIA. Compared with the 2011-2012 Australian Health Survey [31], energy intakes were 8% and 11% lower for males and females, respectively, indicating some degree of under-reporting. This was primarily due to lower reported intakes of carbohydrates (especially sugar) and alcohol in the validation study. University students are a unique population group and are not representative of all young adults, as they are skewed towards higher socioeconomic backgrounds and may have higher digital and computer literacy [32]. The use of the e-DIA also has limitations, including the burden of recording foods prospectively for a prolonged period of time and trouble navigating within the e-DIA tool itself. When entering a food into the e-DIA, participants were presented with a long list of food options that was challenging to navigate. However, the presence of the favorites function relieves some of this burden by prioritizing the food options according to individual preferences. Commercial apps may have shorter lists but this is likely to result in less accurate food records and resulting nutrient intakes. One of the main strengths of the study is the ability of e-DIA to collect dietary intake data without alerting the participants to their ongoing caloric intake. The app is linked to the Australian national food composition database compiled by Food Standards Australia New Zealand which consists of over 4500 foods [12]. This greatly reduced the need for coding although careful checking of all foods and beverages recorded each day may be useful to obtain reliable nutrient outputs. Another advantage of using the national food composition database is the inclusion of a large range of macronutrients, micronutrients, and other food components. The app is used to record food and beverages consumed in real time and therefore does not rely on memory. Conclusions This validation study demonstrated good agreement between the e-DIA and 24-hour dietary recalls at a group level, and no evidence of bias for energy, macro-, and micronutrients was noted. With the growing popularity of mobile phones among young adults this method of collecting dietary intake is highly acceptable in this population group. Future studies should explore the validity of the e-DIA in larger, more representative samples and employ external biomarkers to reflect usual intakes. Studies assessing the e-DIA's sensitivity to changes in dietary intake are also required. 
This would confirm its value as a tool to monitor dietary intake in intervention studies in public health and clinical trials.
2017-04-19T18:22:09.055Z
2015-10-27T00:00:00.000
{ "year": 2015, "sha1": "17fe0e746e0636ebc1a77ac39564fab546842a3e", "oa_license": "CCBY", "oa_url": "https://mhealth.jmir.org/2015/4/e98/PDF", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b76efb87657103f3b46860af512142ad0e113f75", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1368460
pes2o/s2orc
v3-fos-license
Autologous Fat Grafting in the Treatment of Painful Postsurgical Scar of the Oral Mucosa

Background. Persistent pain as a consequence of surgical treatment has been reported for several common surgical procedures and represents a clinical problem of great magnitude. Material and Methods. We describe the case of a 47-year-old female who presented a retractile scar that adhered to deep planes at the upper right of the vestibule due to surgical removal of maxillary exostosis, which determined important pain symptoms extending to the right shoulder during both chewing and rest. We subsequently treated her with autologous fat grafting according to Coleman's technique. Results. Clinical assessments were performed at 5 and 14 days, 1, 3, and 6 months, and 1 year after the surgical procedure. We observed a progressive release of scar retraction together with an important improvement of pain symptoms. Conclusion. The case described widens the possible application of autologous fat grafting to a new anatomical site, the buccal vestibule, and in one specific clinical setting, confirming its promising biological effects.

Introduction

Exostoses are nodular protuberances of mature bone whose precise designation depends on anatomic location. Buccal exostoses occur along the buccal aspect of the maxilla or mandible, usually in the premolar and molar areas. Palatal exostoses are found on the palatal aspect of the maxilla, and the most common location is the tuberosity area. These entities could become symptomatic when they reach such a volume as to interfere with feeding and speaking or to alter facial mimic and contour. The histologic features of exostoses are described as hyperplastic bone, consisting of mature cortical and trabecular bone [1]. Their etiology is still under debate. When symptomatic, an exostosis allows the possibility of surgical removal, which can be related to different drawbacks. Postsurgical scar retraction should be mentioned, together with pain sensation during chewing, which can compromise dramatically the quality of life of patients. Up to now no support therapy has been described to treat these weakening symptoms in this group of patients. Moving from these evidences, we decided to adopt autologous fat grafting for the treatment of postsurgical scar retraction and pain sensation related to exostosis surgical removal, in order to verify its possible beneficial effects in this new approach.

Case Presentation

A 47-year-old female had undergone surgical removal of maxillary exostosis in 1999 and, related to the surgery, presented a scar adhered to deep planes at the upper right of the vestibule which determined important pain symptoms extending to the right shoulder both during chewing and at rest, interfering with feeding and speaking. Moreover, the patient reported chronic use of analgesic medication (ibuprofen 600 mg) to control the pain sensation. She did not take any other medications. Clinical history revealed a diagnosis of Sjögren syndrome and corrective surgery of cleft palate in 1996; no other pathologic conditions were present. Clinical examination showed a postsurgical scar area of about 2 cm in length, retracted and adherent to deeper planes just at the upper right of the vestibule, which caused pain at digital pressure. After collection of both clinical history and examination we proposed to our patient surgical scar tissue correction with autologous fat grafting. Our patient was informed about the surgical procedure, in particular regarding the unpredictable reabsorption rate of fat grafting and the clinical results in this particular case.
Both informed consent form and preoperative images were collected ( Figure 1). After routine preoperative examination and clinical assessment, the patient underwent liposuction under sedation and local anesthesia. The adipose tissue was harvested from the right flank, which is an easy accessible and abundant reservoir of adipose tissue. Following Coleman's procedure [11], the obtained fat was processed by centrifugation at 3000 rpm for 3 minutes. The fat graft was injected using an 18-gauge angiographic needle with a snap-on wing (Cordis, a Johnson & Johnson Company, NV, Roden, Netherlands) under mucous membrane in the scar area at the upper right vestibule (Figure 2). A total of about 5 cc of adipose tissue was injected. Following surgery pressure dressing was applied over donor site for 5 days and antibiotic therapy was recommended for 5 days (cefixime 400 mg 1 pill per day). Clinical assessment was performed after surgical procedure at 5 and 14 days, 1, 3, and 6 months, and 1 year. During the clinical meeting we observed progressive release of scar retraction and quality improvement measured with POSAS scale [12], together with an important decrease of pain symptoms which lasts for all the postoperative followup controls (Figure 3). After 3 months from surgical operation we performed an MRI of the facial skeleton and we appreciated a soft tissue volume increase in the area of previous fat grafting ( Figure 4). No local or systemic signs of infection were found, and no complications occurred. Moreover the patient declared that she stopped analgesic drug assumption immediately after operation. Discussion Exostoses have been described as nodular protuberances of mature intraoral bone. Palatal exostoses revealed a prevalence of 30% while buccal exostoses presented a prevalence rate of 0.9 per 1000 persons [13]. These clinical entities could become invalidating for patients especially when they reach such a volume to interfere with feeding and speaking or to alter facial mimic and contour. In these advanced cases surgical treatment is needed to remove bony prominences and to restore oral function and contour. Nonetheless also surgical removal could be related to different drawbacks. In fact postsurgical scar retraction could determine chronic pain in oral region, in particular during chewing, with an overall reduction of patients' quality of life. No support therapy has been described until now for these weakening symptoms in this group of patients who, most of the time, are analgesic medications dependent. Persistent pain as a consequence of surgical treatment has been reported for several common surgical procedures and represents a clinical problem of great magnitude. Our team adopts autologous fat grafting in the treatment of multiple pathological status beyond all scar treatment and pain syndromes. Supported by our promising results we approached with positive results also chronic headaches of cervical origin, both chronic cervicogenic and occipital neuralgia [10]. Moving from these evidences we consider autologous fat grafting as an innovative solution for pain syndromes related to scar retraction although the exact mechanism of action is still unclear. Laboratory findings have demonstrated the presence of mesenchymal multipotent stem cells in the adipocyte cell fraction of fat graft [14]. 
We hypothesize that autologous fat graft due to the regenerative role of the stromal fraction of adipose tissue grafted could promote a reorganization of fibrotic tissue together with soft tissue regeneration, leading to scar release and reducing nerve excitatory pattern with consequent positive clinical results on pain control. In addition to that, surgery-associated tissue injury leads to an inflammatory reaction accompanied by increased production of proinflammatory cytokines, which can induce peripheral and central sensitization with a failed nociception system, leading to pain augmentation. Mesenchymal stem cells and adipose-derived stem cells could efficiently reduce T-cell activation inhibiting the proliferation of CD4 and CD8 T lymphocytes [15]. It could be hypothesized that fat grafting could inhibit inflammation and determine pain reduction and analgesia. The case reported showed a positive final outcome. Buccal vestibule scar retraction showed a release after one procedure of autologous fat grafting and our patient referred to clinical remission of pain which did not need any analgesic treatment with a total follow-up of one year. The case described widens the possible application of autologous fat grafting on a new anatomical site as buccal vestibule and in one specific clinical setting confirming its promising biological effects. In fact our experience could open a new route of regenerative surgery addressing a mucosal tissue as buccal one.
2017-10-30T18:14:37.032Z
2015-05-12T00:00:00.000
{ "year": 2015, "sha1": "315eaead876964fdf281a07dd8689ebd9c9acd00", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/crim/2015/842854.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "934c1187994e54fd3a8487b53876720719a9c31a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54633312
pes2o/s2orc
v3-fos-license
Determination of Nonlinear Creep Parameters for Hereditary Materials This work proposes an effective algorithm for description of nonlinear deformation of hereditary materials based on Rabotnov’s method of isochronous creep curves. The notions have been introduced for experimental and model rheological parameters and similarity coefficients of isochronous curves. It has been shown how using them, one can find instantaneous strains at various stress levels for description of nonlinear deformation of hereditary materials at creep. Relevant equations have been determined from the nonlinear integral equation of Yu. N. Rabotnov for the application cases of Rabotnov’s fractional exponential kernel and Abel’s kernel for nonlinear deformation of hereditary materials at creep. The improved methods have been given for determination of creep parameters α, ε0, δ, β, and λ. By processing and using test results for material Nylon 6 and glass-reinforced plastic TC 8/3-250, the process has been shown for sequential implementation of the developed methods for description of linear and nonlinear deformation of these materials at creep. From the results of the experimental investigation performed by the authors of this paper, it has been determined that fine-grained, dense asphalt concrete at the temperature of 20± 2 ◦C and stresses up to 0.183 MPa at direct tension is deformed considerably in a nonlinear way. It has been shown in an illustrative way by construction of isochronous creep curves at various load durations and curves of experimental rheological parameter at various stresses. Nonlinear deformation of asphalt concrete at creep is adequately described by the proposed methods. Introduction Many natural (soils, rocks, wood, natural asphalt) and artificial (metals, their alloys, polymers, concretes, composites) materials, depending on temperature and load level, to one extent or another show their viscoelastic properties.Currently there are sufficiently developed viscoelastic theory and methods [1][2][3][4][5][6] which allow the determination and description of viscoelastic properties of the materials.One can distinguish linear and nonlinear viscoelasticity [4][5][6][7] in viscoelastic theory, as well as in elastic theory [8,9].One can say that currently the theory and methods of linear viscoelasticity have been developed sufficiently.In spite of the fact that as far back as 1913 V. Volterra [10] proposed to describe nonlinear viscoelasticity by binary integral equation, to date, nonlinear viscoelasticity theory and methods are on the stage of development and currently calculations for strength, stability, and longevity in many branches of engineering activity are made without consideration or with poor consideration of viscoelastic properties for the materials and elements of structures. One simple but efficient means for consideration of deformation nonlinearity for hereditary materials was proposed in 1948 by Yu.N. Rabotnov [11].It was based on the similarity of isochronous creep curves of the materials.Meanwhile, the process of creep strain is described by the equation of Boltzmann-Volterra for linear viscoelastic materials, but strain in the left part of the equation has been replaced by the so-called "curve of instantaneous deformation", which is determined experimentally.Yu.N. 
Rabotnov, as the author of the method, considers that the curve of instantaneous strain is a kind of ideal imaginary curve, which is impossible to obtain in reality, as in real conditions the strain rate is always a finite value [12,13]. It can be obtained from isochronous creep curves at finite values of time, considering them similar.

It is known that the selection of the kernel of the integral equation and the determination of its parameters is one of the most responsible actions in the description of mechanical behavior of viscoelastic materials. As it has been said in our previous work [14], the most universal is the creep kernel in the form of the fractional exponential function of Yu. N. Rabotnov [4][5][6][11]. The fractional exponential function is well studied and a special table [15] has been developed for simplification of calculations.

This work proposes an effective algorithm for description of nonlinear deformation of hereditary materials based on Rabotnov's method of isochronous creep curves, and also the methods for determination of creep kernel parameters, developed in our previous work [14], have been improved.

The term "hereditary" was introduced by V. Volterra and means delay of elastic strain in time [13]. A. S. Lodge explains that the materials described by a creep kernel depending on the difference of the argument, K(t − τ), are considered "hereditary" [25].

Short Description of Method

An important matter in the analysis of mechanical behavior of any material is the clarification of its deformation character: linearity or nonlinearity, which is usually determined by construction of the strain dependence on stress according to the results of experimental tests. A more efficient and illustrative method is known for determination of the deformation nonlinearity degree of materials, proposed by Yu. N. Rabotnov [4][5][6][11]. In accordance with this method, the so-called isochronous creep curves are constructed according to the test results of samples of the materials for creep at some constant stresses.

Figure 1 represents creep curves for a material at various constant stresses. Let us draw several, for example, four vertical lines, corresponding to time moments t1, t2, t3, and t4. Each vertical line crosses four creep curves corresponding to stresses σ1, σ2, σ3, and σ4. Drawing horizontal lines from the specified cross points, one can find four strain values ε1(t1), ε2(t1), ε3(t1), and ε4(t1) on the vertical axis, corresponding to time t1. According to the known four values of stress and four values of strain, one can construct an isochronous creep curve corresponding to time moment t1. Making similar actions, one can construct isochronous creep curves for the material at different load durations (Figure 2). Isochronous curves allow the visual evaluation of the character (linearity or nonlinearity) and the nonlinearity degree of deformation for a material. It is often in practice that isochronous curves are similar. The similarity property provides an opportunity to obtain all other isochronous creep curves of a material, if one of them is known.
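To make the construction just described concrete, the following minimal sketch samples a family of creep curves at several fixed times and collects the strain-versus-stress pairs that form the isochronous curves. The creep curves below are synthetic placeholders, not the data of Figure 1, and the similarity remark at the end is only indicative.

```python
# Sketch of the isochronous-curve construction: sample each creep curve
# eps(t; sigma) at fixed times and collect strain versus stress at each time.
import numpy as np

stresses = np.array([1.0, 2.0, 3.0, 4.0])        # hypothetical constant stresses
times = np.linspace(0.0, 100.0, 201)             # common time grid
# synthetic creep curves, one row per stress, mildly nonlinear in stress
creep = np.array([0.05 * s ** 1.3 * (1.0 + 0.4 * times ** 0.6) for s in stresses])

isochrones = {}
for t_iso in (10.0, 30.0, 60.0, 100.0):
    j = int(np.argmin(np.abs(times - t_iso)))    # nearest point on the time grid
    isochrones[t_iso] = creep[:, j]              # strain versus stress at time t_iso

# each isochrones[t], taken together with `stresses`, is one isochronous curve;
# "similarity" means isochrones[t2] is approximately a constant multiple of
# isochrones[t1] over the whole stress range
for t_iso, strain in isochrones.items():
    print(t_iso, np.round(strain, 3))
```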
Nonlinear Equation and Creep Kernel

Considering the similarity property of isochronous creep curves, Yu. N. Rabotnov proposed the following nonlinear integral equation to describe the process of nonlinear deformation of hereditary materials [4][5][6][11] (Equation (1)), where ε(t) is the strain at time moment t; the left part of Equation (1) represents by itself the so-called "instantaneous deformation curve", determination of which will be given below.

Methods of Determination of Linear Creep Parameters for Hereditary Materials

Nearly always one can accept that the relationship between stress and strain in materials is a linear one at weak stresses. Therefore, one can use the creep curve of a material at weak stress and apply the linear viscoelasticity approach [14] for determination of creep parameters.

Parameters α, ε0, and δ

As it is known, creep curves of materials, depending on stress level and temperature, can have two or three characteristic strain sites [26][27][28]: site I with unstabilized creep, site II with stabilized creep, and site III of accelerating creep. Then the method is proposed, according to which it will be sufficient to consider only site I of the creep curve for determination of the creep parameters α, ε0, and δ of a material. Having accepted n = 0, from Equation (3) we will find Equation (4). One can see that the right part of the obtained equation contains the well-known Abel's function with unknown parameters α and δ. Having divided both parts of Equation (4) by the instantaneous elasticity modulus E0, for the case of a linearly deformed material we will obtain Equation (5), where ε0 is the conditionally instantaneous strain. Equation (5) contains three unknown parameters ε0, α, and δ. As it has been said above, the right part contains a known Abel's function with the parameter of singularity α, which has a value within the interval (0, 1). Then parameter α will be considered as known, and the unknown parameters ε0 and δ will be determined with the use of the least squares method. According to the least squares method, the values of parameters ε0 and δ should meet the condition S(ε0, δ) = Σ(i = 1..m) [ε(t_i) − ε_ei]² → min (6), where S(ε0, δ) is the sum of squares of deviations, ε_ei are the values of creep strain determined experimentally, and m is the number of creep strain values. Calculating under the formula the deviations of the calculated values of creep strain from those obtained experimentally, one can select optimum values for parameters α, ε0, and δ, providing the least value of ∆εi.

Parameters β and λ

Having divided both parts of Equation (3) by the instantaneous elasticity modulus E0, for the case of linear strain of the material we obtain Equation (10). Let us rewrite Equation (10) in the following form, inserting the corresponding notation. Using the least squares method, we write the extremum condition, similar to condition (6). The following expressions are obtained for determination of parameters β and λ from the two equations composed from these expressions. The value of parameter β is determined from Equation (14). If the equation has an unambiguous solution, then it will be the target value for parameter β > 0. It is obvious that during determination of the values of parameters β and λ from Equations (14) and (15), the previously calculated values are used for parameters α and ε0.
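A compact sketch of the α, ε0, δ identification step follows. It assumes the Abel-kernel creep law takes the form ε(t) = ε0[1 + δ t^(1−α)/(1−α)]; this is an assumption, since the displayed equations are not reproduced in the text, but it is consistent with the asphalt-concrete numbers quoted later in the paper (α = 0.3 and δ = 0.0138 give a rheological factor of about 2.67 at 570 s). For a fixed trial α the model is linear in ε0 and ε0δ, so ordinary least squares applies; α itself is chosen by a simple scan over (0, 1).

```python
# Sketch of the alpha, eps0, delta identification step, assuming the Abel-kernel
# creep law eps(t) = eps0 * (1 + delta/(1 - alpha) * t**(1 - alpha)). Synthetic data.
import numpy as np

t = np.array([30.0, 90.0, 150.0, 270.0, 420.0, 570.0])          # synthetic times, s
eps = 0.058 * (1.0 + 0.0138 / 0.7 * t ** 0.7)                   # synthetic strains, %

def fit_abel(t, eps, alphas=np.linspace(0.05, 0.95, 91)):
    best = None
    for a in alphas:
        basis = np.column_stack([np.ones_like(t), t ** (1.0 - a) / (1.0 - a)])
        coef, *_ = np.linalg.lstsq(basis, eps, rcond=None)       # [eps0, eps0*delta]
        sse = float(np.sum((basis @ coef - eps) ** 2))
        if best is None or sse < best[0]:
            best = (sse, float(a), float(coef[0]), float(coef[1] / coef[0]))
    _, alpha, eps0, delta = best
    return alpha, eps0, delta

print(fit_abel(t, eps))   # recovers alpha ~ 0.3, eps0 ~ 0.058, delta ~ 0.0138 here
```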
As it is known, one of the factors which has limited the wide use of the creep kernel with Rabotnov's fractional exponential function has been the fact that it was not an easy task to determine its parameters [22]. A special table has been developed [15], and with its use the fractional exponential function has been calculated for the argument x = β t^(1−α) at x ≤ 4. At x > 4 it is recommended to use the asymptotic formulas of B. D. Anin [29]. Calculations have shown that the infinite series in the right part of Equation (10) converge poorly at small values of the argument x < 4. Introduction of special notation and consideration in series (12) and (16) of not the time t_i but the relative time (t_i/t_k) have essentially increased the convergence of the calculations and allowed quick obtaining of more accurate values of the fractional exponential function at practically any values of the argument.

Algorithm for Description of Nonlinear Deformation of Hereditary Materials

This section of the paper proposes a new method for description of nonlinear deformation of hereditary materials. This method is realized in several steps and performed in the following sequence:

a. A new parameter is introduced, called the experimental rheological parameter, k_e(t) = ε_e(t)/ε_0^e (17), where ε_e(t) is the value of creep strain at time moment t, determined experimentally, and ε_0^e is the instantaneous strain at time moment t = 0, determined experimentally. The experimental rheological parameter k_e(t) represents by itself a time function normalized with respect to the experimental instantaneous strain. It has a value equal to 1 at t = 0 and more than 1 at time values t > 0. It shows how many times the experimental values of creep strain at different time values exceed the instantaneous strain, which has also been obtained experimentally.

b. The values of the experimental rheological parameter k_e(t) are calculated at different time values t and stresses σ. The graphs of k_e(t) at various stresses σ are constructed according to the calculation results. For a physically linear material, all curves k_e(t) at various stresses coincide (for example, as in the work [30]), i.e., we have only one mean curve k_e(t) for all stresses (Figure 3a). For a physically nonlinear material, there are separate curves k_e(t) for the various stresses (Figure 3b); coincidence for some of them is not excluded. As it has been said above, the construction of isochronous creep curves is an illustrative way for the evaluation of the physical nonlinearity (linearity) of a material. Therefore, we have two ways of visual evaluation of the nonlinearity (linearity) of deformation of materials: first, through the experimental rheological parameter k_e(t); second, by construction of isochronous creep curves.

c. Using Rabotnov's fractional exponential function (3) and Abel's function (4), we approximate the experimental values of creep strain for the material obtained at the minimum stress σ1. Meanwhile, we find the corresponding values of the creep parameters α, ε0, δ, β, and λ according to the abovementioned methods.
d. In the case of creep, i.e., at σ = const, from integral Equation (1) we obtain the creep equation, the right part of which is denoted through k_m(t) and is named the model (theoretical or calculation) rheological parameter. As it is seen from Equation (19), the model rheological parameter k_m(t) depends only on time t. Similar to the experimental rheological parameter k_e(t), the model rheological parameter has a value equal to 1 at t = 0 and more than 1 at t > 0. It shows how many times the calculated values of creep strain exceed the instantaneous strain determined by calculations. Whereas the experimental rheological parameter can be determined for all stresses, the model rheological parameter is determined only for the minimum stress σ1.

e. According to Yu. N. Rabotnov's isochronous creep curves method, we find the calculated values of instantaneous strain at various stresses under the formula ε_0^m(σ) = ε_e(t_s)/k_m(t_s) (20), where ε_e(t_s) is the experimental value of creep of the material at a fixed time t_s at stress σ, and k_m(t_s) is the value of the model rheological parameter at the fixed time t_s (the similarity coefficient of isochronous creep curves according to Yu. N. Rabotnov).

f. We calculate the theoretical values of creep strain at various stresses under the formula ε_m(t) = ε_0^m(σ) k_m(t) (21), where ε_0^m(σ) is the calculated value of instantaneous strain at stress σ, obtained under formula (20).

g. We compare the calculated ε_m(t) and experimental ε_e(t) values of creep strain at various stresses σ. If the comparison has shown unsatisfactory accuracy, we determine again the values of instantaneous strain at various stresses under the formula ε_0^m(σ) = (1/N) Σ(s = 1..N) ε_e(t_s)/k_m(t_s) (22), where ε_0^m(σ) is a mean value of instantaneous strain at stress σ, obtained at some fixed times t_s, and N is the total number of fixed times t_s. Then the calculated values of creep strain at various stresses are obtained under formula (23).

Material Nylon 6

The works [5,20] contain test results for material Nylon 6 at stresses 5, 10, and 15 MPa. The duration of the experiment was 100 h for all stresses. Values of creep strain for material Nylon 6 at the specified stresses, obtained by processing of the experimental results, are represented in Table 1. It has been also determined from the data of the mentioned works that the instantaneous deformation curve of material Nylon 6 is satisfactorily approximated by a power function of stress with exponent 1.3334 (Equation (24)). Approximation of the creep strain values at the minimum stress (σ = 5 MPa) using Rabotnov's fractional exponential kernel gave the following values of the creep parameters: α = 0.85, ε0 = 0.1682, β = 0.18, and λ = 1.6682.
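As a quick consistency check of the power-law instantaneous-strain curve of Equation (24): the prefactor of the power law is not legible in the extracted text, but it can be recovered from the 5 MPa point, and the resulting predictions then agree, to rounding, with the 10 and 15 MPa instantaneous strains quoted in the next paragraph.

```python
# Consistency check of the Nylon 6 instantaneous-strain power law (Equation (24)),
# eps0(sigma) = a * sigma**1.3334. The prefactor a is inferred from the 5 MPa point.
a = 0.1682 / 5.0 ** 1.3334          # roughly 0.0197 %/MPa^1.3334 (inferred, not quoted)
for sigma in (5.0, 10.0, 15.0):
    print(sigma, "MPa ->", round(a * sigma ** 1.3334, 4), "%")
# -> 0.1682 %, ~0.4238 % and ~0.7279 %, i.e. the values used in the next paragraph
```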
Then the creep curve of material Nylon 6 at stress σ = 5 MPa will be described by the corresponding equation. From Equation (24) at σ = 5 MPa we find that ε_0^m = 0.1682%. Considering this fact, we determine the model rheological parameter. Now one can calculate the values of creep strain at stresses 10 and 15 MPa under formula (21). For these stresses, the values of instantaneous strain, obtained under formula (24), are equal to 0.4238% and 0.7277%, respectively. Experimental and calculated values of creep strain for material Nylon 6 at all three stresses are shown in Figure 4. As it is seen, the convergence of the calculated strains to the experimental ones is good.

Glass-Reinforced Plastic TC 8/3-250

Samples of glass-reinforced plastic TC 8/3-250 have been tested for creep at the temperature of 23.5 ± 2 °C in the works [21,31]. As the glass-reinforced plastic is an anisotropic material, the samples have been cut along the textile warp (Θ = 90°) and at an angle of Θ = 45° to it.

Θ = 90°

Values of creep strain for glass-reinforced plastic (Θ = 90°) at four stresses are represented in Table 2. Test results have shown that the sample of glass-reinforced plastic, cut at an angle of Θ = 90° to the textile warp, is deformed linearly at the given temperature and up to the stress of 349 MPa. Instantaneous strain has been satisfactorily approximated by the corresponding equation. Creep strains at the minimum stress, equal to 104.7 MPa, have been approximated by Rabotnov's fractional exponential kernel. The creep parameters have the following values: α = 0.8, ε0 = 0.3501, β = 0.3, and λ = 0.0462. The experimental and calculated values of creep strain for glass-reinforced plastic (Θ = 90°) at all four stresses are shown in Figure 5 for comparison. As it is seen, the convergence of the calculated creep strains to the experimental ones is good.

Θ = 45°

Values of creep strain for glass-reinforced plastic (Θ = 45°) at six stresses are represented in Table 3. According to the test results, it has been determined that the samples of glass-reinforced plastic, cut at the angle of Θ = 45° to the textile warp, have been deformed nonlinearly at the tested temperature and applied stresses. Processing of the test results allowed determination of the following values of instantaneous strain for stresses 20.3 MPa, 40.6 MPa, 60.9 MPa, 81.2 MPa, 101.5 MPa, and 121.8 MPa: 0.1093%, 0.1982%, 0.5911%, 1.3872%, 2.4874%, and 3.9975%, respectively. The model rheological parameter has the corresponding form. Using the abovementioned values of instantaneous strain, one can calculate the values of creep strain at all other stresses under formula (21). Figure 6 shows the experimental and calculated values of creep strain for glass-reinforced plastic (Θ = 45°) at all considered stresses. It is seen that the convergence of the experimental strains to the calculated ones is good.
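A quick check, using only the instantaneous strains quoted above for the 45-degree samples, confirms that the deformation is strongly nonlinear: for a linear material the ratio ε0/σ would be constant across stresses, whereas here it grows roughly six-fold between the lowest and the highest stress.

```python
# Nonlinearity check for glass-reinforced plastic cut at 45 degrees, using the
# instantaneous strains quoted in the text: eps0/sigma is far from constant.
sigma = [20.3, 40.6, 60.9, 81.2, 101.5, 121.8]              # MPa
eps0  = [0.1093, 0.1982, 0.5911, 1.3872, 2.4874, 3.9975]    # %
for s, e in zip(sigma, eps0):
    print(s, "MPa ->", round(e / s * 1000, 2), "x 1e-3 %/MPa")
# the ratio rises from about 5.4 to about 32.8 (it is not monotone at the two
# lowest stresses, but the overall growth is roughly six-fold)
```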
In this paper, bitumen of grade 100-130 has been used, which meets the requirements of the Kazakhstan standard ST RK 1373-2013 [34]. The bitumen grade on Superpave is PG 64-40 [35]. Bitumen has been produced by the Pavlodar processing plant from crude oil of Western Siberia (Russia) by the direct oxidation method. The bitumen content of grade 100-130 in the asphalt concrete is 4.8% by weight of dry mineral material.

Samples of the hot asphalt concrete in the form of a rectangular prism with the length of 150 mm, width of 50 mm, and height of 50 mm were manufactured in the following way. Firstly, the asphalt concrete samples were prepared in the form of a square slab by means of the Cooper compactor (UK, model CRT-RC2S) according to the standard EN 12697-33 [36]. Then the samples were cut from the asphalt concrete slabs in the form of a prism. Deviations in sizes of the beams did not exceed 2 mm.

Tests of the asphalt concrete samples on creep at the temperature of 22 ± 2 °C were carried out according to the direct tensile scheme in specially contracted equipment. The detailed information regarding characteristics of the bitumen, asphalt concrete, and equipment can be found in the work [28].

The stresses were constant during the tests and equal to 0.041, 0.074, 0.111, 0.148, and 0.183 MPa. The durations of the tests have been accepted as similar and equal to 570 s. Ten parallel asphalt concrete samples have been tested at each constant stress. Mean values of creep strain for the tested 10 asphalt concrete samples are represented in Table 4.

Creep curves of the asphalt concrete at various stresses, constructed according to the data of Table 4, are represented in Figure 7. As it is seen, the creep curves represent by themselves lines which change monotonously with the increase of the stress duration. The value of instantaneous strain (at t = 0) and the slope of the creep curves increase with the increase of stress. Then, using the values of stresses and strains which correspond to various load durations, one can construct the isochronous creep curves (Figure 8).
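Alongside the isochronous curves of Figure 8, the experimental rheological parameter defined earlier (formula (17)) gives a second linearity diagnostic, which is used in the next paragraph: k_e(t) = ε_e(t)/ε_e(0) is computed separately for each stress, and the curves collapse onto one another only for a linear material. The sketch below illustrates the computation on a made-up table with the same layout as Table 4; it does not use the paper's data.

```python
# Sketch of the k_e(t) linearity diagnostic of formula (17) on placeholder data.
import numpy as np

times = np.array([0.0, 90.0, 150.0, 270.0, 570.0])                    # s
creep = np.array([[0.022, 0.035, 0.040, 0.047, 0.059],                # % (hypothetical)
                  [0.066, 0.110, 0.128, 0.152, 0.176]])               # rows = stresses

k_e = creep / creep[:, [0]]          # each row divided by its instantaneous strain
spread = k_e.max(axis=0) - k_e.min(axis=0)
print(np.round(k_e, 3))
print("largest spread between stresses:", round(float(spread.max()), 3))  # ~0 only if linear
```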
At present, it is accepted in road science and practice that the mechanical properties of asphalt concrete depend on the value and duration of the applied load and on temperature, and that this dependence can be described sufficiently accurately using linear viscoelasticity theory [39-42]. The essential nonlinearity of asphalt concrete strain determined above shows that linear viscoelasticity theory is not applicable here and necessitates the use of nonlinear viscoelasticity theory.

Graphs of the experimental rheological parameter at various stresses, constructed from the calculation results under formula (17) using the data of Table 4, are shown in Figure 9. It is clearly seen that the experimental rheological parameters of the asphalt concrete at various stresses do not merge into one curve; most of them diverge considerably from each other, which also demonstrates the essential nonlinearity of the asphalt concrete strain at the considered temperatures and stresses.

The creep curve of the asphalt concrete at the minimum stress, equal in our case to 0.041 MPa, has been approximated by Equation (5) with the use of Abel's kernel, and the following values have been obtained for the creep parameters: α = 0.3, ε0 = 0.058, δ = 0.0138. The experimental and calculated creep curves of the asphalt concrete at this stress are compared in Figure 10, which clearly shows that the experimental creep curve is satisfactorily approximated by the equation of linear viscoelasticity with Abel's kernel.
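To make the role of these creep parameters concrete, the following short Python sketch reproduces the calculation numerically. It assumes, consistently with the values quoted in this section and the next (k_m(570) = 2.6745 and the instantaneous strains recovered at t = 570 s), that the model rheological parameter for Abel's kernel has the form k_m(t) = 1 + δ·t^(1-α)/(1-α) and that the creep strain follows ε(t) = ε_m0·k_m(t); the explicit forms of formulas (20), (21) and (32) are an assumption here, not taken verbatim from the paper.

import numpy as np

# Creep parameters fitted at the minimum stress (0.041 MPa), as quoted in the text
alpha, eps0, delta = 0.3, 0.058, 0.0138

def k_model(t):
    # Assumed model rheological parameter for Abel's kernel:
    # k_m(t) = 1 + delta * t**(1 - alpha) / (1 - alpha)
    return 1.0 + delta * t**(1.0 - alpha) / (1.0 - alpha)

def creep_strain(eps_inst, t):
    # Assumed creep law eps(t) = eps_inst * k_m(t) (formula (21) in the paper)
    return eps_inst * k_model(t)

t_star = 570.0                     # selected load duration, s
print(round(k_model(t_star), 4))   # ~2.675, close to the quoted k_m(570) = 2.6745

# Instantaneous strains at the higher stresses, recovered from the measured
# creep strains at t = 570 s (values in %, as quoted in the text)
eps_exp_570 = {0.074: 0.1759, 0.111: 0.2349, 0.148: 0.3632, 0.183: 0.5741}
for stress, eps_exp in eps_exp_570.items():
    eps_inst = eps_exp / k_model(t_star)   # assumed formula (20)
    print(stress, round(eps_inst, 4), round(creep_strain(eps_inst, 100.0), 4))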
Next, the values of the instantaneous strain ε_m0 at the other stresses need to be calculated from formula (20). For this purpose, we select one value of the load duration, for example 570 s, and obtain the value of the calculated rheological parameter from expression (32): for the selected load duration and the known values of the creep parameters α and δ we find that k_m(570) = 2.6745. From Table 4 at t = 570 s we have creep strains equal to 0.1759%, 0.2349%, 0.3632%, and 0.5741% for the stresses 0.074 MPa, 0.111 MPa, 0.148 MPa, and 0.183 MPa, respectively. According to formula (20), the corresponding values of the instantaneous strain of the asphalt concrete are 0.0658%, 0.0878%, 0.1358%, and 0.2147%.

Now the calculated values of creep strain for the asphalt concrete at all stresses except the minimum one can be determined from formula (21). It turned out that the calculated values of creep strain at the initial time moments at the maximum stress of 0.183 MPa do not converge sufficiently with the experimental ones. Therefore, we redetermined the value of the instantaneous strain at this stress from formula (22), including in the calculation the values of the experimental creep strain and the model rheological parameter at t = 90, 150, 270, and 570 s. The new value of the instantaneous strain at the stress of 0.183 MPa is 0.2037%. The calculated and experimental creep curves of the asphalt concrete at all stresses are compared in Figure 11. As can be seen, the calculated curves converge with sufficient accuracy with the experimental ones at all stresses.

Conclusions

• Using the schematic creep curves and isochronous curves, Yu.N. Rabotnov's isochronous creep curve method has been visually explained. The nonlinear integral equation proposed by Yu.N. Rabotnov for the mathematical description of the nonlinear deformation process of hereditary materials has been presented.
• Relevant equations have been derived from Rabotnov's nonlinear integral equation for the cases in which Rabotnov's fractional exponential kernel and Abel's kernel are applied to the nonlinear deformation of hereditary materials at creep. Improved methods have been given for the determination of the creep parameters α, ε0, δ, β, and λ.
• A detailed method has been developed for the description of the nonlinear deformation process of hereditary materials. The notions of experimental and model rheological parameters and of similarity coefficients of isochronous curves have been introduced, and it has been shown how, using them, one can find the instantaneous strains at various stress levels for the description of nonlinear deformation of hereditary materials at creep.
• By processing and using the test results for material Nylon 6 and glass-reinforced plastic TC 8/3-250, the sequential implementation of the developed methods for the description of linear and nonlinear deformation of these materials at creep has been demonstrated. The accuracy of the proposed methods is high.
• Experimental investigation performed by the authors of this paper has shown that fine-grained dense asphalt concrete at a temperature of 20 ± 2 °C and at stresses up to 0.183 MPa in direct tension deforms considerably nonlinearly. This has been demonstrated by constructing isochronous creep curves for various load durations and curves of the experimental rheological parameter at various stresses. The nonlinear deformation of the asphalt concrete at creep is adequately described by the proposed methods.

Figure 1. Creep curves of a material at various constant stresses.
Figure 2. Isochronous deformation curves of a material.
Figure 3. Curves of experimental rheological parameter K_e(t) at various stresses σ: (a) for a physically linear material; (b) for a physically nonlinear material.
Figure 7. Creep curves of the asphalt concrete at various stresses.
Figure 8. Isochronous creep curves of the asphalt concrete.
Figure 9. Graphs of experimental rheological parameter of the asphalt concrete at various stresses.
Figure 10. Experimental and calculated creep curves of the asphalt concrete at stress 0.041 MPa.
Figure 11. Experimental and calculated creep curves of the asphalt concrete at various stresses.
Table 1. Creep strain values of material Nylon 6.
Table 4. Values of creep strain for asphalt concrete.
2018-12-06T07:38:12.216Z
2018-05-10T00:00:00.000
{ "year": 2018, "sha1": "feecfb17ca88b8cb35533c622cca9f792a12ef86", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/8/5/760/pdf?version=1526538972", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "feecfb17ca88b8cb35533c622cca9f792a12ef86", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Physics" ], "extfieldsofstudy": [ "Engineering" ] }
85523116
pes2o/s2orc
v3-fos-license
Synthesis and characterization of alkyd resin based on soybean oil and glycerin using zirconium octoate as catalyst

A one-pot synthesis of alkyd resins based on soybean oil and glycerin, a renewable raw material, was performed using zirconium octoate (zirconium 2-ethylhexanoate) as a new catalyst. The alcoholysis reaction of soybean oil and glycerin was carried out in the absence of a nitrogen gas inlet in the presence of zirconium octoate. The alkyd resin was obtained from polycondensation of the alcoholysis products with phthalic anhydride at 250 °C. The structure was confirmed by FT-IR and H1-NMR spectroscopy. The flexibility, drying time, hardness, adhesion, impact resistance, gloss, and chemical resistance of the synthesized alkyd resins were investigated. The prepared resin was formulated in a white lacquer and its yellowing resistance was tested against commercial resins.

INTRODUCTION

Alkyd resins make up an important group of commercial synthetic polymers and are used widely in the coating and paint industry. They have become essential raw materials in coatings for metals, wood and wood-based materials such as furniture and floors, and for cement, cement-lime and gypsum plasters. The commonly used raw materials for the production of alkyd resins, besides plant oils like soybean oil and linseed oil, are synthetic pentaerythritol, glycerin and phthalic anhydride, which are toxic. 1 Alkyd resins can be defined briefly as polyesters modified with fatty acids, fatty oils or higher synthetic carboxylic acids. The molecules consist of a polyester backbone, which may be slightly to moderately branched depending on the raw materials selected, and fatty acid groups as side chains; excess (free) hydroxyl and residual carboxyl groups are also present. An alkyd resin consisting solely of oil, additional glycerol and orthophthalic acid, represented here in simplified form, was first produced industrially in 1930. Alkyd resins rapidly developed into the most important type of synthetic resin for coating chemistry. Even today they still account for over 40% of the world production of synthetic coating resins. The great success of alkyd resins can be attributed, in short, to an ideal combination of polyester and oil properties. The polyester component is responsible for physical (surface) drying and weather resistance (gloss retention, freedom from yellowing, etc.). The oil component is important for the suppleness of the films (internal plasticization) and above all for the capability of oxidative crosslinking.

The strengths of alkyd resins include self-curing at room temperature as a one-component system, a very broad compatibility and solubility spectrum, virtually unlimited variability of properties through appropriate choice of raw materials and synthesis conditions, good pigment wetting, attractive flow properties leading to good spreadability of paints, and relatively low cost. 2
The type of oil selected for alkyd production has a profound effect on the properties of the finished alkyd. The presence of fragments derived from unsaturated fatty acids in the polymer structure gives alkyds the ability to cure chemically and provides solubility in the solvents used to manufacture varnishes, as well as the ability to blend with other film-forming substances. Alkyd resins cure through intermolecular reactions of the unsaturated bonds contained in the fatty acid chains and the hydroxyl groups derived from polyols, under the influence of oxidative polymerization initiators. Chain polymerization processes occur in alkyd resins to yield crosslinking intermolecular C-C and C-O-C bonds. 3-4 However, because of the high number of unsaturated bonds in the fatty acid chains, not all of the unsaturated bonds react in the alkyd curing process. The presence of surplus unsaturated bonds in the cured coating leads to its yellowing following oxidation after exposure to atmospheric oxygen. Coating chemistry generally classifies alkyd resins on the basis of their oil content or their fatty acid content calculated as triglyceride content, and on the type of oil or fatty acid (linseed oil alkyd, soybean oil alkyd, etc.). The classification according to oil content (triglyceride content) is based on the following nomenclature: (1) less than 40% oil: short oil alkyd; (2) 40-60% oil: medium oil alkyd; (3) over 60 to 70% oil: long oil alkyd; and (4) over 70 to 85% oil: very long oil alkyd.

Long oil alkyds always dry by oxidation, and their high oil content provides good flow, high flexibility and easy manual processing, but they also dry relatively slowly unless conjugated oils or acids are used in the resin synthesis. Faster-drying long oil alkyd resins based on soybean oil are produced for use as the sole film former in decorator paints. 2 Polyester amide and alkyd resins 5 have applications in different fields such as paints, coatings, adhesives and binders for composites. Vegetable oils and other green renewable raw materials are common sources used in the organic coating industry, especially for the synthesis of alkyd resins, in preference to petroleum products because of the increased worldwide awareness of environmental concerns. 6 Alkyd resins have acquired great importance because of their economy, the availability of raw materials, biodegradability, durability, flexibility, good adhesion and ease of application. 7 Traditional oils such as soybean oil 8, linseed oil 9, sunflower oil 10 and coconut oil 11 are used in the synthesis of alkyd resins. This paper describes the synthesis of alkyd resins using zirconium octoate (zirconium 2-ethylhexanoate) as a new catalyst for the base-catalyzed alcoholysis of soybean oil. In the alcoholysis reaction, zirconium octoate prevents oxidation of the oil, so forming the monoglyceride with zirconium octoate as catalyst is beneficial for the color of the monoglyceride. In the present work, the physico-chemical and film performance properties of the synthesized alkyd resins were studied and compared with commercial resins.
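As a small illustration of the oil-content nomenclature given above, the following Python sketch classifies an alkyd by its oil (triglyceride) content; the thresholds are those quoted in the text, while the function itself is only illustrative.

def classify_alkyd(oil_content_percent: float) -> str:
    # Classify an alkyd resin by oil (triglyceride) content, per the
    # nomenclature quoted in the text.
    if oil_content_percent < 40:
        return "short oil alkyd"
    elif oil_content_percent <= 60:
        return "medium oil alkyd"
    elif oil_content_percent <= 70:
        return "long oil alkyd"
    elif oil_content_percent <= 85:
        return "very long oil alkyd"
    else:
        raise ValueError("oil content outside the classified range")

print(classify_alkyd(65))  # -> "long oil alkyd"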
Alkyd resin synthesis

Alkyd resins were synthesized using the monoglyceride fusion technique. 12 For this purpose, soybean oil, glycerin and lithium hydroxide (as catalyst) were charged into a four-neck round-bottom 1000 mL flask equipped with a mechanical stirrer, condenser and thermometer. The temperature was raised by slow heating to 290-300 °C. The monoglyceride is formed by the alcoholysis process as shown in Scheme 1. The mechanism of the base-catalyzed alcoholysis of soybean oil for monoglyceride formation is depicted in Scheme 2. 13-14 The completion of the monoglyceride stage was monitored by testing whether one volume of sample dissolved in two volumes of methanol to give a clear solution: the triglyceride does not dissolve in methanol, whereas the monoglyceride does. The reaction mixture was then cooled to 180 °C and phthalic anhydride was added to the monoglyceride mixture. The temperature was then raised to 240-250 °C and maintained in this range. The reaction was monitored by periodic determination of the acid value (AV) of the mixture until the desired value of 10-14 mg KOH/g of resin was reached (a numerical sketch of this calculation is given after the flexibility test description below). The alkyd resin synthesis is shown in Scheme 3. The constituents of the two resins, along with some necessary characteristics, are shown in Table 1.

This process was repeated with zirconium octoate as catalyst, but the temperature used in the preparation of the monoglyceride was 290-300 °C and no nitrogen gas inlet was used; nitrogen gas was used only during the addition of phthalic anhydride, to aid removal of the water formed in the reaction. When lithium hydroxide was used as catalyst, a nitrogen gas inlet was used in all stages of the reaction.

Scheme 2. The base-catalyzed mechanism of the alcoholysis reaction of vegetable oil.

The preparation of coating films

In order to study drying time, hardness, gloss and chemical resistance, the two alkyd resins were applied onto cleaned glass plates of 100 mm x 100 mm x 3 mm. The application was performed with an applicator with a slot width of 120 µm. The samples were prepared as 55% solutions of each resin in the described solvent system, with drying catalysts (cobalt octoate, zirconium octoate) added per 100 g of resin dissolved in mineral spirit. Adhesion and mechanical tests were carried out on coated steel substrates; cleaned steel plates were used for this purpose. All coated plates were kept under standard conditions.

The drying time test

The drying time was determined by the "set-to-touch" and "tack-free" stages at regular intervals of time. The test was performed according to ASTM D 1640 (1995).

The hardness test

The two resins were applied on glass plates and were allowed to dry for one week after application. The test was performed according to ASTM D 4366 (1997).

The adhesion test

The cross-hatch adhesion test was performed on the coated steel plates one week after application. The test was performed according to ASTM D 3359 (1997).

The flexibility test

The flexibility of the dried films was evaluated using the mandrel test according to ASTM D 1737. One week after application, the coated steel plate was placed over the 1/8 in mandrel with the uncoated side in contact with the mandrel and was bent 180 degrees around it. The bent plate was examined visually for cracks or loss of adhesion; if the film showed neither, it was accepted as passing the flexibility test.
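The acid value monitoring mentioned above can be illustrated with a short sketch based on the standard titration relation AV = 56.1·V·N/m, where 56.1 g/mol is the molar mass of KOH; this relation is general knowledge rather than a formula given by the authors, and the titration volume, normality and sample mass below are hypothetical.

KOH_MOLAR_MASS = 56.1  # g/mol

def acid_value(v_koh_ml: float, normality: float, sample_mass_g: float) -> float:
    # Acid value in mg KOH per g of resin from a standard KOH titration
    return v_koh_ml * normality * KOH_MOLAR_MASS / sample_mass_g

# Hypothetical titration of a 2.0 g resin sample with 0.1 N KOH
av = acid_value(v_koh_ml=4.5, normality=0.1, sample_mass_g=2.0)
print(round(av, 1))  # ~12.6 mg KOH/g, inside the 10-14 mg KOH/g target window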
The impact resistance test

The impact resistance was measured by the falling-weight impact procedure according to ASTM D 2794. The test was performed on the coated steel plates one week after film application.

The gloss test

The test was performed according to ASTM D 523 (1989). The sixty-degree gloss reading was taken on each of the films after application in white lacquer.

The chemical resistance test

The chemical resistance test was performed according to ASTM D 1647 (1996), D 870 (1997) and D 1308 (1998) on coated glass and steel plates one week after application. Alkali and acid resistance tests were carried out on glass substrates immersed vertically in separate beakers containing distilled water, dilute HCl (10%), aqueous NaCl (10%) and KOH (4 N) solutions at room temperature. The samples were checked by eye, and changes in the appearance of the films were recorded at regular intervals within 1 day.

The FT-IR and H1-NMR analysis

The structures of the two resins were confirmed by FT-IR and by H1-NMR spectra recorded on a 500 MHz NMR spectrometer using CDCl3 as the deuterated solvent.

The synthesis of alkyd resin

The alkyd resins were synthesized using lithium hydroxide and zirconium 2-ethylhexanoate by the alcoholysis process, in which soybean oil undergoes transesterification when heated with glycerol at 250 °C in the case of lithium hydroxide as catalyst. In the case of zirconium octoate, the alkyd resins were synthesized without a nitrogen gas inlet until the monoglyceride formed. 13-14 The soybean oil is transformed into the monoglyceride by heating at 290-300 °C without any further oxidation and without a nitrogen gas inlet. Esterification was then carried out by the addition of phthalic anhydride. The nitrogen gas inlet was used as an inert blanket when the catalyst was lithium hydroxide, to prevent oxidation of the oil. With zirconium octoate, the nitrogen gas inlet was used in the second stage, during the addition of phthalic anhydride, which facilitates the removal of the water produced during the condensation reaction. The reaction was controlled by measuring the acid value at different intervals of time and was stopped as soon as the desired level of acid value was attained.

The structural analysis of the alkyd resins (A, B)

The FT-IR spectra of alkyd resins (A, B) are shown in Figures 1 and 2, respectively. These spectra indicate the presence of the important ester linkage, olefinic double bonds and other characteristic peaks. Characteristic peaks in the FT-IR spectra of alkyd resins (A, B) are listed in Table 2.

Table 2. Characteristic peaks in FT-IR spectra of alkyd resins (A, B).

For alkyd resin (A), the C=O peak of the synthesized resin appears at 1735 cm-1, and the peak at 3472 cm-1 indicates the presence of hydroxyl groups. For alkyd resin (B), the polyesterification reaction is confirmed by FT-IR analysis: C=O is observed at 1737 cm-1, the peak at 3514 cm-1 indicates the presence of the O-H group, and the peak at 1590 cm-1 indicates C=C stretching from the unsaturation of the fatty acids and the aromatics.

The drying time for resins (A, B)

Table 3 gives the times required for the two resins to dry, where the first stage is set-to-touch and the hard drying of the resin is the tack-free time. The alkyd resin film dries by an autoxidation process
23due to intake of oxygen from atmosphere mechanistic studies of autoxidation drying process of coating based on alkyd resin have concentrated on methylene interrupted fatty acids 24 , but there are many other theortical and empirical findings that are valid for other compounds .it is likely that a carbonyl group kinetically facilitates insertion of a transition metal ion into α-carbon-carbon bond. 25The hydrogen atoms attached to carbon atoms in α-position of carbonyl group are more active(acidic)compared to ordinary alkyl hydrogen and similar that hydrogen atoms attached to ester group in alkyd resin , these hydrogen atoms are deprotonated by zirconium octoate salt, providing an enolate anion (Michael donor), the enolate anion then reacts an addition to olefin of the fatty acids emerged during the oxidative crosslinking of alkyd resin by Russell mechanism. 26-27 All these interesting possibilities speed up the network formation and result in lower drying times. The adhesion test for resin (A, B) Results are given in Table 3.The results showed a desired adhesion for resins (A, B) where two resin prepared from the same oil (soybean oil) and glycerin which give the same polyester component which responsible for adhesion. The hardness Results are given in Table 3.The results show the hardness increases by using the zirconium catalyst than toward the use of lithium hydroxide catalyst in the synthesis of resin.This indicates when alkyd resin prepared using zirconium catalyst and zirconium octoate used in drying resin with octoate cobalt, the hardness of dried film increases with small amount of hydrocarbone resin in varnish industry.The film hardness depends on crosslinking density of the surface of the film, but presence of stable and rigid aromatic moiety in the backbone chain of phthalic anhydridebased film also showed the property zirconium octoate as drier which made resin B gave good hardness than resin A. But the two values of hardness are close to each others. The impact resistance The impact resistance results are given in Table 4. Two resin prepared using lithium hydroxide and zirconium 2-ethylhexanoate catalyst dried films show significant improvements in impact resistance.The resin prepared from zirconium catalyst gives a good impact resistance due to the zirconium element. The gloss Results of gloss test are given in Table 4.The gloss values of resins prepared from zirconium catalyst and lithium hydroxide is like approximately each other, which means the two resin contains the same percentage of oil and polyester component responsible for gloss retention. The chemical resistance Chemical resistance of dried film are given in Table 5.The resin A and resin B was good resistance to distilled water this due to low hydroxyl value present in alkyd resin and more cross-linked network results and two resin completely unaffected with NaCl 10% for the same reason.Poor alkali resistance of alkyd resin is due to the presence of alkali hydrolysable ester linkages and the alkyd resin containing free acid groups which react with alkali.Two alkyd resin unaffected with HCl 10% this due to two resin contain acid number and ester linkages poor affected by HCl 10%.The poor rsistivity to alkali is probably due to hydrolsable ester group present in the two resin. The yellowing resistance The tendency for alkyd based coatings to yellowing is a common concern all over the organic coatings industry. 
15- 16 The oils containing linolenic acid are subject to discoloration because it is known this acid is the main cause of discoloration. 17- 18 The soybean oil contains 54% linoleic acid, therefore, soybean oil is good yellowing resistance where soybean oil is widely used around world.The aim of this work the tendency of lowering yellowing across the preparation of monoglyceride using zirconium octoate to prevent autoxidation of oil and production of alkyd resin transparent after application on glass blend.The alkyd resin prepared from zirconium catalyst when applied in white lacqure was given good color and high yellowing resistance than white lacqure prepared from commercial resin.The acid value and hydroxyl number are important parameters.The hydroxyl and carboxylic group concentration is also quite important parameters.For air drying alkyds, the concentration of these groups affects their drying properties.Set-to-touch drying time of resin A took lower time than resin B, but resin B was prepared from zirconium -2-ethylhexanoate which gave tack free time in drying process lower than resin A according to Table 3.Of course this is not the only parameter that determines the properties of the resins.Many other factors also affect their properties.In this work although acid value of resin B takes more time to adjust like resin A. Because the resin A is prepared from strong base which reacted with carboxylic groups and lowered acidity, but the drying time and yellowing resistance finally of resin prepared from zirconium octoate is better than resin prepared from lithium hydroxide . CONCLUSIONS The results obtained in the present work showed the ability of zirconium octoate salt to synthesized the monoglyceride from soybean oil and glycerin by alcoholysis reaction and prevent the oxidation of oil at high temperature where the monoglceride formed in the absence of nitrogen gas inlet this is best result was obtained by this catalyst since the catalyst reduce the cost in the production alkyd resin and protected the reaction from oxidation which take place during the preparation of resin .The zirconium octoate salt can act as base-catalyzed transformation of triglyceride by glycerin to form monoglyceride under high temperature.The structure of alkyd resin was confrmed by FT-IR and H 1 -NMR spectroscopy and comparable with the alkyd synthesized LiOH catalyst.The physico-chemical characteristics reffered to the resin prepared from zirconium octoate salt was good yellowing resistance when formulated in white lacqure.The other properties were good at drying time, gloss.The zirconium octoate catalyst gave the best results in yellowing processe since the alkyd resin prepared could resist the yellowing for along time compared resins prepared from other catalysts. Figure 1 . Figure 1.FT-IR spectra of alkyd resin (A) using Lithium hydroxide as catalyst. Figure 2 . Figure 2. FT-IR spectra of alkyd resin (B) using zirconium 2-ethylhexanoate as catalyst .H 1 -NMR spectra of the alkyd resin A using lithium hydroxide (LiOH) as catalyst is shown in Figures 3. 
Peaks appeared at δ 0.96 ppm for the protons of terminal methyl groups of fatty acids was confirmed by this peak.The peaks next to that at δ 1.35 ppm are due to protons of all -CH 2 present in the chain of fatty acids.The peak at δ 5.42 ppm resonance due to the unsaturated carbon (olefinic hydrogen in the fatty acid chain).The proton of aromatic ring and xylene solvent used to dissolve resin can be depicted by peaks at the rang δ 7.00-7.78ppm.The peaks appeared at δ 6.95-6.97ppm for -CH present in glycerol molecule linkage to oxygen of the ester group.This may be due to the presence of anhydride groups which results in deshielding effect.The peaks appeared at δ 3.45-3.9ppm for -CH 2 present in glycerol moiety attached to -OH group in the resin.The peaks appeared at δ 4.23-4-65 ppm for -CH 2 present in glycerol moiety attached (phthalic moiety).Peaks appeared at δ 2.3-2.4 ppm for -CH 3 attached to aromatic ring this due to xylene solvent present in sample resin.H 1 -NMR analysis for resin B which the zirconium 2ethylhexanoate used as catalyst for preparation the alkyd resin are shown in Figure 4.The proton of terminal methyl groups of fatty acids was confirmed by the peak δ 1.02 ppm.The peak next to that at δ 1.40 ppm are due to protons of all -CH 2 present in the fatty acid chain.The peak at δ 5.48 ppm are due to the unsaturated carbon (olefinic hydrogen in the fatty acid chain).The proton of aromatic ring of phthalic moiety and proton of aromatic ring of xylene moiety at the range δ 7.15-7.83ppm.The peaks appeared at δ 4.26-4.70ppm for -CH 2 present in glycerol moiety attached to (phthalic moiety).The peak appearad at δ 1.72 ppm are depicted to -CH 2 group attached to ester group of zirconium 2-ethylhexanoate catalyst.The peaks appearad at the range δ 2.32-2.44 ppm for -CH 3 attached to aromatic ring this due to xylene solvent present in sample resin.The peaks appearad at δ 7.05-7.08ppm fo -CH present in glycerol molecule linkage to oxygen of the ester group.This may be due to the presence of anhydride groups which results in deshielding effect.The peaks appearad at δ 2.68-2.91 ppm for -CH 2 present in glycerol moiety attached to -OH group in the resin. Figure 3 . Figure 3.The H 1 -NMR spectra of alkyd resin A using lithium hydroxide (LiOH) as catalyst. Table 5 . Chemical resistance test results of alkyd resins 19 3. 11 . The acid value with timeFor the polyesterification reaction , the acid value change with time are shown in Figures5 and 6. Table 1 . The constituents of alkyd resins Table 3 . Adhesion and hardness test results of alkyd resins Table 4 gives the flexibility test results.Two samples passed the test and no cracking or peeling was observed.This may be due to the oil content in the prepared alkyd resin Table 4 . Flexibility, impact resistance, gloss test results of alkyd resins
2019-01-02T22:17:10.431Z
2018-05-12T00:00:00.000
{ "year": 2018, "sha1": "04eeb3fc03919de01203b16d14cca24ae3261db7", "oa_license": "CCBYNC", "oa_url": "https://dergipark.org.tr/en/download/article-file/471406", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "04eeb3fc03919de01203b16d14cca24ae3261db7", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
157462514
pes2o/s2orc
v3-fos-license
Change in Bank Equity Stakes before Merger Completion This study investigates the relationship between the changes in the shareholdings of the institutional financial/ investment professionals and the firm-specific characteristics of the acquiring companies prior to merger completion. The present study thus serves to identifying the factors dominating investment behaviors of acquiring firms. Both total and average changes in their ownership are considered to test the popular agency and signaling hypotheses. Evidence shows that commercial banks are more likely to increase their equity holdings of those businesses with increasing current liability and decreasing profitability. The former supports the signaling hypothesis but the latter suggests the agency cost hypothesis is correct. Investment banks, on the other hand, prefer those with increasing assets and a stable financial status. A competitive relation of these financial experts is also presented in terms of the pursuit of greater controlling power over the board against each other before the merger completion date. Citation: Chetthamrongchai P, Lin L, Hsiao HF, Huang YL (2017) Change in Bank Equity Stakes before Merger Completion. J Account Mark 6: 222. doi:10.4172/2168-9601.1000222 Introduction Financial institutions play an important role in the financial markets by not only serving as a key funding source for new enterprises but also, through the exchange process, monitoring a firm's operations and diagnosing a firm's financial condition. Within the realm of financial institutions, commercial banks and investment banks have a particularly strong effect on the firm's performance. Petersen and Rajan [1] pointed out that as firms and commercial banks build and maintain long-term relationships, both lenders and borrowers can reduce agency conflict and the information asymmetry problem [2][3][4][5]. Investment banks can provide professional advice to firms on investment projects including mergers and acquisitions (M&As) activities [6]. In particular, investment banks can gain access to the firm's inside information and thus more accurately estimate the true value in the underwriting process, reducing the possibility of credit risk [7,8]. In addition, an investment bank holding firm's stock can reduce underwriting fees because the firm can reduce the cost of equity financing [9]. It is a well-known fact that M&As come in waves. Jensen and Meckling [10] apply agency theory to the modern corporation and model the agency costs of outside equity. The corporate finance literature comes up with different answers to this question. Shleifer and Vishny [11] argue that ownership concentration enhances corporate control by improving the monitoring of management. With diffused ownership, shareholders have few incentives for monitoring. With concentrated ownership, the cost of shirking will be mostly borne by large shareholders who therefore have a strong incentive to monitor the firm's management. Commercial bank holding firm's stock in order to reduce the agency problem between shareholders and creditors, when the smaller size of the firm, the higher the ratio of intangible assets, greater volatility, and lower profitability, so that firms have more serious information asymmetry and agency conflicts [2,3,12,13]. 
By holding a firm's equity, a bank can gain control over the firm through effective supervision, influence the firm's choice of investment plans, and limit asset substitution and over- or under-investment problems. The bank can also use the earnings from the firm's investment plans to make up for some of the diluted value of its loans to the firm. This often increases the probability that the bank increases its holdings.

Past research has considered only a single type of bank holding a firm's stock, or has treated the financial institutions sitting on the firm's board of directors and supervisors as the research object. Under general conditions, a variety of financial institutions hold the stock at the same time. The present research therefore simultaneously considers the commercial bank and the investment bank holding the firm's stock as the research object. In addition, previous studies have mostly examined, at a single point in time, the relationship between external financial institutions entering the firm's board of directors and supervisors and the firm's characteristics [14-16]. However, each firm may experience different, time-dependent shocks, and before the period studied, the financial institutions may already have entered the firm's board or acquired the firm's shares. The use of a single point in time therefore makes it difficult to illustrate the motives of financial institutions for entering the board of directors and supervisors or for holding the firm's shares. This study does not explore the ownership motives of financial institutions at a particular point in time. Instead, it uses the quarterly changes in holdings over the six quarters before the merger completion date to determine whether an institution increases its holdings, and studies the connection between changes in the shareholdings of financial institutions and the firm's financial characteristics. The organization of the remainder of this paper is as follows. Section 2 describes our model, while Section 3 discusses the data and reports the main estimation results. Finally, Section 4 offers some concluding thoughts and discusses some implications of our findings.

Model Implementation

This section briefly introduces the logistic model used to investigate the relationship between the bank's holdings increase, and each bank's average holdings increase, and the characteristics of the bidding firms. The commercial banks and the investment banks are measured using the same method, described as follows.

The model

This paper analyses the relationship between a bank's holdings change and the characteristics of the bidding firms, following the methodology proposed by Kroszner and Strahan [14] and controlling for firm operating performance variables such as the change in financial position (ΔZSCORE_i), the value of the firm (Tobin's q_it-6), the fame of the firm (FAME_i), and the change in investment quality (ΔEROIC_i). First, based on agency theory and signaling effects, we examine the relationship between the probability that the commercial bank's holdings increase and changes in the bidding firm's characteristics. The basic regression model takes the following form, where the dependent variable of the logistic regression is the probability that the commercial bank's holdings increase and the right-hand side is the linear index:

α0 + β1 IND1_i + β2 IND2_i + β3 IND3_i + β4 CONTROL_i + β5 TOBINQ_it-6 + β6 ΔZSCORE_i + β7 ΔVOL_i + β8 ΔVOL2_i + β9 ΔPROFIT_i + β10 ΔLnTA_i + β11 ΔTANRATIO_i + β12 ΔDEBTRATIO_i + β13 ΔCDRATIO_i + β14 ΔEROIC_i + FAME_i + ε_it    (5)

Second, this paper examines the relationship between the investment bank's holdings change and the characteristics of the bidding firms.
The basic regression model takes the following form, where the dependent variable of the logistic regression is the probability that the investment bank's holdings increase:

α0 + β1 IND1_i + β2 IND2_i + β3 IND3_i + β4 CONTROL_it + β5 TOBINQ_it + β6 ΔZSCORE_it + β7 ΔLnTA_i + β8 ΔINVRATIO_i + β9 ΔEQRATIO_i + β10 ΔFAME_i + ε_it    (6)

The coefficient α0 is the intercept, β_j are the regression coefficients, and ε_it is an error term assumed to be normally distributed with a mean of zero; i denotes firm i, t denotes the six quarters before the M&A completion date, and Δ denotes the change in a variable over the six quarters before the M&A completion date. Because this study aims to examine the changes in firm characteristics that affect a bank's holdings change, the changes in the variables are used as the independent variables, in addition to the control variables, the controlling power dummy, and Tobin's Q measured six quarters before the M&A completion date.

Variables

Industry variables (IND_i): This paper classifies industries using the first two digits of the SIC code (Standard Industrial Classification code). Figure 1 shows the characteristics of the sample according to industry, which can be divided into four main categories (energy, manufacturing, retail trade, and services). Regulated industries such as the financial sector (SIC code 60-69) and public utilities (SIC code 49) are excluded (Figure 1). IND_j is an industry dummy variable: IND_1 equals 1 for the manufacturing sector (SIC code 20-48); IND_2 equals 1 for retail trade (SIC code 50-59); IND_3 equals 1 for the services sector (SIC code 70-87).

Controlling power variables (CONTROL_i): This study assumes that the institution holding a larger percentage of shares has greater controlling power before the M&A completion date. In other words, if the commercial bank's holdings in the bidding firm exceed the investment bank's holdings, the commercial bank has greater controlling power, and vice versa. CONTROL_i is a dummy variable of controlling power for bidding firms, equal to 1 if the commercial bank's holdings are larger than the investment bank's holdings in the bidding firm and 0 otherwise. In the sample of commercial bank holdings changes, a significantly positive coefficient on controlling power indicates that the commercial bank continues to increase its holdings to maintain controlling power, whereas a significantly negative coefficient indicates that the investment bank with the lower holding increases its stake to gain controlling power. In the sample of investment bank holdings changes, a significantly positive coefficient indicates that the investment bank with the lower holding increases its stake to gain controlling power, whereas a significantly negative coefficient indicates that the commercial bank with the higher holding already has controlling power and does not continue to increase its holdings.

Growth opportunities (TOBINQ_it-6): Firms with high levels of growth opportunities have more demand for investment spending, and prior studies [17,18] empirically document such a relation. TOBINQ_it-6 1 is the proxy for growth opportunity. A higher Tobin's Q indicates that investors assess the firm's governance and asset quality more favorably, thus reducing the firm's agency conflicts. According to the agency cost hypothesis, when Tobin's Q is greater the bank will reduce its holdings. According to the signaling hypothesis, a greater Tobin's Q indicates a firm with high levels of growth opportunities [19,20], and bank holdings may earn higher profits from the returns on the firm's investment plans.
Therefore, the bank will increase its holdings.

Changes in financial position (ΔZSCORE_i): Following Altman [21], we measure financial position with the Z-score model. 2 In general, Z-scores are proxies for the probability of financial distress. Firms that are not financially distressed have lower credit risk and are therefore easier to finance in the market; as a result, the agency conflicts between shareholders and creditors are usually small. According to the agency cost hypothesis, a bank holds the stock of its loan firms in order to reduce agency conflicts, so the bank's holdings will decrease for non-financially-distressed firms. According to the signaling hypothesis, firms that are not financially distressed have lower credit risk and higher liquidity; if the bank holds the loan firm's stock in order to earn profits from the returns on the firm's investment plans, it will increase its holdings for firms with lower credit risk and better-secured debt. ΔZSCORE_i is a dummy variable of financial position changes, equal to 1 if Z-score_t = 1 and Z-score_t-6 = 0, or if Z-score_t = 1 and Z-score_t-6 = 1, and 0 otherwise. (Footnote 2: Following Altman (1983), Z = 1.2X1 + 1.4X2 + 3.3X3 + 0.6X4 + 0.999X5, where X1 is working capital/total assets, X2 is retained earnings/total assets, X3 is earnings before interest and tax/total assets, X4 is market value of equity/total debt, and X5 is sales revenue/total assets. Z-score_t is a dummy variable for non-financially-distressed firms, equal to 1 if Z_t is greater than 2.675 and 0 otherwise.)

Changes in asset size (ΔlnTA_i) and changes in tangible assets (ΔTANRATIO_i): When the firm has more assets or tangible assets to provide a higher guarantee for a loan, the creditor can auction the collateral backing the debt even if the firm is unable to repay. Therefore, firms can increase the ratio of assets or tangible assets to reduce agency conflicts between creditors and shareholders. According to the agency cost hypothesis, a bank acting mainly as a creditor will then not increase its holdings. According to the signaling hypothesis, loan firms increase the ratio of assets or tangible assets to reduce internal private information and increase the transparency of information [22,23]. Therefore, the bank's holdings are negatively correlated with increases in the proportion of assets or tangible assets. ΔlnTA_i 3 is the change in asset size and ΔTANRATIO_i 4 is the change in tangible assets.

Volatility of firm (ΔVOL_i): According to the agency cost hypothesis, loan firms with greater volatility have higher firm risk and agency conflict. It is more difficult for such firms to obtain equity financing, so they rely more on bank lending. When the bank increases its holdings of the loan firm's stock, it can reduce the agency problems, because the bank's role shifts from creditor to shareholder. As a result, a bank's holdings increase is positively correlated with firm volatility. On the other hand, according to the signaling hypothesis, if the bank holds the loan firm's stock in order to earn profits from the returns on the firm's investment plans, the bank's holdings will decrease for firms with higher volatility. Kroszner and Strahan [14] show that bank holdings increase for firms with lower volatility. ΔVOL_i is the change in the standard deviation of daily stock returns before the M&A completion date, and ΔVOL2_i is the change in the variance of daily stock returns before the M&A completion date.
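A minimal sketch of the Z-score criterion defined in the footnote above: the coefficients and the 2.675 cut-off are taken from the text, while the function names and the balance-sheet figures in the example are hypothetical.

def altman_z(working_capital, retained_earnings, ebit,
             market_equity, total_debt, sales, total_assets):
    # Altman's Z-score as quoted in the text:
    # Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 0.999*X5
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / total_debt
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 0.999 * x5

def zscore_dummy(z):
    # Z-score_t dummy: 1 for non-financially-distressed firms (Z > 2.675)
    return 1 if z > 2.675 else 0

# Hypothetical firm-quarter figures (in millions)
z = altman_z(working_capital=120, retained_earnings=300, ebit=90,
             market_equity=800, total_debt=500, sales=950, total_assets=1000)
print(round(z, 2), zscore_dummy(z))  # -> 2.77 1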
Profitability performance (ΔPROFIT i ): According to the agency cost hypothesis, a firm's improved profitability can make the shareholders or creditors earn greater profits which works to reduce agency conflict. Therefore, the bank holdings are negatively correlated with the profitability performance of the firm. According to the signaling hypothesis, the bank holdings will increase for higher profitability firms in order to earn profits for earning of the investment plan. Then the bank holdings will be positively correlated with the profitability performance of firm. E ΔPROFIT i is the profitability performance such as the return of asset (ROA). Changes in debt ratio (ΔDBRATIO i ): In general, the proportion of debt increase that the total assets of firm to use debt to buy assets in the proportion of improved while the financial risk of firm is increased, therefore, improve agency problem between shareholders and creditors. According to the agency cost hypothesis, the bank holding increases the stock of the loan firm, indicating can reduce the interest conflict problems between shareholders and the creditor. The bank's holdings increase is positively correlated with the debt ratio of firms. On the other hand, according to the signaling hypothesis, the bank holdings will increases for the loan firm in order to earn profits for earning of the investment plan. Bank holdings will then be reduced to increase the proportion of firm liabilities. Therefore, the bank holdings are negative correlated with the debt ratio of firms. ΔDBRATIO i is a change in debt ratio. Changes in short-term liabilities ratio (ΔCDRATIO i ): According to Kroszner and Strahan [14] and Stearns and Mizruchi [24], short-term liabilities ratios are the proxies for the relationship between banks and are also the borrowing source of the loan firm. Fama [2] indicates that when firms have higher agency conflicts and asymmetric information, those firms cannot collect funds by equity financing. Rather, they must collect funds through financial institutions. According to the agency cost hypothesis, the bank holdings increase for the increased short-term liabilities of the firm in order to reduce the conflict of interest between stockholders and creditors. If the firm increases the proportion of short-term liabilities, then the information transparency will decrease. According to signaling hypothesis, the bank holdings will increase in order to control the internal information of firm. ΔCDRATIO i is the change in short-term liabilities ratio. Change in investment quality (ΔEROIC i ): The firm increases the rate of investment to rapidly expand and thus create a higher return. When the firm has a high-quality investment project, it can work to reduce agency conflict between shareholders and creditors. According to the agency cost hypothesis, the bank holdings for the loan firm serve to reduce the conflict of interest between shareholders and creditors. As a result, the bank holdings decrease for firms that have high-quality investment projects. According to signaling hypothesis, the bank holdings for loan firms serve to earn profits for earning of the investment plan. Accordingly, bank holdings will reduce for i firms which have high-quality investment projects [25]. However, the firm invest lower-quality project that cannot use signaling theory to explain the direction of bank holdings rate. In general, return on investment is used as the proxy variable of the investment quality. ΔEROIC i 5 is the change in investment quality. 
Changes in investment ratio (ΔINVRATIO i ): Capital expenditures are increased to show increased investment opportunities, so firm need investment banks to increase invest. Therefore, changes in the investment bank holding rate are positively correlated with the investment spending of firm. Capital expenditures are the proxies for investment opportunities. ΔINVRATIO i 6 is the change in investment ratio. Changes in equity financing ratio (ΔEQRATIO i ): When the firm issues equity financing, firms need investment banking to support securities underwriting. Investment banks may increase holdings to obtain the opportunity for securities underwriting. Therefore, expected changes in the equity financing ratio and changes in investment bank holdings are positively correlated. ΔEQRATIO i 7 is the change in equity financing ratio. Data This study examines the connection between the holding changes in the commercial banks (investment banks) and the financial characteristics of the bidding firms for the six quarters between the M&A announcement date and the completion date. First, we select the date for all firms of M&A announced between 2000 and 2005 using the SDC database (1,025 firms). The top fifty holdings in firms, the quarterly holding date of the professional financial institutions from the Thomson One Banker in the board database, the daily stock returns date from CRSP database, the quarterly accounting date from Compustat database, press News from LexisNexis database. Of the original 1,025 firms, 260 were retained in the final analysis: 444 firms were deleted because the transactions do not provide the firm's complete stock data, 91 were deleted because they did not provide the status of the outside directors of companies holding, 146 firms were deleted because they were firms which the financial industry (SIC CODE=60~69) and public utilities (SIC CODE=49), and finally 84 additional firms were deleted from the analysis because there were not six quarters between the M&A announcement date and the completion date (Figures 2 and 3). As the study shows, while commercial banks and investment banks simultaneously holding the bidding firms shares. Figures 2 and 3 show nine cases from a change set of all banks shares hold proportion. Each bank's average shares hold proportion changes set in the six quarters between the M&A announcement date and the completion date. For example, there were nine cases in which the commercial bank holdings increased (decrease and no change) and the investment holdings increased (decrease and no change). Figure 2 shows the 86 commercial and investment bank firms for which holdings all increased (33.08% of the total sample). Figure 3 shows the 95 commercial and investment bank firms for which the average holdings all increased (36.54% of the total sample). However, commercial banks and investment banks simultaneously holding the bidding firms shares before the M&A completion date. Empirical Results We first report the results of our main test regarding changes in financial institution holdings for the quarters prior to the merger completion date. In Panel A of Table 1, the commercial banks quarters holding proportion in 3.41%~4.71% and average commercial banks quarters holding proportion in 0.63%~0.86%. This indicates a significant increasing trend for bank holdings from the second to fourth quarter before the M&A completion date. 
The investment banks quarters holding proportion is 3.63%~4.96% and average investment banks quarters holding proportion is 1.16%~1.63% in Panel B. The results show a significant decreasing trend for the investment bank's holding proportion between the fifth and sixth quarters before the M&A completion date, but a significant increasing trend between the third and fourth quarters before the M&A completion date. The average investment bank's holding proportion indicated a significant increasing trend from the second to the third quarter after the M&A completion quarter. Panel C shows the insurance companies quarters holding proportion in 1.19%~1.28% and average insurance companies quarters holding proportion in 0.79%~0.92%. The results show that the insurance company holdings proportions did not significantly change in the quarter before or after the M&A completion quarter ( Table 1). Because of commercial banks, investment banks and insurance companies quarters holding proportion not significant difference. We therefore used a difference test for comparing holdings proportion changes between the M&A completion quarter and each quarter. In Panel A of Table 2, the results show a significantly increasing trend for the commercial bank's holdings proportion in the two periods six quarters before the M&A completion quarter and from the fourth to the sixth quarter after the M&A completion quarter. The average commercial bank's holding proportion also revealed a significantly increasing trend from the third to the sixth quarter before M&A completion quarter and from the third to the sixth quarter after the M&A completion quarter. In Panel B, the results show a significantly increasing trend for the investment bank's holding proportion from the third to the fifth quarter before the M&A completion quarter and from the second to the sixth quarter after the M&A completion quarter. The average investment bank's holding proportion significantly increased from the third to sixth quarter before the M&A completion quarter and from the third to the sixth quarter after the M&A completion quarter. In Panel C, the results show that the insurance companies holding proportion did not significantly change between the M&A completion quarter and the other quarters (Table 2). Note: Panel A -I (B-I and C-I) display the quarterly holdings changes in commercial banks (investment banks and insurance companies) which five top ten holdings in the firm. T-tests were used to compare quarterly holdings. Panel A-II (B-II and C-II) is the quarterly average holdings of commercial banks (investment banks and insurance companies) which five top ten holdings in the firm. T-tests were used to compare quarterly holdings. Positive t-values indicate an increasing trend; negative values indicate a decreasing trend. *:10%, **:5%, ***:1% significance level. In Panel A of Figure 4, the results show that 91.54% complete M&A since M&A announcement date to the completion date which need about three quarters. The results show 72.69% completed within the three months between the M&A announcement date and the completion date in Panel B. These results are similar to Wansley et al. [26]. Therefore, the M & A announcement date may be the second quarter before the M&A completion date and the financial institutional holdings show no significant change between the M&A announcement date and the completion date. This study was to explore the financial institutions holding behavior before M&A completion date (Figure 4). 
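The quarterly comparisons reported in Tables 1 and 2 can be sketched as a paired t-test over firms; the holding proportions below are hypothetical, and since the exact test variant is not specified in this excerpt, the paired form is an assumption.

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical commercial-bank holding proportions (%) for the same firms
# in the M&A completion quarter and four quarters earlier
holdings_completion_q = np.array([4.7, 3.9, 5.1, 4.2, 4.8, 3.6])
holdings_q_minus_4 = np.array([3.4, 3.1, 4.0, 3.5, 4.1, 3.0])

# Paired t-test; a positive statistic indicates an increasing trend,
# mirroring the sign convention described in the table notes
stat, pval = ttest_rel(holdings_completion_q, holdings_q_minus_4)
print(round(stat, 2), round(pval, 3))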
According to these results, the commercial (investment) banks have a higher holding proportion and a significantly increasing trend before the M&A completion date, so commercial banks and investment banks are more important to the bidding firms than insurance companies. The bidding firms acquire professional M&A advice through investment bank holdings [27] and obtain debt expertise through commercial bank holdings. The changes in commercial and investment bank holdings before the M&A completion date are therefore the main object of study, with the specific goal of testing the connection between the financial institutions' holding proportions and the firms' financial characteristics. Panels A and B of Table 3 provide descriptive statistics showing that, from the sixth quarter before the M&A completion date to the completion date, firms whose financial-institution holdings increased experienced a larger increase in asset size than firms whose holdings decreased. The results show that firms increase their asset base to reduce agency problems between shareholders and creditors; because the benefits of monitoring exceed the costs for the financial institutions, the banks choose to increase their holding proportion in firms with larger increases in asset size. Bank holdings are therefore significantly positively correlated with increases in assets. These results are inconsistent with both the agency cost hypothesis and the signaling hypothesis. The results also show that improved firm profitability increases the profits earned by shareholders and creditors, thus reducing agency conflict within the firm. Bank holdings are therefore significantly negatively correlated with changes in the firm's profitability performance, consistent with the agency cost hypothesis in Panel A-I (-II). In Panel A-II, commercial banks hold shares of borrower firms in order to profit from the firms' investment plans, and they reduce their holdings when the firm's volatility is higher. The results show that the average commercial bank holdings are significantly negatively correlated with the volatility of the firm, consistent with the signaling hypothesis. Fama [2] found that when a firm faces greater agency conflicts and information asymmetry, it cannot raise funds through equity financing and must instead obtain them from financial institutions. The results show that the average commercial bank holdings increase with the firm's short-term liabilities, which reduces the conflict of interest between stockholders and creditors; the average commercial bank holdings are significantly positively correlated with changes in the short-term liabilities ratio, consistent with the agency cost hypothesis. In Panel B-I of Table 3, an increase in capital expenditures signals greater investment opportunities, so the firm needs investment banks to fund the additional investment. The results show that changes in investment bank holdings are significantly positively correlated with the firm's investment spending, consistent with Panel B-II. When a firm issues equity, it needs investment banks to underwrite the securities, and investment banks may increase their holdings to win the underwriting mandate; changes in the equity financing ratio and changes in investment bank holdings are therefore expected to be significantly positively correlated.
In Panel B-II, a higher Tobin's Q indicates a firm with greater growth opportunities [19,20]. Investment banks can earn higher profits from such firms' investment plans and therefore increase their holdings. The results show that the average investment bank holdings are significantly positively correlated with the firm's growth opportunities, consistent with the signaling hypothesis (Table 3). This paper uses logistic regression analysis to examine the relationship between changes in the financial institutions' holding proportions and the firms' characteristics. To avoid collinearity among the explanatory variables, which would affect the regression results and the stability of the coefficient estimates, the paper first examines the correlations among the firm-characteristic variables using a Pearson correlation coefficient matrix (Figure 5). The correlation coefficients are below 0.3 in absolute value, indicating a low degree of correlation, except for the correlations between the size of the firm's assets and the debt ratio, Tobin's Q, and the equity financing ratio, which are 0.472, 0.446, and 0.567, respectively (Figure 5). Table 4 shows that commercial banks hold more of the bidding firms' shares than investment banks do, so commercial banks have greater controlling power. In Panel A, the coefficient on controlling power is significantly negative, and in Panel B it is significantly positive; both indicate that investment banks with lower holdings will increase their stake to gain controlling power. In Table 4, the results show that firms increase their asset base to reduce agency problems between shareholders and creditors. Since the benefits of monitoring exceed the costs for the financial institutions, the banks choose to increase their holding proportion in firms with a larger increase in asset size; bank holdings are therefore significantly positively correlated with the proportion of assets. These results are inconsistent with the agency cost hypothesis and the signaling hypothesis. In Panel A, improved firm profitability increases the profits earned by shareholders and creditors and reduces agency conflict; bank holdings are therefore significantly negatively correlated with changes in the firm's profitability performance, consistent with the agency cost hypothesis. In columns (c) and (d) of Panel A, borrower firms with greater volatility carry higher firm risk and agency conflict. Such firms find it more difficult to obtain equity financing and therefore rely more on bank lending. Commercial banks hold shares of borrower firms in order to profit from the firms' investment plans and reduce their holdings when the firm's volatility is higher; the results show that the average commercial bank holdings increase for firms with lower volatility, consistent with the signaling hypothesis. The empirical results also reveal a quadratic relation between the firm's volatility and the average commercial bank holding proportion in column (d) of Panel A. Firms that are not financially distressed have lower credit risk and higher liquidity, and are therefore easier to finance in the market; in these cases, the agency conflict between shareholders and creditors is usually small.
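A minimal sketch of the logistic regression described above is given below. The dependent variable is an indicator for an increase in the bank's holding proportion before the M&A completion quarter, and the regressors follow the firm characteristics discussed in the text; the variable names and the simulated data are illustrative assumptions, not the study's dataset.

```python
# Hypothetical sketch of the holding-change logistic regression; the data and
# coefficient signs are simulated placeholders chosen only for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 260
X = pd.DataFrame({
    "d_size": rng.normal(size=n),       # change in asset size
    "d_profit": rng.normal(size=n),     # change in profitability performance
    "d_vol": rng.normal(size=n),        # change in return volatility
    "d_stdebt": rng.normal(size=n),     # change in short-term liabilities ratio
    "d_invratio": rng.normal(size=n),   # change in investment ratio (capex proxy)
    "d_eqratio": rng.normal(size=n),    # change in equity financing ratio
    "control": rng.normal(size=n),      # relative controlling power of the bank
})

# Simulated response whose signs loosely follow the relations reported above.
linpred = 0.8 * X["d_size"] - 0.6 * X["d_profit"] - 0.4 * X["d_vol"] + 0.5 * X["d_invratio"]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-linpred))).astype(int)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary())

# The collinearity screen mentioned in the text: a Pearson correlation matrix.
print(X.corr().round(3))
```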
Investment banks hold shares of borrower firms in order to profit from the firms' investment plans; they increase their holdings in firms with lower credit risk, whose debt claims are more secure. These results are consistent with the signaling hypothesis (Table 4). Conclusion In this paper, we examined changes in financial institutions' holding proportions in the quarters before the M&A completion date. Among the bidding firms, the holding proportion was largest for investment banks, followed by commercial banks, and smallest for insurance companies. The investment and commercial banks significantly increase their holdings before the M&A completion date and are therefore more important than insurance companies for the bidding firms. This paper also uses descriptive statistics and logistic regression analysis to examine the relationship between the financial institutions' quarterly holding proportions and the financial characteristics of the bidding firms; the results of the two methods are consistent. The empirical results show that firms increase their asset base to reduce agency problems between shareholders and creditors. Since the benefits of monitoring exceed the costs for the financial institutions, the banks choose to increase their holding proportion in firms with a larger increase in asset size; bank holdings are therefore significantly positively correlated with the proportion of assets. These results are inconsistent with the agency cost hypothesis and the signaling hypothesis. The results also show that improved firm profitability increases the profits earned by shareholders and creditors and reduces agency conflict; commercial bank holdings are therefore significantly negatively correlated with changes in the firm's profitability performance, consistent with the agency cost hypothesis. Commercial banks hold shares of borrower firms in order to profit from the firms' investment plans and reduce their holdings when the firm's volatility is higher; the average commercial bank holdings therefore increase for firms with lower volatility, consistent with the signaling hypothesis. The logistic regression results reveal that commercial banks hold more of the bidding firms' shares than investment banks do and thus have greater controlling power, and that investment banks with lower holdings will increase their stake to gain controlling power. Firms that are not financially distressed have lower credit risk and higher liquidity. Investment banks hold shares of borrower firms in order to profit from the firms' investment plans and will increase their holdings in firms with lower credit risk, whose debt claims are more secure. Overall, the investment and commercial bank holdings increased significantly before the M&A completion date, and the financial institutions' quarterly holding proportions are significantly related to the financial characteristics of the bidding firms. Note: t-values in parentheses. *: 10%, **: 5%, ***: 1% significance level.
Initial ventilator settings for critically ill patients The lung-protective mechanical ventilation strategy has been standard practice for management of acute respiratory distress syndrome (ARDS) for more than a decade. Observational data, small randomized studies and two recent systematic reviews suggest that lung protective ventilation is both safe and potentially beneficial in patients who do not have ARDS at the onset of mechanical ventilation. Principles of lung-protective ventilation include: a) prevention of volutrauma (tidal volume 4 to 8 ml/kg predicted body weight with plateau pressure <30 cmH2O); b) prevention of atelectasis (positive end-expiratory pressure ≥5 cmH2O, as needed recruitment maneuvers); c) adequate ventilation (respiratory rate 20 to 35 breaths per minute); and d) prevention of hyperoxia (titrate inspired oxygen concentration to peripheral oxygen saturation (SpO2) levels of 88 to 95%). Most patients tolerate lung protective mechanical ventilation well without the need for excessive sedation. Patients with a stiff chest wall may tolerate higher plateau pressure targets (approximately 35 cmH2O) while those with severe ARDS and ventilator asynchrony may require a short-term neuromuscular blockade. Given the difficulty in timely identification of patients with or at risk of ARDS and both the safety and potential benefit in patients without ARDS, lung-protective mechanical ventilation is recommended as an initial approach to mechanical ventilation in both perioperative and critical care settings. Introduction of positive pressure ventilation during a polio epidemic in 1952 resulted in a large reduction of mortality in patients with respiratory failure (87% to less than 15%) and marked the birth of modern intensive care medicine [2]. Better understanding of the effects of positive pressure ventilation on respiratory physiology and mechanics has led to an appreciation of potential side effects of positive pressure ventilation, in particular ventilator-associated lung injury [3]. The key determinants of ventilator-associated lung injury are cyclic alveolar distension (volutrauma) and recruitment/derecruitment (atelectrauma), the size of available lung ('baby lung'), with an additional contribution from preexisting sepsis, vascular pressures, respiratory rate and inspiratory flow [3]. Avoiding high tidal volume ventilation is the only intervention with convincing survival benefit in patients with ARDS [4]. More recently, observational studies and a randomized clinical trial suggested a benefit of avoiding conventional high tidal volume ventilation in all critically ill patients [5,6]. The systematic review by Fuller and colleagues [1] highlights the importance of the low tidal volume ventilation strategy in patients without ARDS at the onset of mechanical ventilation. The results from 8 out of 13 studies included in the final analysis of this systematic review show that lower tidal volumes at initiation of mechanical ventilation reduce progression to ARDS. Similar findings were reported in another recent systematic review that combined observational studies and clinical trials in both ICUs and perioperative settings [7]. Neither of these systematic reviews raised concerns about the safety of low tidal volume ventilation in patients without ARDS.
Given the difficulty of identifying patients with ARDS in a timely fashion and both the safety and potential benefit of low tidal volume ventilation in patients without ARDS, the question arises whether conventional high tidal volume ventilation should ever be used in critical care or perioperative settings [8,9]. High tidal volume ventilation was recommended in operating rooms in the early 1970s to prevent atelectasis [10]. However, later studies did not support this approach and the focus has shifted towards the role of positive end-expiratory pressure, recruitment maneuvers, and the avoidance of a high fraction of inspired O2 (FiO2) as safer and more effective ways to prevent atelectasis than high tidal volume [11,12]. The second concern with regards to low tidal volume ventilation is the increase of the carbon dioxide partial pressure (PCO2), but acidosis is usually easily corrected by increasing respiratory rate except in patients with severe ARDS, where permissive hypercapnia may actually be desirable [13]. Another concern regarding low tidal volume ventilation is the potential increase in the need for sedation [14]. However, there is little evidence to support this claim, particularly in patients without ARDS [15]. Although limited, the current evidence, including the current report by Fuller and colleagues [1], suggests that the risk/benefit ratio of low tidal volume ventilation in patients with or without ARDS is on the side of benefit. In Figure 1 we provide a pragmatic approach to lung protective mechanical ventilation in patients with and without ARDS. Competing interests The authors have no financial or other potential conflicts of interest to disclose.
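To make the initial-settings checklist above concrete, here is a small sketch that computes a starting tidal volume from predicted body weight and bundles the other targets listed in the abstract. The predicted body weight equation is the commonly used ARDSNet formula, which the text itself does not state, so treat it as an assumption.

```python
# Minimal sketch of the initial lung-protective settings summarised above.
# The PBW formula (ARDSNet) is an assumption; the other targets follow the text.
def predicted_body_weight(height_cm: float, male: bool) -> float:
    """ARDSNet predicted body weight in kg (assumed formula, not from the text)."""
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def initial_settings(height_cm: float, male: bool, ml_per_kg: float = 6.0) -> dict:
    pbw = predicted_body_weight(height_cm, male)
    return {
        "tidal_volume_ml": round(ml_per_kg * pbw),  # target range 4-8 ml/kg PBW
        "plateau_pressure_max_cmH2O": 30,           # keep plateau pressure < 30
        "peep_min_cmH2O": 5,                        # PEEP >= 5 cmH2O
        "respiratory_rate_per_min": (20, 35),       # adjust for adequate ventilation
        "spo2_target_percent": (88, 95),            # titrate FiO2 to this range
    }

print(initial_settings(height_cm=175, male=True))
```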
Phase II study of DHP107 (oral paclitaxel) in the first-line treatment of HER2-negative recurrent or metastatic breast cancer (OPTIMAL study) Background: Standard intravenous (IV) paclitaxel is associated with hypersensitivity/toxicity. Alternative IV formulations have improved tolerability but still require frequent hospital visits and IV infusion. DHP107 is a novel oral formulation of paclitaxel that is approved in South Korea for the treatment of gastric cancer. Methods: This multicenter, phase II study using a Simon’s two-stage design investigated the efficacy and safety of DHP107 200 mg/m2 administered orally twice daily on days 1, 8, and 15 every 4 weeks for the first-line treatment of recurrent or metastatic HER2-negative breast cancer. Results: Thirty-six patients were enrolled and 31 were assessable for efficacy. Patient median age was 57 years (range = 34–81) and 11 (31%) had triple-negative disease. A median of seven cycles (range = 1–28) of DHP107 was administered. Objective response rate was 55% (17 patients), all partial responses, according to the investigator’s decision and independent central review (ICR), and 44% (4/9 patients) in those with triple-negative disease. Disease control rate (partial response and stable disease) was 74% (23 patients) according to the investigator’s decision and ICR. In the intention-to-treat (ITT) population of all enrolled participants, the objective response rate was 50% (18/36 patients). Median progression-free survival was 8.9 months (95% confidence interval [CI]: 5.2–12.3) and median time to treatment failure was 8.0 months (95% CI: 4.2–10.0). DHP107 had an acceptable toxicity profile. All patients experienced treatment-emergent adverse events; the most common adverse events were decreased neutrophil count (81% all grades and 78% grade ⩾ 3) followed by peripheral sensory neuropathy (61% all grades and 8% grade 3). However, there was no febrile neutropenia or sepsis. Conclusion: DHP107 showed promising efficacy and acceptable tolerability in this phase II study and is currently being investigated in the OPTIMAL phase III study (NCT03315364). Trial registration: This trial was registered with ClinicalTrials.gov identifier: NCT03315364. Introduction Female breast cancer is the second most commonly diagnosed cancer in the world after lung cancer, and the leading cause of cancer death. 1 In South Korea in 2017, breast cancer was the second most prevalent cancer after thyroid in women. 2 In recurrent or metastatic human epidermal growth factor receptor 2 (HER2)-negative breast cancer, prolongation of survival and improvement of quality of life (QoL) are important because treatment is not curative. 3 Therefore, the toxicity and tolerability of any treatment must be taken into consideration. 3 Paclitaxel is a preferred single agent for the treatment of patients with stage IV or recurrent metastatic breast cancer and triple-negative tumors and germline BRCA1/2 mutations in the National Comprehensive Cancer Network (NCCN) Clinical Practice Guidelines. 3 Paclitaxel can be administered weekly (80 mg/m2) or every 3 weeks (175 mg/m2) but weekly administration has been shown to be more beneficial than 3-weekly administration in terms of overall survival (OS) in patients with locally advanced or metastatic breast cancer.
4 The standard intravenous (IV) formulation of paclitaxel includes the non-ionic surfactant polyoxyl-35-castor oil (Cremophor EL or Kolliphor EL, BASF, Ludwigshafen, Germany), which is associated with hypersensitivity reactions and toxicity, [5][6][7][8] requires an infusion of 3 hours or longer, and requires premedication to help prevent hypersensitivity reactions. [6][7][8] In addition, Cremophor EL forms micelles within plasma that entrap the active drug, resulting in increased systemic exposure, decreased elimination, and reduced antitumor activity. [9][10][11] To overcome these issues, other formulations, such as a nanoparticle albumin-bound IV formulation (nab-paclitaxel) 12 and polymeric micelle paclitaxel, 13 have been developed. Although Cremophor EL-free formulations have improved the tolerability of IV paclitaxel, treatment still involves frequent hospital visits. Oral administration of paclitaxel would be an attractive alternative that could facilitate more patient-friendly treatment. 14 DHP107 (Liporaxel, Daehwa Pharmaceutical Co. Ltd., Seoul, Korea) is a novel oral formulation composed of lipid ingredients and paclitaxel that is systemically absorbed without the need for P-glycoprotein inhibitors or Cremophor EL. 15 Based on the results of the phase III DREAM study, which showed that DHP107 was non-inferior to 3-weekly paclitaxel in terms of progression-free survival (PFS), 16 DHP107 was approved in Korea in 2016 for the treatment of advanced, metastatic, or locally recurrent gastric cancer. The aim of the current study was to investigate the antitumor activity and tolerability of DHP107 in the first-line treatment of recurrent or metastatic breast cancer. Study design and treatment This was a multicenter, open-label, single-arm, phase II study using a Simon's optimal two-stage design to minimize any disadvantage to the patients' treatment due to potential ineffectiveness of the study drug (ClinicalTrials.gov identifier NCT03315364). In stage 1, if two or more of nine patients had an objective response, the study could move to stage 2. In stage 2, if nine or more of 34 patients had an objective response, DHP107 was considered to be effective and able to proceed to the confirmatory phase III study. DHP107 200 mg/m2 was administered orally twice daily approximately 1 hour after breakfast and dinner with no premedication for the prevention of hypersensitivity on days 1, 8, and 15 every 4 weeks. Key exclusion criteria were prior treatment with a taxane in the metastatic setting; prior chemotherapy for recurrent or metastatic HER2-negative breast cancer; the presence of cardiovascular disease or uncontrolled hypertension; pregnant or breast-feeding women. However, the following patients could participate if the last administration of adjuvant or neoadjuvant chemotherapy for breast cancer that was not recurrent or metastatic was ⩾1 year before randomization: ER+ or PR+ patients who had up to second-line endocrine therapy with or without concomitant cyclin-dependent kinase (CDK) 4/6 inhibitors. Statistical analysis Analyses were performed on the per-protocol set (PPS) and the safety analysis set (SAS). The PPS included patients without any major protocol violations who received at least one cycle of treatment and had at least one tumor assessment following the administration of DHP107. The SAS included all patients who received at least one dose of DHP107 and had at least one safety assessment. The PPS was used for the analysis of efficacy and the SAS for the analysis of demographics and safety.
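The decision rule above (continue if at least 2 of 9 patients respond; declare the drug effective if at least 9 of 34 respond) can be checked numerically against the design assumptions stated in the statistical analysis below (poor ORR 15%, good ORR 35%, one-sided significance level 0.05, power 80%). The sketch below computes the probability of declaring efficacy and of stopping early at both response rates; it is a verification aid under those stated parameters, not part of the original analysis.

```python
# Operating characteristics of the Simon two-stage design described above.
from scipy.stats import binom

N1, R1 = 9, 2        # stage 1: continue if at least R1 responses among N1 patients
N, R = 34, 9         # overall: effective if at least R responses among N patients
N2 = N - N1          # stage-2 sample size

def prob_declare_effective(p: float) -> float:
    """Probability of observing >= R total responses, given stage 1 continues."""
    total = 0.0
    for x1 in range(R1, N1 + 1):
        need = max(R - x1, 0)                     # responses still needed in stage 2
        total += binom.pmf(x1, N1, p) * binom.sf(need - 1, N2, p)
    return total

def prob_early_stop(p: float) -> float:
    """Probability of stopping after stage 1 (fewer than R1 responses)."""
    return binom.cdf(R1 - 1, N1, p)

for label, p in [("poor ORR (p0 = 0.15)", 0.15), ("good ORR (p1 = 0.35)", 0.35)]:
    print(f"{label}: P(declare effective) = {prob_declare_effective(p):.3f}, "
          f"P(early stop) = {prob_early_stop(p):.3f}")
```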
Descriptive statistics are presented for continuous variables, and frequencies and percentages are presented for categorical variables. All end points include two-sided 95% confidence intervals (CIs). The number of patients was calculated by the Simon's optimal two-stage design with a one-sided significance level of 0.05 and a test power of 80%. The ORR for paclitaxel as first-line treatment is 15-62% 18 and its weighted average is approximately 37%. Therefore, this study set the poor ORR (ineffectiveness) of DHP107 as 15% and the good ORR (effectiveness) was hypothesized to be 35%, which is close to the weighted average, to have a 20% difference between the poor and good ORR. Efficacy In the ITT population of all enrolled participants, the objective response rate was 50% (18/36 patients, Table 2). In the PPS analysis, in which five patients were excluded (three patients dropped out in the beginning and two patients were excluded due to prohibited medication), the ORR was 55% (17 patients), all of which were partial responses (PRs), according to both the investigator's decision and ICR. The response rate was 44% (4/9 patients) in those with triple-negative disease (Figure 2). Median PFS was 8.9 months and median time to treatment failure (TTF) was 8.0 months (Figure 4). OS was not reached for the survival probability of 0.5, so median OS is not estimable (NE) (95% CI: 18.1 months-NE) (Figure 5). DHP107 dose and administration The median number of cycles administered was seven (range = 1-28), with a median treatment duration of 6.3 months (range = 0.5-25.4). Nineteen patients (53%) had a cycle delay, which was due to TEAEs in 18 patients (95%), a schedule change in two patients (11%), and other reasons in three patients (16%). The median total dose of DHP107 administered was 10,600 mg. Discussion For HR-positive and HER2-negative breast cancer, CDK 4/6 inhibitors, such as abemaciclib, palbociclib, and ribociclib, have changed the treatment paradigm and become a mainstay of therapy for first-line treatment in combination with either an aromatase inhibitor or fulvestrant and second-line treatment in combination with fulvestrant. 3,19 However, chemotherapy is still an important option in the treatment armamentarium for HER2-negative breast cancer. Following the positive results for oral DHP107 demonstrated in the phase III DREAM study in gastric cancer, 16 the efficacy and safety of DHP107 were investigated in this phase II study for the first-line treatment of patients with recurrent or metastatic HER2-negative breast cancer. The study met the primary end point, with an investigator-assessed ORR of 55% in the PPS and 44% in patients with triple-negative disease. Efficacy was also demonstrated by the secondary end points, with an investigator-assessed DCR of 74%, median PFS of 8.9 months, and median TTF of 8.0 months. Median OS was not estimable. There was good agreement between the investigator-assessed and ICR efficacy results: the ORR was 55% by both investigator's decision and ICR, and the DCR was 74% by both investigator's decision and ICR. Although the interpretation of the results of cross-study comparison needs to be made with caution, the ORR for oral DHP107 in this study compares favorably with that reported by studies of first-line IV paclitaxel 18 and IV nab-paclitaxel 20,21 monotherapy in the treatment of metastatic breast cancer. In a review of studies conducted in the 1990s, Vogel and Nabholtz 18 reported ORRs of 15-62% with IV paclitaxel. Jackisch et al. 21 reported ORRs of 38-49% for studies of weekly IV nab-paclitaxel.
A study of IV nab-paclitaxel in routine clinical practice reported an ORR of 9% (one of 11 patients). 20 The efficacy results for oral DHP107 also compare favorably with the results reported for a novel oral formulation of paclitaxel that uses encequidar, a P-glycoprotein pump inhibitor, to enable absorption of oral paclitaxel, 22,23 and the oral taxane tesetaxel. 24,25 In a phase III study (KX-ORAX-001) comparing oral paclitaxel plus encequidar for 3 days per week versus 3-weekly IV paclitaxel in patients with metastatic breast cancer, the ORR was 40.4% versus 25.6% in the modified ITT population 22 and PFS was 8.4 versus 7.4 months. 23 A phase II study of tesetaxel in patients with HR+/HER2− locally advanced or metastatic breast cancer reported an ORR of 45% and a median PFS of 5.7 months. 25 In the phase III CONTESSA study, in which tesetaxel plus a reduced dose of capecitabine was compared with capecitabine alone in patients with HR+/HER2− metastatic breast cancer who had previously received a taxane, median PFS was 9.8 months for tesetaxel plus capecitabine versus 6.9 months for capecitabine alone. 25 Despite the positive results from these two studies, the development of tesetaxel was discontinued in March 2021 following the decision that the clinical data package was unlikely to receive US Food and Drug Administration approval. DHP107 had an acceptable tolerability profile. It should be noted that because DHP107 is an oral agent, differences in absorption between individuals can lead to differences in toxicities. The most common TEAEs were decreased neutrophil count (81%), alopecia (61%), and peripheral sensory neuropathy (61%), and the most common grade ⩾ 3 TEAEs were decreased neutrophil count (78%), anemia (17%), and peripheral sensory neuropathy (8%). Peripheral sensory neuropathy occurred in 15 patients up to and including cycle 6 and seven patients after cycle 6. TEAEs led to permanent treatment discontinuation in four patients (11%). There was one death during the study, but this was unrelated to DHP107 treatment. As response rates in heterogeneous patient populations including triple-negative breast cancer and HR+ subtypes can vary, we acknowledge that investigator-assessed ORR might not be the most appropriate primary end point in this study. Direct comparison of response rates from one trial to another is inherently difficult, given that studies often differ with respect to entry criteria and population characteristics. In addition, the rate of non-assessable cases [13.9% (5/36)] was relatively high because three patients withdrew in the beginning and two patients were not evaluable due to use of prohibited medication. However, these dropouts mostly occurred at the beginning of the study. It is also important to consider that, while oral treatment with taxanes can be more convenient for patients and improve cost-effectiveness, the use of oral chemotherapy is challenging because of pharmaceutical and pharmacological factors that can lead to low oral bioavailability. Currently, data on the bioavailability of DHP107 in patients from Western countries are lacking, although we hope that an ongoing study on the bioavailability of DHP107 in Caucasian patients (NCT03326102) will help to address this issue.
In conclusion, DHP107 showed promising efficacy with an acceptable tolerability profile in this phase II study and is currently being investigated in the OPTIMAL phase III study with a non-inferiority design (ClinicalTrials.gov Identifier NCT03315364), which includes 476 patients and is being conducted in Korea, China, and Europe (Hungary, Bulgaria, and Serbia). In addition, a phase II clinical trial of DHP107 (OPERA study, ClinicalTrials.gov Identifier NCT03326102) being conducted in the United States and the Czech Republic is evaluating pharmacokinetics, efficacy, and safety.
Tunnelling times, Larmor clock, and the elephant in the room A controversy surrounding the "tunnelling time problem" stems from the seeming inability of quantum mechanics to provide, in the usual way, a definition of the duration a particle is supposed to spend in a given region of space. For this reason, the problem is often approached from an "operational" angle. Typically, one tries to mimic, in a quantum case, an experiment which yields the desired result for a classical particle. One such approach is based on the use of a Larmor clock. We show that the difficulty with applying a non-perturbing Larmor clock in order to "time" a classically forbidden transition arises from the quantum Uncertainty Principle. We also demonstrate that for this reason a Larmor time (in fact, any Larmor time) cannot be interpreted as a physical time interval. We provide a theoretical description of the quantities measured by the clock. I. INTRODUCTION The "tunnelling time" problem, which has been with us for nearly a century [1], still has its share of controversy (for a recent review see [2]), and for a good reason. A prerequisite for any constructive discussion is a possibility to define its subject in a meaningful way. For a classical particle, a duration spent in a given region of space is indeed a well established and useful concept. In quantum mechanics, the Uncertainty Principle (UP) forbids answering the "which way?" question if two or more pathways leading to the same final outcome interfere [3]. By the same token a duration, readily determined for each path, must remain indeterminate for a process where interference plays a crucial role. This is particularly true in the case of tunnelling. The early attempts to define the duration a quantum particle spends in the barrier by following the evolution of the transmitted wave packet [4]-[5] yielded the so-called Wigner-Smith (WS) time delay, essentially the energy derivative of the phase of the transmission amplitude. One immediate problem with the method is that if the WS result is used to estimate the time spent by the particle in the barrier, this time turns out to be shorter than the barrier width divided by the speed of light. This apparently "superluminal behaviour" does not lead to a conflict with Einstein's relativity for the simple reason that, in accordance with the Uncertainty Principle, the WS time cannot be interpreted as a physical time interval spent by a tunnelling particle in the barrier [6]. However, as was noted in [2], the argument of [6] applies to the "phase time" of [4]-[5]. Would it still be true if the tunnelling time were defined in a different manner? An alternative approach was proposed by A.I.
Baz' [7], who employed Larmor precession of a magnetic moment (spin) in a magnetic field, small enough not to affect tunnelling seriously [8].The interest in the Larmor (Baz') clock was recently renewed after its experimental realisation was reported in [9], and in what follows we will analyse it in some detail.By construction, such a clock probes the response of a scattering amplitude to a small variation of the potential, rather than to a variation of the particle's energy.Thus, the Larmor time was found to disagree with the Wigner and Smith result, and proposed to be the "correct" estimate of the duration of a scattering process (see the footnote on p.169 of [10]).Despite Baz's assertion in [10], the Larmor clock approach soon encountered its own difficulties.In particular, if applied to tunnelling transmission the method yielded not one but two time parameters, which Büttiker [11] proposed to combine into a single "interaction time."In [12] Sokolovski and Baskin have shown the two Larmor times to be the real and imaginary parts of a "complex time" obtained as an average, in which the usual probabilities were replaced with quantum probability amplitudes.The lack of clarity about these matters points to a more fundamental problem, which requires further attention. The purpose of this paper is to demonstrate that the difficulty in deducing the duration spent in the barrier, evident in the analysis of the Wigner-Smith time delay [6], persists also in the conceptually different Larmor clock approach [7]- [12].To do so we will again appeal to the Uncertainty Principle, a rule of primary importance for any discussion of the tunnelling time problem, yet rarely mentioned in such discussions.It will also be upon us to answer the question "does a Larmor clock measure a physical time interval and, if not, then what does it measure?" II. RESULTS To lay bare the conceptual difficulty, we start by considering a simple thought experiment, where an electron, with its spin polarised along the x-axis, enters an interferometer shown in Fig. 1 in a wave packet state |G 0 , and is detected after exiting the second beam splitter, as shown in Fig. 1.Travelling via different arms of the interferometer, the electron spends different durations, τ 1 and τ 2 , in a region containing constant magnetic field directed along the z-axis, B (in an experiment using photons and Faraday's rotation the field would be directed along the arms).An additional element (e.g., an extra potential) in the second arm ensures that an extra phase, φ is acquired there by both spin components.So how much time did the electron spend in the magnetic field? 
The question is more difficult that it may seem.If the wave packet travelling at a velocity v is fast, and the field is not too strong, the two spin components acquire, in each arm, phases exp(±ω L τ 1,2 ), where ω L is the Larmor frequency.Thus, beyond the second beam splitter the wave function is given by (the σs are Pauli matrices) where G 1,2 (x, t) are the parts of the original wave packets arriving at x via the first and the second arm, respectively.One notes that the sum of the rotations in the square brackets does not add up to a single rotation around the z-axis, so no duration can be deduced from Eq.(1) directly.Perhaps, making the field small could help?Indeed, sending ω L → 0 and keeping only the linear terms, one finds x|Φ now looks like an overall rotation through a small angle ω L τ .Does this mean that is a suitable candidate for the duration spent in the field?Not quite so.The quantities G i are the transition amplitudes [3] for an electron, initially in |G 0 , to reach |x via the i-th arm of the interferometer, and τ is complex valued.This new problem can be dealt with by evaluating the mean angle of precession in the xy-plane, ϕ xy , guaranteed at least to be real. The result, appears to give preference to the real part of τ , and may look satisfactory.(Note that measuring the angle of rotation in the xz-plane would yield also the value of Im[τ ], but it is not important to us here.)However, our real problems are only beginning.A non-negative probability distribution, ρ(z) ≥ 0, has many useful properties.For example, an expectation value z marks roughly the centre of the region where ρ(z) = 0, and the variance gives an estimate of the size of this region.This is no longer true for the distributions which change sign, and the "average" in Eq.( 2) is of this latter type.Adjusting the phases and lengths, one can ensure that , and make the denominator in Eq.(2) small.A similar cancellation will not occur in the numerator, and τ can be made as large as one wants.On the other hand, with both arms of about the same length, L 1 ≈ L 2 ≈ L, the electron spends in motion approximately ∼ L/v.Now the "duration" in Eq.( 3) can easily exceed the total time electron was in motion, Similarly, τ 1,2 and G 1,2 could be chosen so that τ = 0, making it look like the electron, known to move at a speed v in each arm, crosses the field infinitely fast if both arms are considered together.These are serious issues, which should not be ignored.One has to decide whether to allow a quantum particle to spend more time that it has at its disposal, and hail Eq.( 4) as a new triumph of quantum theory.The other possibility is to conclude that something is wrong with the very question asked.It is, indeed, frustrating to have two durations, τ 1 and τ 2 , and to be unable to combine them into anything meaningful if a particle passes through both arms of the interferometer in Fig. 1. The frustration is of a familiar kind.In a Young's double-slit experiment, an electron passes trough one of the two slits, but it is not possible to know which particular slit was chosen.The impossibility of answering the "which way?" question, without destroying interference, is the essence of the Uncertainty Principle, without which quantum mechanics "would collapse" [3].The experiment in Fig. 1 is a kind of a double-slit case, with the only difference that the "which way?" question has been disguised as a "how much time?"query. 
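To make the arithmetic behind Eqs. (3) and (4) explicit, the toy calculation below evaluates the amplitude-weighted "duration" for two interfering arms. The amplitudes and durations are arbitrary illustrative numbers, not values from any experiment: when the two path amplitudes nearly cancel, Re[τ] can fall far outside the interval between τ1 and τ2, and even outside [0, T_total], exactly as discussed above.

```python
# Toy illustration of Eq. (3): tau = (G1*tau1 + G2*tau2) / (G1 + G2).
# The path amplitudes below are made-up numbers, not from any experiment.
tau1, tau2 = 1.0, 2.0      # durations spent in the field via arm 1 and arm 2
T_total = 3.0              # total time the electron is in motion (same units)

def weighted_time(G1: complex, G2: complex) -> complex:
    return (G1 * tau1 + G2 * tau2) / (G1 + G2)

cases = {
    "balanced arms": (0.5 + 0j, 0.5 + 0j),
    "nearly cancelling (large positive)": (-0.99 + 0j, 1.0 + 0j),
    "nearly cancelling (large negative)": (1.0 + 0j, -0.99 + 0j),
}
for name, (G1, G2) in cases.items():
    tau = weighted_time(G1, G2)
    print(f"{name:36s} Re[tau] = {tau.real:8.2f}  (tau1={tau1}, tau2={tau2}, T_total={T_total})")
```

The "balanced arms" case returns a value between τ1 and τ2, while the nearly cancelling cases return values of about +101 and -98, far outside the total time of flight, illustrating why no classical time scale can be read off such an average.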
It is instructive to see how quantum mechanics implements the Principle in practice.Since the only a priory restriction on the in general complex valued relative amplitudes α 1,2 in Eq.( 2) is that they should add up to unity, α 1 + α 2 = 1, one can find suitable αs for any choice of a complex τ , Unable to forbid one to ask the question operationally, quantum theory gives all possible answers, suitable and unsuitable, according to the circumstances.Depending on the parameters of the interferometer, the measured real part of τ can be positive, negative, zero, coincide with τ 1 or τ 2 , or lie between them.The answer to a question that should not have an answer can be "anything at all". One can envisage a following dialogue between an experimentalist Alice (A) and a theoretician Bob (B). A: I have just measured the mean angle ϕ xy , and divided it by the Larmor frequency.It A: Let us just forget about the cases where something goes wrong.Surely, in my case it is the time an electron spends in the magnetic field. B: Just don't tell that Carol-the-engineer.What she wants, is a time scale for changing the setup slowly enough for the electron "to see" its conditions "frozen" during its journey to the detector.For your τ Alice to serve as a classical time scale you would also need to show that τ n 1 α(τ 1 ) + τ n 2 α(τ 2 ) = τ n , n = 1, 2, ....However, this happens only if one of the αs vanishes, in which case either τ 1 or τ 2 is the time scale Carol would be happy with. A: But this time scale is a very well known and useful concept.How can it not exist?B: It is also an essentially classical concept, useful when there is no interference involved.Make one arm of the interferometer much longer than the other, so that the two parts of the wave packet do not overlap at x.Then, at a given t, you will know which way the electron has travelled, and also the duration, τ 1 or τ 2 it has spent in the magnetic field.But then, of course, it would be a different experiment. A: And what if I take instead the imaginary part, or the modulus of τ , as was suggested, for example by Büttiker [11]?B: Or any real valued combination of Re[τ ] and Im[τ ].You will still encounter "times" which are too long for common sense or too short for Einstein's relativity, although with τ Alice = |τ | you would not need to worry about negative durations. A: So what is my "time" good for?B: It does describe the response of the electron to a small perturbation of a particular type, a small rectangular potential, introduced by the constant magnetic field.A different "time" would arise if the response to a small oscillating potential were to be studied instead [13]. A: So, if my time is not a "meaningful duration", what is it?It looks like one of the "weak values" we heard so much about recently [14].3) says, τ is a sum of relative probability amplitudes for reaching the detector via different arms, multiplied by the corresponding durations spent in the field, thus, also an amplitude.And so is every other "weak value" [15].Your time is just the real part of a particular probability amplitude. A: But I have just measured it. B: Not quite, you just measured the spin, and then tried to learn something about electron's translational degree of freedom.In doing so, you relied on the first-order perturbation theory.Response of a system to a small perturbation is commonly described in terms of real valued combinations of the system's probability amplitudes. 
A: And what is then an amplitude?B: According to Feynman [3], it is a basic concept in our description of quantum behaviour. A: This does not tell me very much.Can you be more specific?B: I am afraid not.Nor, I suspect, can anybody else, unless a radically new insight into physics of the double-slit experiment is gained in future.In Feynman's words, at the moment "no one will give you any deeper description of the situation" [3]. The case of Ref. [9] is similar to the one just discussed, if not more involved (see Methods). In Fig. 1, there are only two routes by which an electron, starting in a state |G 0 , can reach the final position x, and the corresponding amplitude has two components, For a quantum particle crossing a potential barrier, there are many possible τ s, and many components to the transition amplitude [16], The mean angle of spin's rotation in a small magnetic field, confined to the barrier, is given by an analogue of (3) = ω L Re In the classical limit, highly oscillatory A(x ← G 0 |τ ) develops a stationary region around the classical duration τ class , where it varies more slowly.This is the only region contributing to the integral in (8), and one recovers the classical result, τ = τ class .But this well defined duration disappears already if A(x ← G 0 |τ ) has two, rather than just one, stationary regions, and we are back to the situation similar to the one shown in Fig. 1. Quantum tunnelling is a destructive interference phenomenon, where A(x ← G 0 |τ ) in Eq.( 6) has no stationary regions, and rapidly oscillates throughout the allowed range 0 ≤ τ ≤ T total . The tunnelling amplitude ( 6) is extremely small for a tall or a wide barrier (see the inset in Fig. 1).This happens not because A(x ← G 0 |τ ) is itself small, but because its oscillations cancel each other almost exactly.The delicate balance is easily perturbed, and an attempt to destroy interference between different durations would also destroy the tunnelling transition one wanted to study. III. DISCUSSION Finally, if Alice were to repeat also the experiment of Ref. [9], this is what Bob would say about her result."A fundamental problem, arising each time a Larmor clock is applied to tunnelling, but often overlooked -the proverbial elephant in the room -has to do with the quantum Uncertainty Principle.According to the Principle, one can have tunnelling, and not know the time spent in the barrier, or know this duration, but have tunnelling destroyed. 
One faces precisely the same choice in the double slit experiment, where he/she must decide between knowing the slit chosen by the particle, or having the interference pattern on the screen, but not both at the same time.You have tried to keep tunnelling intact (your clock perturbs it only slightly), and learn something about the duration spent in the barrier.You might expect the UP to make your result always look flawed in one way or another, but this is not how the UP works.If you consider all possible experiments of this type, some of them will give seemingly reasonable outcomes, whereas other 'times' would be negative, too short, too long, etc.This is necessary, and is possible because such 'times' can be expressed as the combinations of probability amplitudes which, unlike probabilities, have few restrictions on their signs and magnitudes.Though your result of 0.61 ms does look plausible you cannot recommend using it the way you would use a classical time scale just because of this.After all, in a double-slit experiment one cannot cherry pick the points on the screen, where the 'which way?' question can be answered meaningfully, since the Uncertainty Principle applies everywhere in equal measure.You cannot say that you resolved the controversy regarding how long a tunnelling particle spends in the barrier region, or proved that this duration is non-zero.The controversy, if you wish to call it that, goes to the very heart of the quantum theory, and must be accepted, rather than resolved." IV. METHODS A. Probability amplitude to spend a given duration τ in the barrier Consider a particle with a mean momentum p 0 , prepared in a wave packet state G 0 ( = 1), where a(p − p 0 ) discribes the distribution of the particle's momenta, and W (x) is the wave packet's envelope.At t = 0 the wave packet lies to the left of a potential barrier V (x) of a width d, as shown in the inset in Fig. 1.All momenta p in (9) are such that in order to cross the barrier the particle has to tunnel.The probability amplitude to detect the particle at x close to the maximum of the transmitted wave packet, after it has been in motion for T total seconds, can be represented as a sum over Feynman paths, where a path x(t) starts in x at t = 0, and ends in x at t = T total .The action functional is given by the usual S[x(t)] = T total 0 [µ ẋ2 /2 − V (x)]dt, with µ denoting the particle's mass. Each path spends a certain amount of time in the barrier region 0 ≤ x ≤ d.Thus duration can be computed with the help of a "stop-watch" (SW) expression, where θ [0,d] = 1 for 0 ≤ x ≤ d and 0 otherwise, so that only the time intervals spent in the barrier are added to the total.It is readily seen that τ SW [x(t)] cannot be negative, nor can exceed the time the particle was in motion, hence A simple cosmetic operation turns the path sum (10) into the sum over durations spent in the barrier.Restricting the summation to the paths which spend there precisely τ seconds, yields where δ(z) is the Dirac delta, and we have This is bad news for one's effort to determine the time actually spent in the potential -all such durations interfere.We are back to the Young's interference experiment, except that instead of two paths, each going through one of the slits, we have a continuum of routes, each labelled by the value of the τ SW [x(t)].According to the Uncertainty Principle [3] the "which way?" ( "which τ ?") question has no answer.The only exception is the classical limit. 
Typically, A(x ← G 0 |τ ) is highly oscillatory, but in a classically allowed case, e.g., with the barrier removed, the oscillations are slowed down near the classical value τ cl = µd/p. If A(x ← G 0 |τ ) has a unique stationary phase point of this kind, τ cl will appear as the only time parameter, whenever one evaluates integrals involving A(x ← G 0 |τ ), and classical mechanics will apply as a result. The problem with tunnelling is that no such preferred time emerges for a classically forbidden transition, and all τ s must be treated equally (a similar situation is shown in Fig. 3 of [6], although for a different quantity).To make things worse, in tunnelling the amplitude A(x ← G 0 ) is very small (∼ exp[−(2µV − p 2 0 ) 1/2 d] for a rectangular barrier), while A(x ← G 0 |τ ) is not.Thus, the exponentially small tunnelling amplitude results from a highly accurate cancellation between (not small) oscillations of A(x ← G 0 |τ ).For this reason, any attempt to modify or neglect any part of the integrand in Eq.( 13) would considerably change the result, and destroy the tunnelling. B. An uncertainty relation for the duration τ Although the Uncertainty Principle hampers one's attempts to ascribe a unique barrier duration to a tunnelling transition, there is still one more thing we can do.Writing the δ-function in (13) as and inserting it into (13), we note that the new action corresponds to adding to the barrier V (x) a rectangular potential λθ [0,d] (x(t)), a well or a barrier, depending on the sign of λ.Equation ( 13) can now be written in an equivalent form, where Ã(x ← G 0 |λ) is the amplitude to reach, at t = T total , the final location x from the initial state G 0 , while moving in a combined potential V (x) + λθ [0,d] (x).In other words, to evaluate the amplitude A(x ← G 0 |τ ) one needs to know the amplitudes of transmission for all composite potentials.And vice versa, to know the amplitude for a given potential one needs to know the amplitudes for all durations spent therein. Note next that even the calculation of the full amplitude distribution of the durations spent in a region [0, d] for a free particle, V (x) = 0, is already a non-trivial task.It involves evaluation of the transmission amplitudes for all rectangular wells and barriers, and integration in Eq.( 17).However, once A 0 (x ← G 0 |τ ) is obtained, the distribution for a rectangular As we mentioned above, in the semiclassical limit, the free amplitude distribution A 0 (x ← G 0 |τ ) develops a stationary region around τ = µd/p 0 .When the barrier is raised, the factor exp(−iV τ ) destroys the stationary region, A(x ← G 0 |τ ) rapidly oscillates everywhere, and A(x ← G 0 ) becomes small for a tunnelling particle. Equation ( 17) is a kind of uncertainty relation between the duration τ and the potential in the region of interest.It implies that a device employed to measure the τ must introduce some uncertainty into the potential, the greater the uncertainty, the more accurate the measurement.Which brings us to the Larmor clock. C. 
The Larmor clock The clock consists of a magnetic moment, proportional to an angular momentum (spin) of a size j, coupled to a magnetic field along the z-axis via Ĥint = ω L ĵz , where ω L is the Larmor frequency.By the time t, an initial state becomes rotated by an angle ω L t around the z-axis, Suppose the spin travels with a classical particle moving along a trajectory x(t), and the field exists only in the region 0 ≤ x ≤ d.Then the spin, precessing only when the particle is in the field, 0 ≤ x(t) ≤ d, ends up rotated by ω L τ SW [x(t)] by t = T total .Quantally, for a particle in the inset of Fig. 1, the final (unnormalised) spin's state can be found simply by adding up its states, rotated by ω L τ , each multiplied by the probability amplitude of spending in the field a net duration τ .The result is In general, the r.h.s. of (21) cannot be rewritten as a single rotation around the z-axis by an angle ω L τ , |γ(T total ) = exp(−iω L τ ĵz )|γ(0) , and no unique time τ can be associated with a quantum transition in this way. With the help of Eq.( 17), one obtains an equivalent form of Eq.(21), This shows that each spin component traverses the barrier as if the potential there were V (x) + mω L , so the potential, experienced by the particle as a whole, remains uncertain within the range from −jω L to jω L .As was already noted, a viable clock has to introduce this uncertainty, and we may ask what can be learnt about the duration spent in the barrier by applying the Larmor clock. An experiment could consist in detecting, at t = T total , the particle in x and its spin in a state |β = j m=−j β m |m .From (21) the corresponding probability is so that the relative change in the probability (23) with and without the magnetic field is where Z(β, γ) ≡ β| ĵz |γ / β|γ and τ ≡ is the complex time of Sokolovski and Baskin [12].The quantity in the l.h.s. of Eq.( 27) can so the modulus of τ can also be determined directly.Now there are many real valued time parameters related to the complex time ( 27), yet none of them is a suitable candidate for a physical time interval representing the net duration spent in the barrier.The easiest way to demonstrate it is to note that for an improbable transition, A(x ← G 0 |τ ) → 0, the denominator of (27) can be very small.At the same time, the numerator does not have to be small, since multiplication of A(x ← G 0 |τ ) by τ can destroy the cancellation, characteristic of tunnelling.Thus, |τ | may, in principle, exceed the total duration of motion, |τ | >> T total .This makes little sense, especially if one recalls that each and every Feynman path in Eq.( 10) spends in the barrier no more than T total . E. The Baz' clock Finally we briefly discuss a particular type of a weak Larmor clock, employing a spin-1/2 in a weak magnetic filed.It was introduced by A.I. 
E. The Baz' clock

Finally we briefly discuss a particular type of weak Larmor clock, employing a spin-1/2 in a weak magnetic field. It was introduced by A.I. Baz' more than fifty years ago [17], and recently implemented by Ramos et al. in [9]. Now ĵ_z = σ_z/2 (σ_z is the Pauli matrix), and the spin's initial direction is along the x-axis, whose azimuthal and polar angles are φ = 0 and θ = π/2, respectively. According to (21) the final (unnormalised) state of the spin is given by Eq. (29). As was discussed in subsection C, this cannot in general correspond to a rotation around the z-axis. On the other hand, in any state, a spin-1/2 must point along some direction on the Bloch sphere. Thus, we expect the state (29) to be rotated not only in the xy-, but also in the xz-plane. The state of a spin, polarised along a direction making angles δφ and π/2 − δθ with the x- and the z-axis, respectively, can be written as in Eq. (30). The appearance of not one, but two rotation angles was first noted by Büttiker in [11], albeit in a slightly different context. [Ref. [11] considered transmission of a particle with a known momentum p_0 which, in our language, corresponds to replacing A(x ← G_0|τ) with A(p_0 ← G_0|τ) ≡ ∫ exp(−ip_0 x) A(x ← G_0|τ) dx in all formulae, and making G_0 nearly monochromatic.] In [11] Büttiker defined two "times", τ_y ≡ δφ/ω_L and τ_z ≡ δθ/ω_L, which correspond to our Re[τ] and Im[τ], respectively. Ramos et al. measured both the real and the imaginary parts of τ, as can be seen in Fig. 3 of [9]. The authors of [9] found both parameters positive and concluded that their results were "inconsistent with claims that tunnelling takes 'zero time'". To abide by this conclusion one needs to take for granted that the "time tunnelling takes" exists as a meaningful concept, but this is not the case.

The confusion can be traced back to Büttiker [11]. When faced with two time parameters instead of one, he opted for a non-negative combination of the two, τ_x ≡ (τ_y² + τ_z²)^{1/2}. This equals the modulus of the "complex time" in Eq. (27), τ_x = |τ|. At least one point made in [11] requires a comment, if not a correction. In τ_x Büttiker believed he had found (we read in the Abstract of [11]) "the time interval during which a particle interacts with the barrier if it is finally transmitted." However, neither Re[τ] nor Im[τ], nor any combination of the two, can be interpreted as a physical time interval. A weighted sum of quantum mechanical amplitudes, τ, may not give a meaningful answer to the question "how much time does a tunnelling particle spend within the barrier region?" for the same reason the Uncertainty Principle [3] forbids identifying the particle's path in Young's double-slit experiment.

FIG. 1. A particle reaches the final position x after passing through an interferometer, and a weakly coupled Larmor clock is used to determine the duration it spends in the magnetic field. The case of tunnelling across a potential barrier, shown in the inset, is more complicated, yet conceptually similar.
Γ(τ|ω_L, β, γ) ≡ ⟨β|exp(−iω_L τ ĵ_z)|γ⟩ = Σ_{m=−j}^{j} β*_m γ_m exp(−imω_L τ). (24)

Thus, by measuring the probability (23), one can determine the absolute value of the integral in Eq. (23), which involves the amplitude distribution of the durations spent by the particle inside the barrier in the absence of the clock. Note that little is left of the original tunnelling transition, where the transmission amplitude A(x ← G_0) is typically small. As already mentioned at the end of subsection A, the presence of an additional factor such as Γ(τ|ω_L, β, γ) is likely to alter the destructive interference which defines tunnelling. As a result, ∫_0^{T_total} dτ Γ(τ|ω_L, β, γ) A(x ← G_0|τ) could differ from the original tunnelling amplitude in Eq. (14) by orders of magnitude.

D. A non-perturbing (weak) Larmor clock

One can try to return to tunnelling by sending ω_L → 0, and learn something about the tunnelling time from the particle's response to the clock. (This already bodes ill for one's task, since the uncertainty introduced in the potential will also tend to zero, which, according to subsection B, should lead to a large uncertainty in τ.) Nevertheless, to first order in ω_L we obtain Γ(τ|ω_L, β, γ) ≈ ⟨β|γ⟩ − iω_L τ ⟨β|ĵ_z|γ⟩. The resulting first-order change in the detection probability can be measured, and by choosing a different |β⟩ one can, in principle, determine the values of Re[τ], Im[τ], or indeed their various combinations. Moreover, for ⟨β|γ⟩ = 0, the modulus |τ| can be determined directly. Comparing (29) with (30) we find that the spin has rotated by the (small) angles

δφ = ω_L Re[τ] (in the xy-plane) and δθ = ω_L Im[τ] (in the xz-plane). (31)

We recall further that a spin travelling with a classical particle along a trajectory x_class(t) would rotate only in the xy-plane, by an angle ω_L τ_class = ω_L τ_SW[x_class(t)]. Thus, the first of Eqs. (31) looks like the classical result, with τ_class replaced by Re[τ]. The second of Eqs. (31) has no classical analogue, and should serve as a warning that a straightforward extension of the classical duration to the quantum case may not be possible. (One already knows this from the Uncertainty Principle.)
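As a consistency check of Eqs. (31), one can take any smooth model distribution A(τ) (a made-up Gaussian-times-phase stand-in below; its parameters carry no physical meaning), evolve the spin-1/2 exactly through the τ-weighted sum of rotations, and compare the resulting Bloch-sphere tilt angles with ω_L Re[τ] and ω_L Im[τ]. For a weak coupling the exact and first-order results agree.

import numpy as np

# Numerical check of the weak-clock relations (31), using an assumed smooth model
# distribution A(tau) (Gaussian envelope times a phase; not from a real barrier).
tau = np.linspace(0.0, 50.0, 40000)
dtau = tau[1] - tau[0]
A = np.exp(-((tau - 10.0) / 3.0) ** 2) * np.exp(-0.4j * tau)

tau_bar = np.sum(tau * A) / np.sum(A)           # complex time built from A(tau)

omega_L = 1e-4                                  # weak coupling
jz = np.array([0.5, -0.5])                      # j_z eigenvalues for spin-1/2
gamma0 = np.array([1.0, 1.0]) / np.sqrt(2.0)    # initial spin along the x-axis
gamma_T = np.sum(A[:, None] * np.exp(-1j * omega_L * np.outer(tau, jz)) * gamma0,
                 axis=0) * dtau

# Bloch-vector components of the normalised final state
norm = np.vdot(gamma_T, gamma_T).real
sx = 2 * np.real(np.conj(gamma_T[0]) * gamma_T[1]) / norm
sy = 2 * np.imag(np.conj(gamma_T[0]) * gamma_T[1]) / norm
sz = (abs(gamma_T[0]) ** 2 - abs(gamma_T[1]) ** 2) / norm

print("delta_phi:   exact =", np.arctan2(sy, sx), "  omega_L*Re[tau] =", omega_L * tau_bar.real)
print("delta_theta: exact =", np.arcsin(sz), "  omega_L*Im[tau] =", omega_L * tau_bar.imag)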
2021-02-08T02:16:07.665Z
2021-02-05T00:00:00.000
{ "year": 2021, "sha1": "10019a2d04dc40ced22f64f67c71efeb3419960a", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-021-89247-8.pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "10019a2d04dc40ced22f64f67c71efeb3419960a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
118896137
pes2o/s2orc
v3-fos-license
Results from 3D Electroweak Phase Transition Simulations

We study the phase transition in the SU(2)-Higgs model on the lattice using the 3D dimensionally reduced formalism. The 3D formalism enables us to obtain highly accurate Monte Carlo results, which we extrapolate both to the infinite volume and to the continuum limit. Our formalism also provides for a well-determined and unique way to relate the results to the perturbation theory. We measure the critical temperature, latent heat and interface tension for Higgs masses up to 70 GeV.

WHY 3D SIMULATIONS?

Perturbative calculations have been extremely successful in describing the physics of Electroweak interactions at zero temperature. However, at finite temperatures a purely perturbative analysis fails because of infrared problems: it is well known that the effective potential of the scalar field cannot be computed perturbatively for small φ, in the symmetric phase. Thus, the calculation of the quantities characterizing the phase transition (for example, the critical temperature T_c, the interface tension σ, and the latent heat L) requires the use of non-perturbative methods. A direct way to include the non-perturbative effects is to perform 4D finite-temperature lattice simulations of SU(2)-Higgs models. However, in the interesting parameter range the theory is still weakly coupled, and we can use perturbative dimensional reduction (DR) to convert the 4D action into a 3D effective one. This step consists of integrating out all the massive modes (not constant in imaginary time) of the theory. In this talk we present results from 3D simulations with Higgs masses up to 70 GeV (for earlier results, see [1,2]; the results presented here will be described in detail in [3]). We maintain that, in practice, 3D simulations offer several advantages. (II) The 3D theory is superrenormalizable: this gives an exact relation between the 3D lattice and continuum couplings in the limit a → 0, and we can relate any lattice observable to the physical one for given Higgs and W masses. (III) For a given a and N_x, the number of lattice variables is much smaller in 3D than in 4D, making the simulations easier. (IV) We can consistently include the effects of fermions and even typical extensions of the Standard Model (for example, minimal SUSY extensions, the two-Higgs model) in the purely bosonic 3D SU(2)-Higgs simulations [5].

The dimensionally reduced 3D SU(2)-Higgs Lagrangian is formally similar to the 4D one:

L = (1/4) F_ij² + (D_i φ)†(D_i φ) + m_3² φ² + λ_3 (φ²)²  (1)

where φ² ≡ φ†φ and the 3D couplings g_3² and λ_3 have dimension GeV (here we discuss only the case where A_0, the temporal component of the gauge field, is integrated over). We relate the 3D couplings to the 4D ones at 2-loop level by Green's function matching [4,5]; using this method the nonlocal 2-loop terms which plague straightforward DR [6] do not appear at all. The Lagrangian (1) is an approximation of the exact 3D one; by systematically estimating the effects of the neglected terms we can conclude that for m_H > 60 GeV the errors are less than 1%, depending on the observable. The 3D lattice action can be written in the standard discretized form. Due to superrenormalizability (II), we have an exact relation between lattice and continuum parameters, (β_G, β_H, β_R) ↔ (g_3² a, λ_3/g_3², m_3²/g_3⁴), when a → 0; for example, β_G = 4/(g_3² a) directly connects the coupling constant β_G to the lattice spacing a. In 4D, the corresponding relation contains the RG constant Λ_latt, which has to be fixed by measurements.
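To make the lattice-continuum relation concrete, the sketch below converts the β_G values used later in the talk into lattice spacings. The relation β_G = 4/(g_3² a) is quoted in the text; expressing a in units of 1/T additionally assumes g_3² ≈ g²T with an illustrative g² = 0.4, which is an assumption made here for orientation only.

# beta_G = 4/(g_3^2 a) is exact (quoted above); a*T below uses the assumed g_3^2 ~ g^2*T
# with an illustrative g^2 = 0.4, so those numbers are indicative only.
g_sq = 0.4
for beta_G in (5, 8, 12, 20):
    g3sq_a = 4.0 / beta_G          # g_3^2 * a, dimensionless
    print(f"beta_G = {beta_G:2d}:  g_3^2 a = {g3sq_a:.3f},  a*T ~ {g3sq_a / g_sq:.2f}")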
The 3D parameters are parametrized in terms of h ≡ m_H/(80.6 GeV) [3,5]. For each β_G, we convert the V = ∞ value of β_H,c to the transition temperature T_c. These are in turn extrapolated to the continuum limit, as shown in Fig. 3. For m_H = 60 GeV we have high-precision data for β_G = 5, 8, 12 and 20, and a good fit requires that we use a quadratic fit in 1/β_G. For m_H = 35 and 70 GeV linear fits are acceptable. The final results are given in Table 1; in all cases the transition is unambiguously of first order. Numerically, the T_c values from the simulations are quite close to the perturbative ones, but due to the very high accuracy they still differ at the 10σ level, signaling significant non-perturbative and higher-order perturbative effects.

Table 1. The critical temperature T_c, the interface tension σ and the latent heat L for different Higgs masses. The value of σ at m_H = 35 GeV comes only from β_G = 8 simulations. m_H/GeV: 35, 60, 70; T_c/GeV: 92.64(7), 138.38(5).

The interface tension and the latent heat

We measure the interface tension σ with the histogram method: at the critical temperature the distribution of the order parameter develops a double-peak structure (Fig. 1). The interface tension can be extracted from the infinite-volume limit of σ = ln(P_max/P_min)/(2A), where A is the area of the interface and P_max and P_min are the distribution maximum and the minimum between the peaks. To use Eq. (4), finite-size corrections are needed; for details, see [3]. A crucial requirement is the "flat minimum" in the distribution between the peaks; this excludes all but the largest cylindrical volumes from the analysis. In Fig. 4 we show the V → ∞ extrapolation of σ for m_H = 60 GeV. These values are then further extrapolated to β_G → ∞; the final value is σ = 0.0023(5) T_c³. This is substantially smaller than the perturbative result 0.008 T_c³, and signals the presence of non-perturbative effects for σ. For m_H = 35 GeV we cite only the β_G = 8 result (Table 1), since we do not have "flat" histograms.

The latent heat L can be extracted from the discontinuity of ⟨φ²⟩ at T_c. For details, we again refer to [3]; in contrast to σ, the continuum limit can be taken for all m_H, and the results are remarkably close to the perturbative values, as can be seen from Table 1.

The Higgs and W masses

In order to measure m_H(T) and m_W(T) we perform a separate series of simulations around T_c for m_H = 60 GeV. We observe good scaling between β_G = 8 and 12. Both m_H(T) and m_W(T) have a discontinuity at T_c, and the masses are higher in the symmetric phase. In Fig. 5 we show m_W(T) in units of g_3² = g²T. The value of m_W(T > T_c) contradicts the analytical limit m_W/g_3² < 0.29 [13]. Similar behaviour has been observed in 4D [9] and 3D [10] simulations at smaller m_H.
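The two analysis steps just described can be sketched numerically. The histogram and the β_G data below are synthetic placeholders (they are not the measured distributions or the paper's numbers); the point is only to show the form of the σ extraction and of the 1/β_G continuum extrapolation.

import numpy as np

# (1) Histogram method: at T_c the order-parameter distribution is double-peaked, and
#     sigma is extracted from sigma = ln(P_max / P_min) / (2A), A = interface area.
phi2 = np.linspace(0.0, 2.0, 400)
P = np.exp(-((phi2 - 0.3) / 0.15) ** 2) + np.exp(-((phi2 - 1.5) / 0.2) ** 2) \
    + 0.01 * np.exp(-phi2)                        # synthetic double-peaked histogram
P_max = P.max()
P_min = P[(phi2 > 0.5) & (phi2 < 1.2)].min()      # minimum between the peaks
A = 32 * 32                                       # toy interface area in lattice units
sigma_lat = np.log(P_max / P_min) / (2 * A)
print("sigma (toy, lattice units) =", sigma_lat)

# (2) Continuum extrapolation: fit O(beta_G) = O_cont + c1/beta_G + c2/beta_G^2 and read
#     off the beta_G -> infinity value, as done for T_c at m_H = 60 GeV.
beta_G = np.array([5.0, 8.0, 12.0, 20.0])
O = np.array([137.1, 137.8, 138.1, 138.3])        # synthetic data with 1/beta_G effects
coeffs = np.polyfit(1.0 / beta_G, O, 2)           # quadratic in 1/beta_G
print("continuum-limit estimate:", np.polyval(coeffs, 0.0))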
2019-04-14T02:30:11.159Z
1995-09-26T00:00:00.000
{ "year": 1995, "sha1": "56464e65bae4da222b7495918a7e8d1832225112", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-lat/9509086", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "faf8e60b2e0b5f3b29c48878d93507a7c123b9c6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
41018287
pes2o/s2orc
v3-fos-license
China ’ s Energy Transition in the Power and Transport Sectors from a Substitution Perspective Facing heavy air pollution, China needs to transition to a clean and sustainable energy system, especially in the power and transport sectors, which contribute the highest greenhouse gas (GHG) emissions. The core of an energy transition is energy substitution and energy technology improvement. In this paper, we forecast the levelized cost of electricity (LCOE) for power generation in 2030 in China. Cost-emission effectiveness of the substitution between new energy vehicles and conventional vehicles is also calculated in this study. The results indicate that solar photovoltaic (PV) and wind power will be cost comparative in the future. New energy vehicles are more expensive than conventional vehicles due to their higher manufacturer suggested retail price (MSRP). The cost-emission effectiveness of the substitution between new energy vehicles and conventional vehicles would be $96.7/ton or $114.8/ton. Gasoline prices, taxes, and vehicle insurance will be good directions for policy implementation after the ending of subsidies. Introduction China's carbon emissions from fossil fuel combustion and cement production were 9 Gt CO 2 in 2013, making it the country with the largest emissions in the world [1].The power and heating generation sectors and transport sector contributed 49% (4416.9MT) and 8.4% (760.2MT) of the total emissions respectively [2].Clean energy consumption in China is still at low levels, although coal consumption's share of total energy consumption has been decreasing in recent years [3].For the transport sector, shifting part of the vehicle fleet from fuel to electricity and natural gas (EVs and NGVs) is one of the current strategies to control the transportation sector's impacts due to the relatively low carbon content of EV and NGV emissions [4,5].High costs are an obstacle to the wide implementation of clean energy technologies in many areas of China.Despite the lower levelized cost of electricity (LCOE) from hydropower, at approximately USD 0.04/KWh, other renewable energy (RE) power had high costs, ranging from USD 0.05/KWh to USD 0.11/KWh in 2014 [6].As an emerging option, the market share of electric cars in 2015 was close to 1%.Financial incentives and the availability of charging infrastructure emerged as factors that were positively correlated with the growth of electric vehicle market shares [7]. Traditionally, natural gas had served as a bridge fuel for the transition from exhaustible fossil energy to zero emission renewables.Impacts on decadal-scale climate change from the increased use of natural gas are not consistent with existing studies: switching from coal to natural gas can reduce CO 2 emission, while other green gases from large-scale development and usage of shale gas make its lifecycle emissions higher than that of coal [8].In the power sector, conclusions show climate benefits ranging Energies 2017, 10, 600 2 of 25 from less than 6% (unconventional gas replacing conventional gas) to more than 30% for the switch to natural gas from coal [9].Compared to coal energy systems, natural gas systems can provide a part of the flexibility and storage and act as a robust backbone in the energy system.Gas-fired generation can provide flexible and controllable electricity at both centralized and decentralized levels [10].The flexible and storable nature of natural gas play a key role in the progress of the energy transition. 
Between 2005 and 2014, total annual natural gas consumption climbed from 46.4 billion cubic meters (bcm) to 185.5 bcm with an average annual growth rate of 11.6% [11].However, China did not make significant strides in the natural gas share of its primary energy supply which increased from 2.6% to approximately 6% in 2013, much lower than the global average (23.7% in 2013) [12].According to the National Bureau of Statistics of China (Beijing), the industrial sector accounted for 34% of the total natural gas consumption in 2012, which ranks the first, followed by the residential sector (20%), power sector (18%), transport sector (10%), and commercial sector (6%) [13].The left 10% was used for the non-energy purposes.China has made significant progress in diversifying its natural gas supplies, and China´s natural gas supply is mainly from imported LNG, imported pipeline gas, domestic conventional gas, shale gas, coal bed methane, and tight gas production.Domestic production rose 164% from 2004 to 2014, to 135 billion cubic meters (bcm).Natural gas imports, which did not begin until 2006, grew from 1 bcm that year to over 58 bcm in 2014 [11]. Based on flows instead of exhaustible stocks, the role of RE will grow in importance over time from a long-term energy security perspective.In general, having more RE in the system can increase its diversity, making it less sensitive to some types of disturbances [14].In addition to contributing to the social and economic development, energy access, and a secure energy supply [15], the transition of the energy system from fossil to renewable energies provides a chance to mitigate global warming and risks for ecosystems and human health [16].Regarding nuclear power, it is as competitive in terms of both effective productivity and steady operation as fossil-fired electricity and has similar environmental benefits as RE [17].According to the results of an EU case study [18], a 1% increase in renewable energy could achieve a 0.03% reduction of carbon emissions and a 1% increase in non-RE, mitigating environmental impacts by 0.44%.This will also happen in China.From 2010 to 2020, existing renewable electricity targets contributed to a 1.8% reduction in cumulative CO 2 emissions compared to a no policy scenario [19]. 
The objective of this study is to calculate the life cycle costs and emission reduction costs of energy transition in the power sector and transport sectors.Natural gas, renewable energies, and nuclear power are chosen to compete with traditional coal-dominated energy systems in both the power and transport sectors.Which kind of energies or technologies will play the more important role depends on their potential, availability, and accessibility, and on the policies that promote their development.In Section 2, resource endowment, accessibility, and policies in both sectors are reviewed based on existing literature.For the power sector, this paper focuses on a comparison of unit costs of different technologies in the mid to long term using the learning curve approach.For the transport sector, the replacement of natural gas and electricity vehicles lead to cost and emission changing.In Section 3, we introduce the learning curve approach to analyze the LCOE and the cost-emission effectiveness method to investigate the changes in costs and emissions from the energy substitution in the transport sector.In Section 4, we present the results of the calculation with the methodology introduced in Section 3 with a high RE scenario and "New Policies Scenario".Governmental subsidies and other variables that impact the costs of lowering 1 ton of GHG emissions are discussed. Natural Gas The results of natural gas resource endowments evaluated by peer experts are shown in Table 1.Based on a national survey published in 2005, conventional natural gas resources amounted to 56 trillion cubic meters (tcm) prospectively and 35 tcm geologically [20].Mohr and Evans [32] assessed the ultimately recoverable resources (URRs) of China's conventional natural gas, and the results showed that conventional gas URRs were about 5.28 tcm and 12.82 tcm in low and high scenarios, respectively. EIA estimates that China possesses 31.6 tcm (1115 trillion cubic feet) of risked technically recoverable shale gas resources [21].The risked oil and natural gas in-place estimates are derived by first estimating the volume of in-place resources for a prospective formation within a basin, and then factoring in the formation's success factor and recovery factor.Shale gas production has accelerated over the past two years, with total output reaching 5.72 bcm.In 2015 alone, shale gas production soared 258.5% from one year earlier to 4.47 bcm [33].The results of national oil and gas resource assessment show that geological resources of CBM were 36.8 tcm with about two-thirds in eastern China, and recoverable resources of 10.9 tcm [25].Coalbed methane URRs were estimated by Mohr and Evans [32] in low, best guess, and high scenarios, and the results were 2.77, 8.05, and 31.68 tcm respectively.Jia et al. used the analogy method to provide a preliminary evaluation of the tight gas resource potential in China, and proposed that the geological resources of tight gas in China are 17.4-25.1 tcm with recoverable resources of tight gas in China of 8.8-12.1 tcm [25].In the estimation of Mohr and Evans [32], China's tight gas URRs were 1.51, 4.02, and 10.31 tcm in low, best guess, and high scenarios, respectively.Wang et al. 
[34] developed three alternative scenarios of how unconventional gas resources are supplied, using the Geologic Resources Supply-Demand Model (Figure 1). Nuclear and Renewable Power China has 36 nuclear power reactors in operation, 21 under construction, and more about to start construction. The total nuclear capacity is set to rise to 80 GW by 2020, 200 GW by 2030 and 400 GW by 2050 [35]. In 2014, electricity generation from hydropower increased by 144.05 TWh, corresponding to an increase of 15.65% [11]. Wind generated 156.3 TWh in 2014 and accounted for 2.8% of total electricity generation in China (a marginal increase from 2.6% in 2013) [36]. Davidson et al.
[37] developed a model to predict how much wind energy can be generated and integrated into China's electricity mix, and estimated a potential production of 2.6 petawatt-hours (PWh) per year in 2030.Wind power can provide nearly three-quarters of the target of producing 20% of primary energy from non-fossil sources by 2030 if operational flexibility of China's coal fleet is increased [37].China is one of the countries with the highest solar technically potential and it has been estimated at 6900-70,100 TWh per year with a potential stationary solar capacity from 4700 GW to 39,300 GW and 200 GW of distributed solar capacity based on potential/demand ratio at provincial level [38].Table 2 shows the potential of RE sources.Total renewable electricity potential can reach at least 12,666 TWh/year, which is even higher than total demand in 2030 (11,900 TWh/year) [39] and can supply almost 88% of electricity demand in 2050 (14,300 TWh/year) [36].Table 2 lists the RE sources in China, for more information see in [40]. Energy Policy for Energy Substitution China's clean energy promotion policies include national and urban targets; regulations on utilizations; electricity pricing; and subsidies, tax relief and feed-in tariffs for electricity generation.An energy policy that promotes energy transition in the power and transport sectors is discussed in two groups: (1) law and regulations; and (2) finance and taxation. Power Sector According to the "Mid-to Long-Term Nuclear Development Plan (2011-2020)" issued in October 2012, China aims to have 58 GW (net) in operation by 2020, and 30 GW under construction at that time [42].The National Energy Development Strategy Action Plan (2014-2020) [43], Intended Nationally Determined Contribution (INDC) [44], and the 13th Five-Year Plan (2016-2020) planned the similar development targets for cleaner energy [45].The 13th Five-Year Plan significantly raises the installed capacity of the wind and solar power to 250 GWe and 150 GWe, respectively, and each has 50 GW more than in the 2014 action plan.In addition, installed nuclear power capacity is to reach 58 GW and hydropower reach 350 GW by 2020.Geothermal energy, bio-energy, and maritime energy will also be proactively developed (see in Tables 3 and 4). Policies Description References Natural Gas Utilization Policy Natural gas saving and improving energy efficient [46] Revised Natural Gas Utilization Policy Highlight the role of natural gas share in primary energy consumption and encourage natural gas used as fuel preferentially in residential, manufacturing, electricity and transportation sectors [46] Renewable Energy Law Enlarge the share of renewable energy, safeguard energy security and achieve the goal of sustainable economic development [47] Amendments to the Renewable Energy Law Further, strengthen the process through which renewable electricity projects are connected to the grid and dispatched efficiently [48] Related Transport Sector The General Office of the State Council of China (Beijing) has issued the "Energy Saving and New Energy Vehicle Industry Development Plan (2012-2020)" which is a major component of the national economy stimulus package during the period of global economic depression [58].The central government subsequently adopted a target of 500,000 cumulative EV sales by 2015, and 5 million by 2020 [59]. 
In order to help China, adjust its economic structure toward resource savings and move in an environmentally friendly direction, and recognizing the strategic impacts of new energy vehicles (NEVs) on the auto industry in the future, the State Council issued Decisions on Accelerating the Cultivation & Development of Emerging Strategic Industries in October 2010, and it selected NEVs as one of the seven strategic industries.In the policy, plug-in hybrid and pure electric vehicles were further highlighted as the focus of demonstration and commercialization [60]. The Administration Rules on Access to the Production of New Energy Vehicles (2007) has significant instructional function for the new energy vehicle industry's development in China [61].In June 2012, the State Council published the Planning for the Development of Energy-Saving and New Energy Automobile Industry (2012-2020) [58], which established the guidelines for the development of the new energy automobile industry in China. In March 2012, the Tax and Fee Notice of New Energy Cars and Ships was issued by the Ministry of Finance, State Administration of Taxation and Ministry of Industry and Information Technology of China (Beijing).According to this notice, the tax of energy saving cars and ships is reduced by half, and the tax of new energy cars and ships is waived from January 2012 [61].For more information about subsidies and tax incentives to encourage new energy vehicle consumptions, see [62,63]. The LCOE Formula for the Power Sector The overnight capital cost is the cost of building a power plant, assuming no interest occurs during construction.Because it does not take into account the cost of financing, it is a very useful cost measure that can compare the cost of technology and different countries without having to consider different power generation technologies and different countries' engineering capacity for different leverage, interest rates and construction time.According to the US Energy Information Administration (EIA) (Washington, DC, USA), the basic costs include: civil and structural costs, machinery and equipment supply and installation, electrical and instrument control, indirect project costs, owner costs.Under normal circumstances, the basic cost can be calculated as follows: where TI is the total investment in the project, and IDC is the benefit of the construction period (this paper assumes that there is no interest).The above basic investment equation is used to calculate the static base investment at a given point in time.In order to predict future infrastructure investments, it is divided into regional and global components: where OCC g is the global base investment, OCC l is the regional basis investment component, GLR cum is the global cumulative learning rate, LLR cum is the local cumulative learning rate. As shown in Equation ( 2), OCC is decomposed into local and global components.The local cost component is related to local learning speed, while global cost components are related to global learning speed.For non-price countries, the local cost component is a relatively small part of the total OCC, usually including non-trade items such as labor, land, permits and permits, electricity, and so on.The global cost component includes equipment, research and development, etc. and moves at the same rate worldwide. 
The cumulative learning rate is expressed by the product of the cumulative capacity of each technique and the learning rate.Thus, the global learning rate is the product of the global installed capacity cumulative deployment and the global learning rate for each technology, and the regional learning rate is the product of the local installed capacity cumulative deployment and the regional learning rate. Simply adding the product of the cost component to its learning rate is still insufficient because it does not take into account the varying price levels expressed by purchasing power parity (PPP) at the real exchange rate.Therefore, in order to calculate the future OCC, the cost component will be adjusted through its relative learning rate and local or global price level.The actual price level in some countries will rise, while the price levels of other countries may fluctuate.Therefore, OCC will not always be reduced.Some countries purchasing power evaluation data as shown in Table 5. Larsson et al. reviewed existing assessments of electricity production costs and the Formula (1) below is a widely-accepted approach for calculating the LCOE [65].Overnight capital cost (OCC) is a typical parameter that is used in the power generation industry to describe the cost of building a power plant overnight.We thus have: where OCC is the overnight capital cost, M f are the fixed operating and management (O&M) expenses, M t the variable O&M expenses at year t, F t the fuel costs at year t, E t the electricity generated in year t, i the interest rate (WACC), and n the lifetime of the power plant. By far, the most common model used in the energy literature to forecast changes in technology cost is the "one-factor learning curve" (or "experience curve") [66].The learning curve is based on empirical observations that present the relationship between the unit cost of the technology and its cumulative output (production) or installed capacity.A widely-used formulation is [66]: where C i is the unit cost of the technology, x represents cumulative experience, and a is the cost for the first unit, and b is an experience index.For power generation technologies the latter term is commonly quantified as cumulative installed capacity (MW).If the cumulative experience is doubled, the fractional reduction is given by: where the factor 2 b in Equation ( 5) is the progress ratio.For example, when LR is 0.15 it means that after one doubling of the cumulative installed capacity, the unit cost can be reduced by 15% of the original costs. Total Ownership Costs (TOC) and Life Cycle Emissions Method for Transport Sector We have: where TOC is the total ownership cost, I represent the initial cost, MSRP the manufacturer's suggested retail price, S the governmental subsidy, P the purchase tax, C t the maintenance and operating costs, C e is the energy costs, C m is the cost of maintenance, C T&I are the tax and insurance costs, i the discount factor and R the resale price.Maintenance and operating costs include energy cost, vehicle use tax and insurance, and battery replacement cost or machine oil change cost.Rose et al. 
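A compact sketch of the two calculations described above may help. The LCOE function below follows the standard discounted-cost form consistent with the variables listed in the text (the equation itself is not reproduced there), and the learning-curve helper implements C = a·x^(−b) with LR = 1 − 2^(−b). All plant numbers are illustrative placeholders, not the study's inputs.

import numpy as np

def lcoe(occ, m_fixed, m_var, fuel, energy, i, n):
    """Levelized cost of electricity: discounted lifetime costs over discounted output.
    occ: overnight capital cost ($); m_fixed, m_var, fuel: annual costs ($/yr);
    energy: annual generation (MWh/yr); i: discount rate (WACC); n: lifetime (yr)."""
    disc = [(1.0 + i) ** -t for t in range(1, n + 1)]
    return (occ + sum((m_fixed + m_var + fuel) * d for d in disc)) / sum(energy * d for d in disc)

def learned_cost(c0, cum_now, cum_future, learning_rate):
    """One-factor learning curve C = a * x**(-b): cost falls by the learning rate LR
    for every doubling of cumulative capacity, with b = -log2(1 - LR)."""
    b = -np.log2(1.0 - learning_rate)
    return c0 * (cum_future / cum_now) ** (-b)

# Illustrative 100 MW PV plant: $1000/kW overnight cost, 18% capacity factor.
occ_now = 1000.0 * 100e3                          # $
energy = 100.0 * 8760 * 0.18                      # MWh/yr
print("LCOE now    ($/MWh):", round(lcoe(occ_now, 2.0e6, 0.5e6, 0.0, energy, 0.07, 25), 1))

# Same plant after global cumulative PV capacity quadruples, with a 20% learning rate.
occ_future = learned_cost(occ_now, 1.0, 4.0, 0.20)
print("LCOE future ($/MWh):", round(lcoe(occ_future, 2.0e6, 0.5e6, 0.0, energy, 0.07, 25), 1))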
[67] established a cost-effectiveness function to compare the costs of conventional and alternative vehicles, and in [5,67], natural gas vehicles were used to substitute for diesel as a vehicle fuel: In this equation, FCR a and FCR c are the vehicle fuel consumption ratings of alternative vehicles and conventional vehicles; F a and F c are the fuel or electricity prices; VKT y the vehicle vehicle kilometers traveled per year; P/F and P/A are factors to calculate the present value of a future cost and the present value of a series of future constant annual costs assuming an economic lifetime n and a discount rate i; OM are the constant annual non-fuel operating and maintenance costs over the life ($); P a − P c the difference in purchase price of alternative-fueled and conventionally fueled vehicles; C 0 any other miscellaneous costs; GHG c − GHG a the total lifetime GHG emission difference between the two fuels and VKT t the total vehicle kilometers traveled over its lifetime.From the energy substitution perspective, the differences in maintenance and operating costs, and the resale price between alternative vehicles and conventional vehicles should be taken into consideration.OM costs are different between the CVs and alternative vehicles.This study establishes the following equation to evaluate the cost-emission effectiveness: Since the GHG emission reduction is used for normalization of the cost, the result of the calculation (in USD per unit CO 2 equivalent) will be either positive or negative.Should the alternative fuel/vehicle have lower emissions than the conventional one, which is generally the expectation, positive and negative numbers would indicate increased and reduced life cycle cost, respectively. Regarding life cycle emissions from electricity is shown as Equation ( 11): where, r i is the share of generation from technology i in total generation, and GHG i represents life cycle greenhouse emissions of one-unit generation from technology i that includes fuel energy extraction process transport and generation.Life cycle GHG emissions of gasoline include crude oil extraction and processing, transportation, oil refining, and diesel used for transportation to the end-use location. 
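The two quantities defined above can be written out directly: the life cycle GHG intensity of a generation mix is the share-weighted sum of Eq. (11), and the cost-emission effectiveness divides the extra lifetime cost of the alternative vehicle by the life cycle emissions it avoids. The shares, intensities and cost figures below are illustrative placeholders rather than the study's scenario values.

# Eq. (11)-type mix intensity: sum_i r_i * GHG_i  (all numbers illustrative)
mix = {"coal": 0.60, "gas": 0.05, "hydro": 0.20, "wind": 0.05, "solar": 0.03,
       "nuclear": 0.04, "biomass": 0.03}          # generation shares r_i (sum to 1)
ghg = {"coal": 1020, "gas": 520, "hydro": 12, "wind": 12, "solar": 45,
       "nuclear": 114, "biomass": 207}            # life cycle intensities, g CO2e/kWh
ghg_per_kwh = sum(mix[k] * ghg[k] for k in mix)
print("mix GHG intensity:", round(ghg_per_kwh), "g CO2e/kWh")

# Cost-emission effectiveness of one substitution (all inputs hypothetical):
# positive E_c means the cleaner option costs more per tonne of CO2e avoided.
toc_alt, toc_conv = 45000.0, 32000.0              # lifetime ownership costs ($)
ghg_alt, ghg_conv = 18.0, 38.0                    # lifetime emissions (t CO2e)
E_c = (toc_alt - toc_conv) / (ghg_conv - ghg_alt)
print("E_c:", round(E_c, 1), "$/t CO2e")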
Life cycle GHG emissions for this calculation includes the three primary types of GHG emissions (CO 2 , CH 4 and N 2 O) emitted during the life cycle process.Each of the GHG emissions is then converted to CO 2 equivalents (CO 2 e) according to their global warming potential (GWP) value [68]: Generation Cost Assumptions This sector analyzed the levelized cost of electricity (LCOE) trends of six generation technologies, and our assumptions are shown in Tables 6-10.Basic assumptions and learning rates used to forecast the LCOE of power production are shown in Table 6, respectively.Two generation mix scenarios (Tables 7 and 8) are considered to calculate the GHG emissions per unit generation of the generation mix.And the generation portfolios for 2015 represent observed portfolios.Table 10 is the global cumulative capacity deployments with the forecasted data from IEA 450 scenario due to the high installed capacity of RE in China.The generation portfolios for 2015 in Tables 7 and 8, represent observed portfolios.In order to reflect the impact of changes in fuel prices on future generation costs more accurately, this section refers to EIA's projected growth rate of nuclear fuel (−0.8%) and biomass price growth (0.2%) [69].On the basis forecast of the EIA, Li et al gave four scenarios of China's coal and natural gas prices growth Energies 2017, 10, 600 9 of 25 rates.Considering the rapid development of RE, this paper adopts the low growth rate scenario in the paper, namely, annually growth rate of the two fuels prices are both 1% [70].New Policies Scenario of the World Energy Outlook broadly serves as the IEA baseline scenario.It takes account of broad policy commitments and plans that have been announced by countries, including national pledges to reduce greenhouse-gas emissions and plans to phase out fossil-energy subsidies, even if the measures to implement these commitments have yet to be identified or announced.Two alternative vehicle types are included in this sector.First is a battery electric vehicle (BEV), which exclusively uses a large battery to power an electric motor to drive the wheels.The battery is recharged by plugging the vehicle into the electric grid; Second is a plug-in hybrid gasoline-electric vehicle (PHEV), which uses an electric motor to power the wheels.The battery can be charged by plugging the vehicle into the electric grid or by use of an on-board gasoline engine. Two BEVs and two PHEVs sold in the China market with governmental subsidies were selected for analysis, and corresponding similarly sized conventional models, are used as their counterparts (Table 11). 
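As a small illustration of the CO2-equivalent conversion mentioned above, each gas is weighted by its global warming potential. The 100-year GWP factors used below (25 for CH4, 298 for N2O) are the commonly used IPCC AR4 values and are an assumption here, since the text does not list the factors it takes from [68].

def co2_equivalent(co2_kg, ch4_kg, n2o_kg, gwp_ch4=25.0, gwp_n2o=298.0):
    # CO2e = CO2 + GWP_CH4 * CH4 + GWP_N2O * N2O (masses in kg, GWP factors assumed)
    return co2_kg + gwp_ch4 * ch4_kg + gwp_n2o * n2o_kg

# hypothetical fuel-cycle inventory: 200 kg CO2, 0.5 kg CH4, 0.01 kg N2O
print(co2_equivalent(200.0, 0.5, 0.01), "kg CO2e")   # 200 + 12.5 + 2.98 = 215.48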
Total energy costs depend on energy consumption rate, vehicle kilometers traveled per year and fuel prices.This study used electricity prices and gasoline prices in 2015, assumed to be $0.13KWh and $0.9768/L, respectively [73].The amount of the annual vehicle-use tax is related to engine displacement, and the newest rules for vehicle-use tax calculation were applied [74].According to China's conventional vehicle purchase tax is 8.55% of the MSRP [75] and the battery and fuel-cell electric passenger vehicles are exempt from the purchase tax (see Section 4).According to Bit Auto, the liability coverage costs are the same for all vehicles, and the annual collision coverage cost is equal to 1.088% of the MSRP plus 459 CNY ($75) for basic insurance fees [76].The resale value is 15% of the MSRP for an ICEV and 10% for a BEV [76].The discount rate is an average of [73,75,76] at 7.33%.Machine oil is assumed to be changed every 5000 km and to cost approximately 320 CNY ($52) per change [76].Owing to the long lifetime of batteries, battery change costs do not accrue during the BEV and PHEV lifetimes.Life cycle emissions of gasoline are shown in Table 12.Note: Source: http://chinaautoweb.com/ [77] with an exchange rate at 1 RMB = 0.165 USD [78]. Natural gas could be used as a primary energy for the transport sector in six ways: compressed natural gas (CNG), liquefied natural gas (LNG), methanol, gas-to-liquid (GTL), H 2 , and electricity pathways [79].As NGVs are usually used as taxis and conventional heavy duty vehicles, CNG-taxis and LNG-heavy duty vehicles were selected for the analysis of the cost efficiency of substitution between NGVs and conventional vehicles.The acquisition cost of a private CNGV in China is only $990-$1650 higher than that of an equivalent gasoline car and the cost of a liquefied natural gas vehicle (LNGV) is $3300 to $11,500 more than the cost of an equivalent conventional vehicle [80].It is difficult to select a vehicle mode, so for this study we made the assumptions listed in Table 12.Assumptions for CNGVs and LNGVs is shown in Table 13. Power Sector It can be seen from the Table 14 that the emission of greenhouse gas (GHG) emissions is the smallest, only 12 g CO 2 e/KWh, followed by biomass power generation, on shore wind power to the life cycle of GHG emissions (207 g CO 2 e/KWh) higher than the nuclear life cycle of GHG (114 g CO 2 e/KWh), mainly depends on the high capacity factor of nuclear power stability.The life cycle of GHG emissions from circulating natural gas is 520 g CO 2 e/KWh, which is about the life cycle of pulverized coal.GHG (1020 g CO 2 e/KWh) Half.In the two power generation scenarios, the combined GHG emissions were 485 g CO 2 e/KWh and 592 g CO 2 e/KWh, respectively.Non-coal power generation ratio increased by about 10%, the unit power generation life cycle GHG emissions decreased by 18% (more details can be seen in Tables A1-A7 in Appendix A). 
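Returning to the vehicle cost model, the ownership-cost inputs quoted at the start of this passage (an 8.55% purchase tax for conventional cars, insurance of 1.088% of the MSRP plus $75 per year, resale values of 15% and 10% of the MSRP, a 7.33% discount rate, and oil changes of about $52 per 5000 km) can be assembled into a minimal TOC sketch. The vehicle prices, energy consumption rates and mileage below are placeholders, not the models of Table 11.

def present_value(annual, rate, years):
    return sum(annual / (1.0 + rate) ** t for t in range(1, years + 1))

def toc(msrp, subsidy, purchase_tax_rate, energy_cost_per_km, other_om_per_year,
        resale_share, km_per_year=15000, years=10, rate=0.0733):
    initial = msrp - subsidy + purchase_tax_rate * msrp          # I = MSRP - S + P
    insurance = 0.01088 * msrp + 75.0                            # annual tax-and-insurance proxy
    annual = energy_cost_per_km * km_per_year + other_om_per_year + insurance
    resale = resale_share * msrp / (1.0 + rate) ** years         # discounted resale value R
    return initial + present_value(annual, rate, years) - resale

# Hypothetical compact BEV (0.15 kWh/km at $0.13/kWh) vs gasoline car (0.07 L/km at $0.9768/L)
bev = toc(msrp=38000, subsidy=8000, purchase_tax_rate=0.0,
          energy_cost_per_km=0.15 * 0.13, other_om_per_year=0.0, resale_share=0.10)
icev = toc(msrp=15000, subsidy=0, purchase_tax_rate=0.0855,
           energy_cost_per_km=0.07 * 0.9768, other_om_per_year=52.0 * 15000 / 5000,
           resale_share=0.15)
print("TOC BEV  ($):", round(bev))
print("TOC ICEV ($):", round(icev))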
When the carbon tax reaches $30/ton, the LCOE of coal-fired power generation increases from $69/MWh to $73/MWh from 2015 to 2030, natural gas-fired generation from $97/MWh to $111/MWh, nuclear power decreases from $58/MWh to $56/MWh, and biomass power costs remain at $66/MWh (Figure 2). In 2019, the LCOE of PV is close to that of natural gas-fired generation. The LCOE of photovoltaic power generation in 2023 is $68/MWh, which is lower than the LCOE of coal-fired power generation ($71/MWh). In 2024, the LCOE of PV is $63/MWh, lower than that of biomass-fired generation ($66/MWh) in the same period, and it will reach $56/MWh, lower than nuclear power costs ($57/MWh). The LCOE of onshore wind power in 2016 can reach $57/MWh, making it almost always the lowest-cost generation technology, and onshore wind power costs will fall to $44/MWh by 2030.

In the new policy scenario with the carbon tax ($30/ton), the LCOE of coal-fired power generation will increase from $75/MWh to $79/MWh during 2015-2030, natural gas-fired generation from $97/MWh to $109/MWh, nuclear will drop from $59/MWh to $57/MWh, and biomass power will remain at $80/MWh. The LCOE of photovoltaic power generation in 2020 will be $101/MWh, equivalent to that of natural gas generation ($101/MWh); the LCOE of photovoltaic power generation will be $82/MWh by 2030, higher than the LCOEs of the other four power generation technologies. By 2030, onshore wind power costs will be $44/MWh, the lowest of the power generation technologies.

In the High RE scenario, the LCOE of coal-fired power generation will increase from $39/MWh to $42/MWh during 2015-2030, and natural gas-fired generation costs from $82/MWh to $96/MWh; the LCOE of nuclear power will drop from $58/MWh to $56/MWh, and biomass power LCOE will remain at $60/MWh. In 2020, the LCOE of PV power will be $82/MWh, lower than that of NGCC. In 2024, the LCOE of PV will be $61/MWh, close to that of biomass-fired generation ($60/MWh). In 2025, the LCOE of PV will be $57/MWh, close to that of nuclear power ($56/MWh). The LCOE of photovoltaic power generation in 2030 is $45/MWh, which is still higher than that of onshore wind power and coal-fired power generation. In 2015, the LCOE of onshore wind power generation is $56/MWh. By 2020, the LCOE of onshore wind generation will be close to coal-fired power generation costs ($41/MWh); after that, onshore wind power remains at $41/MWh.

In the new policy scenario without the carbon tax, the cost of photovoltaic power generation in 2024 is $89/MWh, the same as the cost of natural gas-fired generation in the same period; in 2030, the LCOE of PV ($80/MWh) will still be higher than that of the other four power generation technologies. Onshore wind power LCOE will fluctuate less and will be $40/MWh by 2030.

Figure 3 shows the sensitivity analysis of photovoltaic and onshore wind power generation. The selected sensitivity factors are the capacity factor, the learning rate and the GHG emission tax. The results show that, in both scenarios, the capacity factor is the most important factor affecting the LCOE, followed by the learning rate and the GHG emission tax. In the new policy scenario, the change in capacity factor has the greatest influence on the predicted LCOE: when the onshore wind capacity factor increases from 15.6% to 36.4%, the LCOE drops from $68/MWh to $33/MWh, and when the PV capacity factor increases from 12% to 28%, the LCOE drops from $136/MWh to $59/MWh.
BEVs and PHEVs versus Conventional Vehicles

The TOCs for vehicles without and with a subsidy are presented in Figures 4 and 5, respectively. The color bars represent the different cost contributions to the TOCs, and the line indicates the TOC level after balancing out the negative resale value, which provides a comparison between alternative vehicles and conventional vehicles. The comparison of the four sets of results shows that, although BEVs and PHEVs have higher resale prices and lower operating costs, the consumer would have to pay another $19,733, $7106, $26,234, and $13,944 more than the conventional vehicle costs to buy the new energy vehicles. Apart from the MSRP, tax and insurance costs rank second for almost all new energy vehicles, while energy costs rank second for all conventional vehicles. Compared to the acceptable costs, the significantly higher MSRPs of BEVs and PHEVs require a large amount of government subsidy (Figure 5), even higher than the purchase price of the CVs.

As shown in Figure 6, based on the electricity mix in the two selected scenarios and on the life cycle emissions of power and gasoline (including manufacturing and fuel consumption), this study analyzed the unit GHG emissions of the four sets. Substitutions with BEVs could reduce life cycle emissions significantly, while those with PHEVs depend on the driving behavior. In the high-RE scenario, the emission reductions of the four substitution sets are 156, 74, 194 and 126 g CO2e/km; in the new policy scenario, they are 143, 62, 173 and 112 g CO2e/km, so overall GHG emissions per kilometer are roughly halved. The high-RE scenario results are about 12% lower than those of the new policy scenario; compared to the "New Policies Scenario", the 12% decrease in the share of coal-fired electricity thus does not show any prominent impact on GHG emission reduction per km. Relying on new energy vehicles to replace conventional vehicles to reduce GHG emissions therefore depends on a significant reduction in coal-fired power generation.

The cost-emission effectiveness (E_c) of substitutions between BEVs or PHEVs and CVs is shown in Figure 7. Despite the large government subsidies, the cost-emission effectiveness (E_c) of replacing CVs with BEVs is $87 and $114/ton, $224 and $266/ton, and $201 and $238/ton in the last three cases in the figure. In the E150 EV case, 1 ton of GHG emissions reduced by energy substitution could save $142-$162. As the total ownership costs considered in this study are private costs and the GHG emissions are fuel life cycle emissions, the costs of GHG emission reduction from substitutions between alternative vehicles and CVs will be distributed to other sectors.

Natural Gas as an Alternative Energy to Replace Gasoline or Diesel-Fueled Vehicles

Owing to their large VKT per year, CNGVs and LNGVs can provide large cost savings; for example, a CNG taxi could save $8307 and reduce 40 tons of GHG emissions (Table 15), and an LNG heavy-duty vehicle could save $52,833 and reduce 10 tons of GHG emissions during its lifetime. Compared to the BEV and PHEV cases, energy costs dominate the cost-emission efficiency owing to the longer VKT.
Subsidy Impacts

According to the new subsidy policy for alternative energy vehicles to be implemented from 2016 to 2020, the subsidy for passenger vehicles, except for fuel-cell vehicles, will be reduced to some extent between 2017 and 2020. Compared to the subsidy issued in 2016, the subsidy issued from 2017 to 2018 will decrease by 20% and that issued from 2019 to 2020 will decrease by 40%. China will end subsidies for new energy vehicles (NEVs) after 2020. When the subsidy decreases, the cost of avoiding GHG emissions increases approximately tenfold, and NEVs will no longer be economical compared to conventional vehicles. In the E6 400 set, the consumer would have to pay more than $900 to reduce 1 ton of GHG emissions in the new policy scenario. This policy may preserve the existing new energy car models, but it will slow down the trend of reducing GHG emissions. Other policies that promote GHG emission reduction should be imposed if subsidies decrease in the near future (Figure 8).
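The mechanism behind the jump in abatement cost when the subsidy is phased out can be illustrated with a few lines of arithmetic. The sketch below uses invented placeholder values for the subsidy, the unsubsidised cost premium and the lifetime emission reduction; only the 20%/40% phase-down schedule follows the policy described above.

# Illustrative effect of a subsidy phase-down on the cost of avoided emissions.
subsidy_2016 = 9000.0                     # $ per vehicle (assumed)
schedule = {"2016": 1.0, "2017-2018": 0.8, "2019-2020": 0.6, "after 2020": 0.0}

extra_cost_unsubsidised = 12000.0         # NEV minus CV ownership cost, $ (assumed)
avoided = 15.0                            # lifetime GHG reduction, tonnes CO2e (assumed)

for period, factor in schedule.items():
    ec = (extra_cost_unsubsidised - subsidy_2016 * factor) / avoided
    print(f"{period}: Ec = {ec:.0f} $/ton")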
Fuel and Electricity Prices

As calculated in Section 4, the LCOE of PV and wind power will decrease in the near future, while the share of renewable power increases in both scenarios. Furthermore, the electricity price depends on governmental policies, so future electricity prices are difficult to forecast. As analyzed in Section 4.1, both wind power and solar power costs will decrease from 2015 to 2030, but they will still be higher than coal-fired generation costs. Owing to the higher share of RE in the power sector, electricity prices will be higher than they are now. A wide range of electricity prices, from $0.13/kWh to $0.26/kWh, was therefore used, and the gasoline price was $0.825/L-$1.98/L, with an average of $0.9768/L [76]. The number of kilometres traveled every year is limited, and a variation of ±20% was assumed for it, as well as for the resale value, tax, and insurance costs. The sensitivity of the cost-emission effectiveness (Ec) of substitution between NEVs and CVs to these inputs was analyzed using a Monte Carlo simulation with the @Risk software based on Microsoft Excel.

The sensitivity analysis was conducted on the key variables that affect the cost-emission efficiency (Figure 9). In this study, the sensitivity of the cost-emission effectiveness (Ec) of substitution between new energy vehicles and their counterparts was determined by testing the coefficient values corresponding to the probability distributions of the relevant variables. Negative coefficient values mean that an increase in that variable results in a lower cost per unit of emission reduction. Figure 9 presents the results of the sensitivity analysis for the four sets. The increase in gasoline price (PHEVs and BEVs are compared against the same gasoline price), CV tax and insurance, VKT, and the resale value of new energy vehicles all have the positive effect of lowering the cost of reducing GHG emissions. According to the results, the cost-emission efficiency of the substitutions is most sensitive to the gasoline price in both scenarios and all sets. Among the other variables in the high-renewable-energy penetration scenario, tax and insurance costs for new energy vehicles and the electricity price have the largest impacts after the gasoline price. In addition, the BEVs and PHEVs produced by BYD rely much more on their VKT than the other vehicle models, and VKT is the second most influential variable for their cost-emission efficiency. Their high MSRPs lead to high purchase tax and insurance expenses for BEVs and PHEVs and result in high resale values of new energy vehicles. Thus, the cost-emission efficiency is more strongly correlated with the resale value of new energy vehicles than with that of CVs.
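The Monte Carlo sensitivity analysis described above can be reproduced in outline without @Risk: sample each uncertain input from its assumed range, recompute Ec for every draw, and rank the inputs by their correlation with the result. The sketch below (Python 3.10+) uses a toy Ec model and uniform input distributions; the correlation coefficients it prints are not the study's reported values.

import random
from statistics import correlation  # Python 3.10+

random.seed(1)

def ec(gasoline, electricity, vkt, resale_nev):
    # Toy Ec model: higher gasoline price and VKT favour the NEV (assumed form).
    extra_cost = 4000.0 - resale_nev - (gasoline - 1.0) * 0.06 * vkt \
                 + (electricity - 0.18) * 0.15 * vkt
    avoided_t = 130e-6 * vkt        # assumed 130 g CO2e/km advantage, in tonnes
    return extra_cost / avoided_t

inputs = {"gasoline ($/L)": (0.825, 1.98), "electricity ($/kWh)": (0.13, 0.26),
          "VKT (km)": (120000, 180000), "NEV resale ($)": (2000, 4000)}

draws = {k: [random.uniform(*rng) for _ in range(5000)] for k, rng in inputs.items()}
results = [ec(g, e, v, r) for g, e, v, r in zip(*draws.values())]

for name, series in draws.items():
    print(f"{name}: corr with Ec = {correlation(series, results):+.2f}")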
Conclusions

The LCOE of six generation technologies in China (coal-fired, natural gas-fired, biomass-fired, nuclear, wind, and solar PV) was projected from 2015 to 2030 using a learning curve approach under two scenarios. The results show that the LCOE of onshore wind and photovoltaic power generation can reach parity with the cost of thermal power when a high carbon tax is applied. Although the learning curve method used in this paper is a projection, the sensitivity analysis shows that the capacity factor is the most influential factor for the LCOE, so increasing the utilization of onshore wind and photovoltaic generation and accelerating the growth of their installed capacity is essential to reduce the LCOE. In the scenarios of this paper, the LCOE of photovoltaic power generation will reach $36-82/MWh by 2030, and the LCOE of onshore wind power will reach $39-43/MWh. The paper selected four groups of new energy vehicles and conventional vehicles to study the cost of emission reduction; for the selected vehicles, the conventional cars emit 262-343 g CO2e per kilometre over their life cycle, while the new energy vehicles emit 116-201 g CO2e per kilometre. The results show that replacing conventional cars with new energy vehicles has a positive effect on reducing GHG emissions, but at a higher cost; it is recommended that the government spread the cost of GHG emission reduction over the entire energy production and consumption process. Among the vehicles selected, the TOCs of NEVs were $1101, $6909, and $4163 higher than their counterparts, while the E150 EV saves $3577 thanks to large subsidies. Substitutions between BEVs or PHEVs and CVs (except for the E150 EV set) show that the cost of lowering GHG emissions by 1 ton is $87-$266. After 2020, this will increase to as high as $900 per ton of GHG emissions. This will be a heavy burden for NEV consumers and will obstruct progress toward reducing emissions in the transport sector. The government should initiate policies that allocate the cost over the entire life cycle of energy production and consumption. A longer VKT amplifies the impact of alternative energy on the cost-emission efficiency, as the comparison between the EV and NGV cases shows. If there are no subsidies, EVs should be considered viable vehicles in the public passenger and freight transport sectors.
Figure 1. Three scenarios of China's future unconventional gas production [34]: (a) High scenario; (b) Medium scenario; (c) Low scenario.

2.1.2. Nuclear and Renewable Power
China has 36 nuclear power reactors in operation, 21 under construction, and more about to start construction. The total nuclear capacity is set to rise to 80 GW by 2020, 200 GW by 2030 and 400 GW by 2050 [35]. In 2014, electricity generation from hydropower increased by 144.05 TWh, corresponding to an increase of 15.65% [11]. Wind generated 156.3 TWh in 2014 and accounted for 2.8% of total electricity generation in China (a marginal increase from 2.6% in 2013) [36].
Figure 2. LCOE ($/MWh) of different electricity technologies in the "high renewable energy penetration" (ERI & NDRC) and "New Policies" (IEA) scenarios: (a) High RE with GHG tax; (b) New Policy scenario with GHG tax; (c) High RE without GHG tax; (d) New Policy scenario without GHG tax.
Figure 3. LCOE ($/kWh) sensitivity analysis of onshore wind and solar PV in the "high renewable energy penetration" (ERI & NDRC) and "New Policies" (IEA) scenarios: (a) High RE onshore wind; (b) New policy onshore wind; (c) High RE solar PV; (d) New policy solar PV.
Figure 4. The TOCs for conventional vehicles, battery vehicles, and plug-in hybrid gasoline-electric vehicles (without government subsidy).
Figure 5. The TOCs for conventional vehicles, battery vehicles, and plug-in hybrid gasoline-electric vehicles (with government subsidy).
Figure 6. GHG emissions per km from the selected vehicles.
Figure 7. Cost efficiency of substitutions between BEVs or PHEVs and CVs.
Figure 8. Cost-emission efficiency from 2016 under future government subsidy policies: (a) High RE scenario; (b) New policy scenario.
Figure 9. Sensitivity analysis of the cost-emission effectiveness (Ec) of substitution between NEVs and CVs: (a1-d1) results in the high RE (HREP) scenario; (a2-d2) results in the new policy scenario for the four substitution sets.
Table 1. Natural gas resource endowment (in tcm) in China.
Table 3. Laws and regulations concerning cleaner energy use in the power sector.
Table 4. Finance and taxation policies that promote cleaner energy use in the power sector.
Table 11. Vehicle models and assumptions.
Table 14. Lifecycle unit GHG emissions from the power sector.
In 2024, the LCOE of PV is $63/MWh, lower than that of biomass-fired generation ($66/MWh) in the same period, and the LCOE of photovoltaic power generation will fall to $56/MWh, below nuclear power costs ($57/MWh). The LCOE of onshore wind power can reach $57/MWh in 2016, making it almost always the lowest-cost generation technology, and onshore wind power costs will reach $44/MWh by 2030.
Table 15. Comparison of results for NG fuel vehicles and CVs.
Table A3. Lifecycle greenhouse gas emissions of nuclear power (g CO2 eq/kWh).
Table A7. Lifecycle greenhouse gas emissions of onshore wind power.
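The cost trajectories quoted above come from a learning curve (experience curve) projection, in which unit cost falls by a fixed percentage for every doubling of cumulative installed capacity. The sketch below shows the generic form of such a projection; the learning rate, starting cost and capacity trajectory are illustrative assumptions, not the parameters used in the study.

# Generic one-factor learning curve: C(x) = C0 * (x / x0) ** (-b),
# where the learning rate LR = 1 - 2 ** (-b) is the cost drop per capacity doubling.
import math

def learning_curve_cost(c0, x0, x, learning_rate):
    b = -math.log2(1.0 - learning_rate)
    return c0 * (x / x0) ** (-b)

# Hypothetical PV example: $100/MWh at 50 GW cumulative capacity, 20% learning rate.
for capacity_gw in (50, 100, 200, 400, 800):
    lcoe = learning_curve_cost(100.0, 50.0, capacity_gw, 0.20)
    print(f"{capacity_gw:>4} GW cumulative -> LCOE ~ {lcoe:5.1f} $/MWh")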
2017-05-13T18:14:55.712Z
2017-04-29T00:00:00.000
{ "year": 2017, "sha1": "d3d7fbef9091a85e047ba6df2643858c8538704a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/10/5/600/pdf?version=1493456617", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "aa06611ea4f21d30996b7486fc1d2caca98551c0", "s2fieldsofstudy": [ "Economics", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Economics" ] }
43100817
pes2o/s2orc
v3-fos-license
Impact of ISO 9001 Standard on the Quality Cost of Construction Projects in the Philippines Since past two decades, ISO 9001 standard has shown its capabilities to lower cost, increase productivity, and satisfy stakeholders (customers) in the organizations. Although ISO 9001 standard has proven its benefits to different sectors in all over the world. But there is still debate among researchers and practitioners concerning the usefulness of applying ISO 9001 in construction projects. However, it seems that among different methods, quality cost analysis is an excellent technique to indicate how much ISO 9001 is able to improve effectively quality performance, and reduce costs in the projects. Thus, the main purpose of this study is to assess the effects of ISO 9001 implementation on quality cost in construction projects. For this aim, a literature review was conducted to design a structured questionnaire in a sample of the 67 respondents from ISO 9001:2008-certified projects of large-scale (AAA) construction companies in Metro Manila, Philippine. As a quantitative research, the inferential statistics analysis used to test the hypotheses of this study. Lastly, the results reported that ISO 9001 standard significantly affects the reduction of quality cost within construction projects in Metro Manila, Philippines. I. INTRODUCTION ISO 9000 has been widely adopted in the construction industry, and the number of ISO 9001-certified construction firms is growing considerably in many countries [4]. This international standard is one of most effective quality management methods that has implemented worldwide in different industries based on product or service since 1987. This quality management standard is an effective tool to achieve the objectives of the manufacturing and service sectors as well as construction industry. It can assure that all phases of construction project consistently meet client's requirements (need), and having continually improved quality goals. ISO 9001 is a systematic approach (QMS) that aims to promote quality performance continuously based on the implementation of its requirements, documentation procedure, and audit activities. This system can be a part of every project management processes from the moment the project initiates to the final steps in the project closure phase as well [1]. ISO 9001 standard can improve the quality level of production processes in the organizations by its generic guidance, and powerful methodology, which best known "Plan-Do-Check-Act" cycle, in order to achieve quality objectives successfully. The majority of construction firms in different scales in developing, or even developed countries believe the adoption of the ISO 9001 is just wasting time and money for consultancy, training, periodical internal and external audit, and certification fee, without any benefit, and the only advantages of ISO 9001 are to cover the requirements of the clients and competitiveness in the market, not more. In against, some studies reported that ISO 9000 has numerous benefits, which can optimize "internal procedures" within construction projects [4]. Furthermore, many construction firms have been tried to establish quality management system (QMS) in their projects by implementing ISO 9001, "but not enough work has been undertaken to assess the quality of the implementation of these QMS in individual construction companies" [13, p. 628]. According to [18, p. 
14], quality improvement programs such as ISO 9001 standard can be "critically analyzed using quality costing techniques to check the merit of the program" in the organizations. This method was first introduced by Crosby, as an appropriate method for measuring the performance of quality programs. Also, Juran explained the cost of quality as "cost of poor quality" can be caused by lack or inappropriate quality management implementation. Literature review unearthed that the cost of quality is not often used as an effective technique, or indicator for evaluating the performance of quality management approaches [19], in order to understand the effectiveness of quality management tools like ISO 9001 on the construction projects. However, "construction owners expect contractors to achieve continuous quality improvement by taking all possible measures to ensure the effectiveness of their QMS" [13, p. 612]. Surprisingly, literature review indicated that no study was conducted to examine the impact of ISO 9001 on quality cost, and its main elements. Thus, the general target of this study was to measure and clarify the effectiveness of ISO 9001 standard on reducing quality cost within ISO 9001:2008-certified projects of largescale (AAA) construction companies in Metro Manila, Philippines. As a correlational study, literature was first reviewed, in order to design an appropriate questionnaire, then its validity and reliability were tested by content validity and Cronbach's Alpha respectively. Then the questionnaires were distributed randomly among the respondents for collecting data. Finally, the simple linear regression was employed to analyze data, find results, and make conclusions for this study. A. ISO 9001 Standard International Organization for Standardization (ISO) is a worldwide federation of national standards bodies (ISO member bodies), which its intention is to design and present international standards by different technical committees for business, government, and different industries, etc. The most popular standards of ISO is ISO 9000 family or QMS that develop and maintained by ISO/TC 176 committee [8]. In 1987, ISO introduced a set of quality assurance standards, QMS standards originated from the UK standard (BSI 5750) for quality system. ISO 9000 family can improve quality performance of the organizations based on establishing an appropriate system for quality management, this system is able to generate a strategic decision for the organization with the aim of preventing wastes and unnecessary costs during the production processes of the products and services [7]. ISO 9000 family is included four standards, such as, ISO 9000 ("Fundamentals and vocabulary of QMS"); ISO 9001 ("Requirements of QMS"); ISO 9004 ("Managing for the sustained success of an organization"), and; ISO 19011 (Guidance for internal and external audits of quality management systems). Among these standards, ISO 9001 is the only standard, which is intended to certify the organizations by a third party certification as an evidence that the firm has been worked under a QMS standard [14]. ISO 9001 is included some clauses interpreting the QMS requirements, which are "generic" and applicable for implementing QMS within all organizations, regardless of their type, size and product provided for quality management system, technical committee of ISO is TC-176 that formulates all the standards of ISO 9000 [11]. 
Interestingly, ISO 9001 standard was promoted as quality management standards since 2000, when the ISO 9001 could adopt a framework based on PDCA cycle for QMS. Thus, ISO 9001 standard offers "a tested framework" to lead "business practices", and to consistently turn out quality products with minimum requirements, which should correspondingly perform for achieving QMS certification in the organizations [14], [12]. In addition, this framework is based on process approach, ISO defined process as a set of interrelated or interacting activities that use inputs to deliver an intended result. The aim of the process approach is to increase an organization's effectiveness and efficiency, in order to achieve its defined objectives such as customer's satisfaction corresponding the requirements, and likewise identifying and reducing the production process problems (e.g. failure, defects, non-conformance, wastes, delay, etc.) with the aim of reducing the quality cost of products or services [7]. B. Costs of Quality The cost of quality is one of the total quality management (TQM) tools. The cost of quality management system acts as the most appropriate method for "measuring", "monitoring", "controlling" and "decision making" activities in a firm which aims at "business excellence" and also specifies the "non-value added" costs [18]. In the majority of the organizations, quality costs can be often between "10 to 30 percent of sales", or "25 to 40 percent of operating expenses". Some of these costs are visible, some of them are hidden [16]. [3] was first introduced quality cost in his book "Quality Is Free". He justified that "quality is measured by the costs of quality which is the expense of non-conformance as the cost of doing things wrong". Crosby and Juran emphasized on the role quality cost as "the primary management tools", in order to ensure that the quality improvement has been happening through the implementation of quality management program/s [5,16]. According to [17], the most important issue to improve the competitiveness of any organization is to control and reduce quality costs, and many studies indicated the majority of the companies do not consider this technique as a powerful tool to improve quality of the products and services. Interestingly, 90% of the quality cost is hidden, but 100% or real cost of quality can appear by quality cost analysis. The adoption of quality costs can help the firms to survive in the market, reduce rework costs, and improve the quality of products or services more than those companies do not use this method [18]. In fact, the quality costs can be classified into the four groups that are namely: 1. Prevention costs: These expenses related to costs of design and manufacturing that are directed toward the prevention of non-conformance and defect [10], such as quality planning; new-products review; process planning; process control; quality audits, and; training. 2. Appraisal costs: Those costs are included the expenses of measuring, evaluating, or auditing products of product or process, which assure the products or services are conformance with the specified requirements, standards, and the requirements of the customer in general [9], [15]. 3. Internal failure costs: These costs happen when the outcome of product or service process cannot meet designed quality standards and the requirement of the customer, and this failure is found before transfer and delivery to the customer. 
These costs are Included in this area are scrap; rework; repair; downtime; defect and scrap evaluation [10], and; 4. External failure costs: These expenses generate when products or services cannot satisfy customer or specified requirements but the defects could not be discovered till delivery to the customer. These expenses are customer returns and allowances; repair and servicing; warranty claims; complaints, and; and image [14]. According to [16], the quality costs (Prevention, appraisal, internal and external failure costs) can be categorized into two main groups, which simplifies the analysis of total quality costs in construction industry, the cost of control and the cost of failure. The breakdown of these costs is shown below: C. ISO 9001 standard and quality cost in construction In construction projects, the prevention costs are the expenses of quality activities for preventing "deviations". The appraisal costs are related those expenses that can indicate whether a product, process, or service conforms to customer's needs and requirements. The failure costs are the expenses that project should spend if their product or service could not meet the specific requirements and standards [16]. "The relationship between these costs is reflected in the 1-10-100 rule", one dollar spent on prevention will save $10 on appraisal and $100 on failure costs [17]. Consequently, the companies should not withhold spending money for prevention and appraisal costs, in order to eliminate or reduce failure costs, because of much less costly to prevent a defect than to correct one. Juran's graph can show, how quality program can affect the quality costs (cost of poor quality) of the products or services. This graph is included two axises that are time on the horizontal axis and cost of poor quality on the vertical axis, Juran stated that quality cost or "non-quality" is the best parameter to evaluate quality improvement into the organization. As depict in Figure 1, the minimum level of total quality costs can be obtained, when the quality of conformance is 100 percent (perfection). In the reality, it is impossible to achieve perfection. It is justifiable that prevention and appraisal costs (control costs) increase slowly, while failure costs as well as total quality cost are reducing considerably during implementing quality management tool/s in the firms [9]. Indeed ISO 9001 standard as one of the effective quality management techniques has an important role to reduce quality costs in the projects. Some studies were proven its efficiency in saving money within various sectors. According to [20], [21], those organizations that possess ISO 9001 certification can accomplish their performance better, and reduce the expenses of the production processes more than non-certified companies in contrast. Also, [9] asserted that the main purpose of ISO 9001 is to promote the organization's effectiveness and efficiency, with the aim of satisfying customers. Therefore, QMS tries "identifying" and "eliminating" the root causes of the problems during producing the products, or services. This process can help the organization to reduce errors, defects, rework, and delay. Similarly, [21] reviewed 82 articles to specify the most significant advantages of applying ISO 9001 in the organizations. Their findings revealed that the implementation of ISO 9001 can generate positive changes to "reduce mistakes" and "rework", "save on costs" and "improve the management of the firms". 
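The grouping described above, control costs (prevention plus appraisal) versus failure costs (internal plus external), and the 1-10-100 rule can be expressed in a few lines of bookkeeping. The sketch below uses invented placeholder figures, intended only to illustrate the classification, not data from any surveyed project.

# Grouping of quality costs: cost of control vs. cost of failure.
quality_costs = {
    "prevention": 20_000,        # quality planning, training, process control
    "appraisal": 35_000,         # inspection, testing, audits
    "internal_failure": 60_000,  # scrap, rework, repair, downtime
    "external_failure": 90_000,  # warranty claims, complaints, returns
}

control = quality_costs["prevention"] + quality_costs["appraisal"]
failure = quality_costs["internal_failure"] + quality_costs["external_failure"]
print(f"cost of control: {control:,}   cost of failure: {failure:,}")
print(f"total quality cost: {control + failure:,}")

# 1-10-100 rule: every dollar put into prevention is said to avoid roughly
# $10 of appraisal effort and $100 of failure cost downstream.
extra_prevention = 1_000
print(f"${extra_prevention:,} more prevention ~ ${100 * extra_prevention:,} less failure cost")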
In construction projects, some empirical evidence reported the existence of a direct positive association between ISO 9001 standard and quality costs, for example, [11, p. 210] found that "quality cost reduction", and "prevention of errors from the start" are two of most important achievements of ISO 9001 in construction projects. Likewise, the results of reviewing some empirical studies in construction companies by [6] showed that the average number of defects of ISO 9001-certified construction projects were significantly less than the number of defects of construction projects without ISO 9001 certification. Furthermore, [4] asserted that the case studies indicated ISO 9001 can assist construction project to avoid costly errors and failure. Thus, the reduction of quality costs by implementing ISO 9001 in projects can cause success in construction projects by satisfying stakeholders. III. RESEARCH FRAMEWORK AND HYPOTHESES Based on previous discussions, a framework was formulated and presented, in order to investigate and identify the relationships among ISO 9001 standard and the quality cost, and its main elements (control and failure costs) within the construction projects in Metro Manila, Philippines. As depicted in Figure 2, the independent variable of this research framework is ISO 9001 standard and the dependent variables are two main factors of quality costs, such as, failure costs, and control costs. Also, the following hypotheses were developed based on research framework to examine these relationships: H1: Control costs can be significantly increased during ISO 9001 implementation within the construction projects; H2: ISO 9001 standard can statistically decrease, or even eliminate failure costs in construction projects, and; H3: ISO 9001 implementation can minimize quality costs in general. A. Research Design As a deductive research, mono method quantitative design was employed to identify the relationships among independent variable as ISO 9001 standard and quality cost variables within ISO 9001:2008 certified projects of largescale (AAA) in Metro Manila, Philippines. In the first step, the study was carried out an in-depth literature review for determining the research problems, and the main concepts of the research, in order to design an appropriate research instrument related to the content of the study. Then a survey was administered and the questionnaires were randomly distributed among managers working at different levels in construction firms for collecting data. Finally, data was analyzed by inferential statistical analysis to obtain the results and conclusions. B. Sampling Technique and Sample Size For sampling, the simple random sampling technique was employed. The researcher distributed the questionnaires randomly to the individuals working in ISO 9001-certified projects of large-scale (AAA) firms in Metro Manila, Philippines. These Large-scale construction firms found and selected from the list of Philippine Contractors Accreditation Board (PCAB). Also, data of the study was obtained from those who worked at management level in construction companies. However, a total of eighty questionnaires that sent to the projects of Large-scale (AAA) construction companies, just 84% of them were duly completed and returned. Accordingly, the response rate of 84% achieved valid out of the 80 questionnaires, or the 67 usable questionnaires were used in the statistical analysis of this research. C. 
Data Collection
In this research, secondary data were obtained from scholarly books and articles on ISO 9001 and the quality cost technique. Primary data were collected using the survey instrument. The questionnaire comprised three sections (30 items). The first section consisted of 24 items concerning the requirements (clauses) of ISO 9001. The second section included two parts: the first part covered the impact of the ISO 9001 standard on control costs (3 items), and the second part had three questions on the impact of ISO 9001 on failure costs in construction projects. The responses to all items of sections I and II were given on a five-point Likert-style scale (from 1 = strong disagreement to 5 = strong agreement). The content validity of the items was subjectively evaluated by four experts before collecting data. In addition, based on the data obtained from the survey, the reliability of the research instrument was tested with Cronbach's Alpha to ensure reliable results.

D. Data Analysis
As a quantitative study, the data were analyzed using the Statistical Package for Social Sciences (SPSS) Version 17. [2] suggested that simple regression analysis is an appropriate statistical technique for testing the impact of an independent variable on dependent concepts. Therefore, this method was adopted, using regression equations of the form Y_i = β0 + β_i X + ε to estimate the effects of ISO 9001 on quality costs and their elements in construction projects at significance levels of 0.01 and 0.05 (1-tailed), where Models 1, 2 and 3 give the impact of ISO 9001 on control costs, failure costs and total quality costs respectively; β0 is the constant of proportionality; X is ISO 9001 certification; ε is the error term; and β1, β2 and β3 are the unstandardized regression coefficients for control costs, failure costs and quality costs respectively.

V. RESULTS
A. Reliability and Validity of the Research Instrument
In this study, the questionnaire was carefully designed based on previous studies on the ISO 9001 standard and quality costs. The questionnaire was then given to experts to evaluate its content qualitatively (content validity). As stated by [2], exploratory and confirmatory factor analysis require sample sizes of at least about 100 and 150 respectively. Accordingly, this study was not able to use these methods to test the validity of the research instrument, because the sample size was small (67). However, the reliability of the questions was measured with Cronbach's Alpha, the method most widely used by researchers to identify and omit unreliable items of a research instrument. Reliability indicates the internal consistency of the scale items with each other. The reliability coefficient alpha (⍺) lies between 0 and 1, and a scale is considered reliable if its alpha is at least 0.70 [2]. As presented in Table I, the overall value of Cronbach's Alpha for the independent variable was about 0.914, which means that the scale items for ISO 9001 were reliable, while the overall coefficient for quality costs (control and failure costs) was 0.897. Cronbach's Alpha for control and failure costs was 0.826 and 0.833 respectively, so all items are in an acceptable range. Meanwhile, three items of the ISO 9001 measures and one item of the failure cost measures were identified as unreliable scales, and they were dropped before hypothesis testing.
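Cronbach's alpha used above can be computed directly from the item responses: it compares the sum of the individual item variances with the variance of the total score. A minimal sketch is shown below; the toy response matrix is invented purely for illustration and does not reproduce the survey data.

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total score)).
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = Likert items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Toy data: 6 respondents answering 3 five-point Likert items.
responses = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5], [3, 2, 3]]
print(f"alpha = {cronbach_alpha(responses):.2f}")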
B. Hypotheses Testing
As stated previously, simple linear regression analysis in SPSS was used to examine the three hypotheses of this study. Three models were developed to estimate the hypothesized relationships between the ISO 9001 standard and quality costs and their main elements (control and failure costs). As shown in Tables II and III, the Adjusted R² is 0.273 in Model 1, which means that the ISO 9001 standard explains 27.3 percent of the variance in the dependent variable, control costs; the β coefficient (β1 = 0.533) and F statistic (F1 = 25.775) were significant at the 0.01 level since p < 0.01. Model 1 is therefore statistically significant at 1%, and the first hypothesis (H1), concerning the impact of ISO 9001 on increasing control costs, is strongly supported and accepted at the 1% significance level. In Model 2, ISO 9001 as the independent variable accounted for 5% of the variation in failure costs (Adj. R² = 0.05), and the regression analysis showed (β2 = 0.254; F = 4.486) that ISO 9001 affects the reduction of failure costs at the 5% level of significance since 0.038 < 0.05, while it is not significant at 1% because 0.038 is greater than 0.01. Consequently, H2 is accepted at the 5% significance level only. The third hypothesis (H3) concerns the effect of ISO 9001 on the combined control and failure costs, that is, quality costs as a whole. According to the regression analysis, the F value (21.685) and the standardized coefficient β (β3 = 0.500) were significant (p < 0.01), while the Adjusted R² of 0.239 means that the independent variable (ISO 9001) explains 23.9% of the variation in quality costs overall. Thus, the findings of the regression analysis strongly confirm H3 (1%).

VI. DISCUSSION AND CONCLUSIONS
As stated by [18], [19], [5], [16], [3], [9], [17], quality cost analysis is an effective management tool and an appropriate indicator for assessing the efficiency of quality management programs such as the ISO 9001 standard, Six Sigma and Lean Production in different industries. Therefore, this study sought to explore the impact of the ISO 9001 standard on quality costs and their main factors (control and failure costs) within ISO 9001:2008-certified projects of AAA construction firms in Metro Manila, Philippines. The findings reveal how much the ISO 9001 standard can assist construction firms in achieving their goals by reducing project expenses. The regression analysis showed that the implementation of ISO 9001 increases control costs (prevention and appraisal costs) in construction projects at the 1% significance level. These findings support the view of [9], who argued that quality management programs initially increase an organization's expenses, but that total quality costs decrease considerably after a certain period of time, as can be seen in Figure 1. The study likewise investigated the impact of ISO 9001 on failure costs in construction projects. The results revealed that ISO 9001 does not reduce failure costs at the 1% level of significance, while its effect on failure costs within projects of ISO 9001-certified construction companies was significant at the 5% significance level.
This outcome from analyzing simple regression is consistent with the studies of [20], [21], who found that ISO 9001 improves the quality performance of the organizations to "reduce mistakes" and "rework", "failure costs" if the standard has been implemented properly. Furthermore, the construction studies reported that the "number of defects" [6] and errors and failure [4] of ISO 9001-certified construction projects are significantly less than the construction projects without ISO 9001 certification. From literature review, the empirical studies of [11], [4] are also supported the findings of this research concerning the significant effects of ISO 9001 on quality costs at 1% significance level in construction projects. Thus, this study concluded ISO 9001 standard is an effective quality management technique that can improve non-stop the quality of construction processes with the aim of reducing the expenses of projects and quality costs. As can be seen in Table III, two of three hypotheses were accepted at 1% level of significance, and the only hypothesis 2 related to failure costs was significant at 5%. From this finding, it might be justified that the majority of construction companies have been adopted ISO 9001 as a marketing tool instead of using it as an effective management tool to solve their quality problems and eliminate internal and external failure for reducing their expenses. According to [4], this kind of notion can often cause that the construction projects implement ISO 9001 improperly, it is indeed a big challenge and barrier to achieving the advantages of ISO 9001 standard within construction projects. Thus, this study recommended to management of the construction firms in emphasizing more on quality aspects of ISO 9001 (QMS) within projects than commercial and marketing issues only, in order to improve continuously the construction projects' performance and reduce the quality costs of construction processes, which can promote the satisfaction of the customers and owners, and being more competitive. For further study, it is suggested to evaluate how new version of ISO 9001 (2015) can affect quality costs and its main elements in construction projects.
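The three regression models used in the hypotheses testing above can be reproduced in outline with any statistics package: regress each quality cost score on the ISO 9001 implementation score and read off the coefficient, F statistic, p-value and adjusted R². The sketch below shows the procedure on invented data using scipy; the numbers it prints are not the study's results.

# Simple linear regression of a quality-cost score on an ISO 9001 score.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 67                                   # sample size used in the study
iso_score = rng.uniform(2.5, 5.0, n)     # invented ISO 9001 implementation scores
control_cost = 1.0 + 0.5 * iso_score + rng.normal(0.0, 0.6, n)  # invented responses

res = stats.linregress(iso_score, control_cost)
r2 = res.rvalue ** 2
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - 2)
f_stat = (res.slope / res.stderr) ** 2   # for one predictor, F = t^2

print(f"beta = {res.slope:.3f}, F = {f_stat:.2f}, p = {res.pvalue:.4f}, adj R2 = {adj_r2:.3f}")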
2017-10-17T15:56:21.076Z
2017-01-24T00:00:00.000
{ "year": 2017, "sha1": "00907bfc07d2685b2dcf1889db5cd3433ed67928", "oa_license": "CCBYSA", "oa_url": "https://www.ssoar.info/ssoar/bitstream/document/51417/1/ssoar-2017-neyestani_et_al-Impact_of_ISO_9001_Standard.pdf", "oa_status": "GREEN", "pdf_src": "ElsevierPush", "pdf_hash": "09815b1ec7434d746579401663c70127569a889a", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Engineering" ] }
164421458
pes2o/s2orc
v3-fos-license
Direction of Arrival Estimation in Multiple Antenna Arrays by Using Power Delay Profile for Random Access Performance in 5G Networks —In the transition from 4G to 5G, various solutions are being developed to improve performance for features such as data rates, latency, connectivity density and reliability. One of these features is Random Access Procedure because of the increasing number of users in 5G networks. In the random access method, user equipment performs a random access to the base station with a preamble and thus registers itself to the base station. However, if more than one user equipment use the same preamble at the same time, collision occurs and the registration process in the base station could be halted. In this paper, a new method is proposed which can be used to calculate Direction of Arrival between adjacent antenna signals in the antenna array with the help of the phase differences. Thus, the collision can be avoided by using the beamforming technique of the MIMO system using the calculated arrival angle of the user equipment. The proposed method is verified for two and three user equipment placed at different angles, at different distances to an antenna array consisting of 10 antennas. I. INTRODUCTION T IS EXPECTED that in the Fifth Generation (5G) wireless communication systems, the connection density will increase more than tenfold compared to the Fourth Generation (4G) systems [1]. As a result of this increase, it is clear that user elements (UE) will cause significant congestion in accessing LTE services. To meet this demand, one of the solutions is the using location division multiplexing systems which is supported with multi input multi output (MIMO) antenna systems in addition to time and frequency division multiplexing structures with the intelligent deployment of Resource Elements (RE) [1]. Furthermore, Random Access (RA) that UEs use for the first access to base station is one of the bottlenecks that make LTE services difficult to access and the number of preambles, already used in 4G networks to solve the Random Access collision problem seems to be insufficient. It is clear that solutions that solve this congestion will enable more UEs to be registered to the base station more quickly. Registration of the UE to the base station is done over the Random Access Channel (RACH) in Long Term Evolution (LTE) networks. However, concurrent access of UEs to the same channel causes signal collision. To overcome this problem, a preamble structure was introduced in 4G. With the help of the preamble, the randomly selected Zadoff Chu sequences are being used to eliminate the effects of collisions at the base station [2], [3]. Since the number of users in 5G networks is expected to be much more than the 4G network, the number of Zadoff Chu series may be insufficient for the first registration of the UEs. Reusing the same Zadoff Chu series on the random access channels which are multiplexed in the 3D space with the help of MIMO antenna system might be a solution to the this problem. To do this efficiently, it is very important to correctly calculate the Direction of Arrival (DoA) of the different UEs in the coverage area of the base station. Using angular position information, the initial registration and channel allocation procedures for UEs can be performed using the same Zadoff Chu series with the help of the RACH Procedure. DoA resolution algorithms have been extensively investigated and in a variety of solutions were proposed in the literature. 
Delay-and-sum techniques such as the Bartlett method try to magnify the signals from a certain direction by compensating for the phase shift [4], [5]. The Capon method is a minimum variance method that estimates the direction of arrival by adjusting the weights to minimize the array power subject to a unity gain constraint [6]. The MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithms, which belong to the subspace methods, are built on eigenvectors, eigenvalues and spectral matrix theory [7]. The above algorithms use the correlation of the signals received by the different antennas in the multi-antenna structure. Apart from these studies, subcarrier-based beamforming methods and multi-level beamforming approaches for more efficient operation are available [8], and some studies also use two-dimensional antenna arrays [9]. Other recent studies on DOA estimation use the MUSIC algorithm after rejecting interference [10], wideband DOA estimation with an adaptive array technique [11], and DOA estimation in cyclic prefix OFDM systems using the monopulse ratio [12]. The estimation of the reception angle based on the distribution characteristics of the power delay profile is proposed in [13]. The above-mentioned methods evaluate the correlation of all incoming signals; therefore, they also take into account the signals of UEs having a different preamble. These algorithms also have an upper limit on the number of source UEs that can be resolved, depending on the number of antennas used in the array antenna structure of the base station [9]. Therefore, using these algorithms at the receiver input makes it difficult to accurately determine the desired arrival angle of a large number of UEs. In this paper, a new method is proposed to overcome this problem, based on the phase differences of the peak values in the Power Delay Profile array, which is already calculated in the receiver stage of the OFDM layer used in LTE systems [1], [2].

A. Preamble and Zadoff Chu Sequence
The RACH (Random Access Channel) is the channel through which mobile users access the base station; its physical realization is the Physical Random Access Channel (PRACH). Access through this channel continues with mutual acknowledgment mechanisms, and the mobile device receives from the base station the configuration of the resource blocks (RB) that it needs. Random Access of the UEs to the base station is achieved with preamble signatures produced from Zadoff Chu sequences, which are orthogonal to each other and therefore allow the detection of individual UEs by the base station. The Zadoff Chu sequence of length N_ZC and root index u is defined as x_u(n) = exp(-jπun(n+1)/N_ZC), n = 0, 1, ..., N_ZC - 1. As shown in Fig. 2, the user terminal initially selects a Zadoff Chu sequence. Afterwards, an N_ZC-point Discrete Fourier Transform (DFT) is applied to the Zadoff Chu sequence. The array of frequency-domain samples at the DFT output is then placed in a 1.08 MHz range in the middle region of the Orthogonal Frequency Division Multiplexing (OFDM) band, on subcarriers spaced 1.25 kHz apart, as shown in Fig. 1 [2]. Then an Inverse Fast Fourier Transform (24576-point IFFT) is applied to these 1.25 kHz subcarriers (Fig. 2). The preamble signal, 0.8 ms long, is then converted from parallel to serial, the cyclic prefix (CP) and guard time are added, and the signal is sent to the antennas [2]. In the receiver, as shown in Fig. 3, the inverse of the transmission procedure is applied and the preamble is recovered.
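The preamble construction described above (Zadoff Chu sequence, DFT, subcarrier mapping, IFFT, cyclic prefix) can be sketched in a few lines, here in Python rather than the MATLAB used later in the paper. The Zadoff Chu definition and the 24576-point IFFT follow the text; the sequence length of 839 is the usual long LTE PRACH preamble length, and the mapping offset and CP length are placeholders, not the exact PRACH configuration values.

import numpy as np

def zadoff_chu(u, n_zc=839):
    # Standard Zadoff Chu definition; n_zc = 839 is the long LTE PRACH preamble length.
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

def build_preamble(root, n_ifft=24576, k_offset=6000, cp_len=3168):
    """Simplified PRACH-style preamble: DFT, subcarrier mapping, IFFT, cyclic prefix.

    k_offset and cp_len are placeholders; the real mapping position and CP length
    depend on the PRACH configuration and are not reproduced here.
    """
    x = zadoff_chu(root)
    freq = np.fft.fft(x)                        # DFT of the Zadoff Chu sequence
    grid = np.zeros(n_ifft, dtype=complex)      # empty OFDM frequency grid
    grid[k_offset:k_offset + len(freq)] = freq  # map onto the 1.25 kHz PRACH subcarriers
    time = np.fft.ifft(grid)                    # 24576-point IFFT -> time domain
    return np.concatenate([time[-cp_len:], time])   # prepend cyclic prefix

preamble = build_preamble(root=1)
print(len(preamble))   # number of preamble samples including the CP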
B. Power Delay Profile and DoA
The signal received from the UEs on the PRACH channel is pre-processed and preamble detection is performed according to the defined standards, as shown in Fig. 4. Let y(n) be the received sample sequence; the Power Delay Profile P_u(l) is then calculated from the cross-correlation

P_u(l) = Σ_n y(n) x_u*(n - l),   (2)

where x_u(n) is the reference Zadoff Chu sequence for root index u and x_u*(n) indicates its complex conjugate. As a result of this process, shown in the block diagram in Fig. 4b, peak values occur only when the received preamble was generated from the same root index as the reference Zadoff Chu sequence. The distance of each peak from the reference point gives the time of arrival (ToA) of the signal sent by the corresponding UE. For example, if there are two UEs, two peaks are formed in the array, one for each UE, as can be seen in Fig. 5. The signal from a UE sometimes reaches the receiver after being reflected from an obstacle; in this case, a relatively weak peak may appear with an additional delay, as shown in the figure. In the case of a multiple antenna structure, this process is applied to every antenna, and the resulting Power Delay Profile sequences can be used for direction of arrival detection. Obtaining the UEs' angles of arrival allows the base station to continue the RACH procedure separately for each mobile terminal by beamforming with the antenna array.

III. PHASE DIFFERENCE IN POWER DELAY PROFILE
The signal from one UE arrives at each antenna with a certain phase difference, depending both on the angle of arrival of the signal at the antenna array and on the spacing between the antennas. As can be seen in Fig. 6, the signals sent by a UE at angle θ arrive with time delays proportional to the antenna spacing d: Δt_m = m d sinθ / c, where c is the propagation (light) speed and Δt_m is the arrival delay at the m-th antenna. Assuming that u(t) is a narrowband baseband signal and f_c is the carrier frequency, the real transmitted signal is s(t) = Re{u(t) e^{-j2πf_c t}}. At the m-th antenna of the array the received baseband signal can be written as

u_m(t) = u(t - Δt_m) e^{-j2πf_c Δt_m}   (5)

and, substituting t = nT_s, where T_s is the sampling period,

u_m(nT_s) = u(nT_s - Δt_m) e^{-j2πf_c Δt_m}.   (6)

Assuming that T_s ≫ Δt_m, we get

u_m(n) ≈ u(n) e^{-j2πf_c Δt_m}.   (7)

With Δt_m = m d sinθ / c and λ = c / f_c, and selecting d = λ/2, we get

u_m(n) = u(n) e^{-j2π(md/λ) sinθ}   (8)

u_m(n) = u(n) e^{-jπ m sinθ}.   (9)

In the discrete domain, the signal received at the m-th antenna can therefore be written as

y_m(n) = u(n) e^{-jπ m sinθ}.   (10)

Substituting Equation (10) into Equation (2), the cross-correlation gives

P_{u,m}(l) = e^{-jπ m sinθ} P_{u,0}(l).   (11)

Hence, each P_{u,m}(l) series carries a phase term of π m sinθ for the m-th antenna. Considering the peak at delay l_p that indicates the signal from a UE, there is a constant phase difference between P_{u,m}(l_p) of the m-th antenna and P_{u,m+1}(l_p) of the (m+1)-th antenna. This phase difference can be calculated as

φ = ∠P_{u,m}(l_p) - ∠P_{u,m+1}(l_p) = π sinθ,   (12)

and the direction of arrival is

θ = sin^{-1}(φ/π),   (13)

assuming that the phase difference between adjacent antennas is less than 180°.
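A compact numerical illustration of Equations (2) and (10)-(13) is given below: a Zadoff Chu preamble is received by a uniform linear array with half-wavelength spacing, the complex correlation peak is located on each antenna, and the angle of arrival is recovered from the phase difference between adjacent antennas. The sequence length, delay and angle are arbitrary test values, not parameters from the paper.

import numpy as np

# Zadoff Chu reference sequence (root u, length n_zc).
def zadoff_chu(u, n_zc):
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

n_zc, u, theta_deg, delay = 139, 1, 25.0, 17      # test values (assumed)
m_antennas = 10
x = zadoff_chu(u, n_zc)

# Received signal on each antenna: delayed preamble with phase e^{-j*pi*m*sin(theta)}.
theta = np.deg2rad(theta_deg)
y = np.array([np.roll(x, delay) * np.exp(-1j * np.pi * m * np.sin(theta))
              for m in range(m_antennas)])

# Equation (2): circular cross-correlation with the reference sequence.
pdp = np.array([np.fft.ifft(np.fft.fft(ym) * np.conj(np.fft.fft(x))) for ym in y])

peak = np.argmax(np.abs(pdp[0]))                  # ToA estimate (should equal `delay`)
phases = np.angle(pdp[np.arange(1, m_antennas), peak]
                  * np.conj(pdp[np.arange(m_antennas - 1), peak]))
phi = -np.mean(phases)                            # average adjacent-antenna phase difference
print("ToA sample:", peak, " DoA estimate (deg):", np.rad2deg(np.arcsin(phi / np.pi)))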
IV. SIMULATION RESULTS
The simulation was performed in MATLAB using two UEs positioned at distances of 357 m and 2145 m from the base station, at angles of 30° and -10° to the linear antenna array, respectively. The antenna array consists of M = 10 antennas spaced at a distance d = λ/2. As shown in Fig. 7, when the broadcast frequency is selected as 3 GHz, this spacing should be about 5 cm. The preamble index of the two UEs in the RACH channel was chosen as one. As a result, when the same Zadoff Chu sequence is used by the two UEs for random access, two peaks are formed in the Power Delay Profile series, as shown in Fig. 8. To find the direction of arrival, the phase shifts of the complex values at the peaks are used. These phase shifts are shown in Fig. 9 for the fifth and sixth antennas of the antenna array in the base station. As mentioned above, θ is the direction of arrival to be found, and the phase shift between adjacent antennas is π sinθ, from which θ follows by Equation (13). Since the phase difference between each pair of consecutive adjacent antennas is approximately the same, averaging these values over all adjacent antenna pairs increases the accuracy of the estimate. As can be seen from Fig. 11, the positions calculated in the first simulation for the first and second UE are not affected by the third UE, and the location of UE 3 is also calculated correctly.

V. CONCLUSION
It is expected that the number of users in 5G networks will be much higher than the number of preambles that can ensure that UEs are successfully registered to the base station. In this paper, a solution is proposed that allows the simultaneous use of the same preambles to increase the registration performance. For this purpose, the angle of arrival of the signal sent by each UE is calculated using the power delay profile, enabling beamforming with the help of the antenna array. The phase difference of the incoming signals at neighbouring antennas is calculated using the peak values that are already computed in the OFDMA layer with the help of Zadoff Chu sequences in 4G networks. The proposed method was verified in MATLAB with an antenna array consisting of 10 antennas. Simulations were made for two different configurations: the first with two UEs, and the second with a third UE added to the first configuration. All UEs use the same Zadoff Chu sequence in the preamble and are located at different distances and angles. The simulation results show that the proposed method is a candidate for solving the simultaneous registration problem of multiple UEs in 5G systems.

BIOGRAPHIES
OMER AYDIN received his B.Sc. and M.Sc. in Electronics Engineering from Istanbul Technical University in 1982 and 1985, respectively. He completed his PhD on 4G and 5G radio frequency power amplifiers at Istanbul Technical University in 2016. He is now working in Netas R&D. He has more than 20 scientific papers on 5G communication systems. His research interests include 5G communication systems and theoretical and practical aspects of radio frequency power amplifier design.
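To mirror the two-UE configuration of Section IV, the sketch below (in Python rather than the paper's MATLAB) superimposes two copies of the same preamble with different delays and arrival angles on a 10-antenna half-wavelength array and recovers both angles from the correlation peaks. Only the angles (30° and -10°) follow the text; the sequence length and delays are arbitrary test values.

import numpy as np

def zadoff_chu(u, n_zc):
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)

# Two UEs sharing the same preamble (root index 1) but differing in delay and angle.
n_zc, m_ant = 139, 10
ues = [{"delay": 9, "theta": np.deg2rad(30.0)},    # illustrative stand-in for UE 1
       {"delay": 54, "theta": np.deg2rad(-10.0)}]  # illustrative stand-in for UE 2
x = zadoff_chu(1, n_zc)

y = np.zeros((m_ant, n_zc), dtype=complex)
for ue in ues:
    steering = np.exp(-1j * np.pi * np.arange(m_ant) * np.sin(ue["theta"]))
    y += steering[:, None] * np.roll(x, ue["delay"])[None, :]

# Per-antenna correlation with the reference sequence (Equation (2)).
pdp = np.fft.ifft(np.fft.fft(y, axis=1) * np.conj(np.fft.fft(x))[None, :], axis=1)

# Each UE produces its own peak; read the DoA from adjacent-antenna phase differences.
for peak in sorted(np.argsort(np.abs(pdp[0]))[-2:]):
    diffs = np.angle(pdp[1:, peak] * np.conj(pdp[:-1, peak]))
    theta_est = np.rad2deg(np.arcsin(-np.mean(diffs) / np.pi))
    print(f"peak at sample {peak}: estimated DoA = {theta_est:.1f} deg")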
Urgent action to combat climate change and its impacts (SDG 13)
Actions on climate change (SDG 13), including in the food system, are crucial. SDG 13 needs to align with the Paris Agreement, given that UNFCCC negotiations set the framework for climate change actions. Food system actions can have synergies and trade-offs, as illustrated by the case for nitrogen fertiliser. SDG 13 actions that reduce emissions can have positive impacts on other SDGs (e.g. 3, 6, 12, 14, 15); but such actions should not undermine the adaptation goals of SDG 13 and SDGs 1, 2, 5 and 10. Balancing trade-offs is thus crucial, with SDG 12 central: responsible consumption and production. Transformative actions in food systems are needed to achieve SDG 13 (and other SDGs), involving technical, policy, capacity enhancement and finance elements. But transformative actions come with risks, for farmers, investors, development agencies and politicians. Likely short and long term impacts need to be understood.

Introduction
Climate change is regarded by many as a defining challenge of our times [1] and thus it is not surprising that one of the SDGs (13) concerns 'urgent action to combat climate change and its impacts'. Meta-analysis of climate change impacts shows 70% of studies with declines in crop yields by the 2030s, with half the studies having 10-50% declines [2]. Climate extremes may exceed critical thresholds for agriculture; effective mechanisms to reduce production risk will be needed [3]. Climate change is already affecting food systems, and agriculture is one of the sectors expected to be most impacted by climate change [4]. Impacts on food systems are expected to be widespread, complex, and geographically and temporally variable [5]. Globally, agriculture and related land use change contribute nearly a quarter of annual GHG emissions, roughly 10-12 Gt CO2e yr⁻¹ [6]. Considerable emissions reduction will be needed in food systems if the global warming target is not to be exceeded [7]. Thus achieving SDG 13 will require many actions for adaptation and mitigation in food systems. A major challenge is that food systems are linked to many SDGs and there are likely to be trade-offs amongst SDGs through food system actions [8,9], with trade-offs particularly challenging in developing countries where climate change vulnerability will be highest. This paper examines SDG 13 and how it links to food system actions, with particular attention to agriculture in developing countries. It argues that SDG 13 needs to be closely aligned with the Paris Agreement and other UNFCCC agreements. Particular attention needs to be paid to the trade-offs and synergies amongst SDGs, as shown in a case study of nitrogen fertiliser. A transformative approach is essential in food systems if the climate change challenge is to be addressed, while also addressing other SDGs. Transformation will have many elements: technical, policy, capacity enhancement and finance; and both the likely short and long term impacts of transformative actions need to be understood if negative impacts on particular stakeholder groups are to be avoided.
SDG 13 -strengths and limitations; and links to food systems SDG 13 considers both adaptation and mitigation, and includes foci on: strengthening resilience; integrating climate change measures into national policies and planning; monitoring progress towards climate financial commitments; and, improving capacity on climate change, especially in Least Developed Countries (LDCs) and small island developing States (SIDS), and amongst women, youth and marginalized communities (Table 1, first column). SDG 13 largely covers processes towards outcomes (see indicators in Table 1, second column) rather than outcomes themselves, and lacks a mitigation target. Many SDGs -unlike SDG 13 -do include indicators that capture what needs to be ultimately achieved by those SDGs. For example: SDG 1 (no poverty): Proportion of population below the international poverty line. SDG 2 (zero hunger): Prevalence of moderate or severe food insecurity in the population. SDG 12 (responsible consumption and production): Global food loss index. SDG 14 (life below water): Average marine acidity (pH) measured at agreed suite of representative sampling stations. The main negotiating forum for climate change is the UNFCCC, and the SDGs were agreed prior to the UNFCCC Paris Agreement, so it is not surprising that the Paris Agreement is more comprehensive than SDG 13. The Paris Agreement specifies the 2 C goal, communication of nationally determined contributions (NDCs), need for transparency in reporting, agreements on mobilizing climate finance, adaptation goals, and avoiding and compensating for loss and damage. SDG 13 therefore needs to be closely aligned with UNFCCC agreements. From the SDG 13 indicators, we can derive some of the actions and monitoring needed by food system actors to combat climate change (Table 1, third column) but this is a limited set. More detail can be gained by examining country NDCs, but even here ambition levels may be insufficient to address climate change [10], and few reflect the transformative actions needed (see below). Trade-offs among SDGs A goal of the SDGs and 2030 Agenda is to increase policy coherence and reduce trade-offs among sectoral policies [11,12]. To implement the SDGs in an integrated way, SDG 13 policy and action should be guided by their interactions with other SDGs and the institutions implementing them. Actions on SDG 13 have interactions with many SDGs, as discussed in this section and with a specific case study on nitrogen fertiliser in the next section. Climate acts as a dynamic driver of the sustainability of food systems and the conditions affecting it: water, land, oceans, and hazards [5 ,13 ]. The impacts of climate on food systems in turn affect poverty, health, economics, infrastructure, equity and gender relations [5 ]. Climate change is also driven by food systems, energy, and unsustainable consumption and production, creating feedback effects. From a development perspective, achieving adaptation and mitigation in food systems will require success in other SDGs as enabling conditions of SDG 13, such as sustainable production and consumption (12), food security (2), poverty reduction (1), education (4), gender equity (5), water (6), life on land (15) and energy (7). Geographic, technical and governance contexts affect the specific nature of the interactions [11]. Major synergies occur between adaptation in SDG 13 and food security, poverty, and equity ( Figure 1, right side). 
Synergies can also be expected to increase between mitigation in SDG 13 and efficiencies in energy, water and nutrient inputs in agriculture (Figure 1, left side). Reducing loss in the food supply chain to support sustainable production and consumption could reduce emissions by between 15 and 30% [14]. A major trade-off is potentially the goal of forest conservation under SDG 15, which should limit agricultural expansion. The major sources of remaining arable land [15]. Yet livestock are fundamental to the adaptive capacity of tens of millions of smallholder farming households, through meat and milk production, manure for crop production, transport and traction. Although potential interactions can be anticipated, to mobilize change and achieve ambitious targets in SDG 13 for food systems, better information about these interactions and the actual impacts of climate action and responses to climate will be necessary [16,17]. Spatial and temporal monitoring of targets and their interactions will be needed [18]. Country priorities will vary, with developing countries focusing on production, food security and adaptation, and developed countries focusing more on the environmental impacts of food systems and mitigation.

Case study: nitrogen fertiliser and the SDGs
A specific case demonstrates some of the interactions amongst SDGs. Global N fertiliser consumption has increased by almost 100 Tg N yr⁻¹ between 1961 and 2013 [19]. Further increases in crop production require that fertiliser is managed sustainably to avoid negative trade-offs that could undermine the multiple SDGs that N impacts (Figure 2). The most obvious trade-off is the need to increase N to meet SDG 2 whilst reducing N to support SDGs 6, 13, 14 and 15. The key is judicious N consumption, and thus SDG 12 is central: responsible consumption and production.

Too little N
Wide variation exists in fertiliser use. For example, Sub-Saharan Africa accounts for less than 2% of world fertiliser N consumption (mean rate, excluding South Africa: 7 kg N ha⁻¹), while China consumes ca. 30% of world consumption (565 kg N ha⁻¹). In some regions of Latin America and Asia, and across most of Sub-Saharan Africa, too little fertiliser N use results in soil nutrient mining and low yields. Improved access to fertiliser N will be critical to ending poverty (SDG 1) and hunger (SDG 2) and improving health (SDG 3).

Too much N
The opposite of this is that too much N fertiliser results in significant N losses, contributing to groundwater contamination, eutrophication of freshwater and estuarine ecosystems, atmospheric pollution, and soil acidification and degradation. Nitrogen run-off and leaching are responsible for toxic aquatic algal blooms, fish death and loss of biodiversity, which undermine the realisation of SDGs 6, 14 and 15. Fertiliser N is also responsible for more than 30% of agriculture-related N2O emissions, with agriculture being the major source (ca. 60%) of global N2O emissions. Approximately 70% of fertiliser-related N2O emissions derive from countries with emerging economies such as China and India, where fertiliser consumption rates have grown rapidly due to fertiliser N subsidies whilst crop yield responses to N have stagnated [20,21].
By contrast, effectively targeted policies have resulted in a decline or reversal of growth in fertiliser N use in Western Europe and Australia whilst crop yields have continued to improve [22]. Well-targeted policies in the Netherlands have reduced fertiliser use to the same level as in the 1960s whilst yields have doubled [21].

Optimal N
Precision N management offers a means of achieving the SDGs through better N management on both large and small farms. For example, a range of precision N tools and techniques can support best fertiliser management on smallholder farms, such as chlorophyll meters, the leaf colour chart or optical sensors (e.g. GreenSeeker) for guiding in-season N management. Similarly, decision support software (e.g. Nutrient Expert, Crop Manager) is being used to refine N management practices, and such tools have become increasingly important in geographies where blanket fertilizer recommendations have been the norm. As broadcast N application is a major source of nutrient loss, drilling of fertiliser N or fertigation using drip irrigation can precisely place N near the root zone, thereby reducing losses. In the Indo-Gangetic plains of India, both Nutrient Expert and GreenSeeker-based nutrient management have increased the partial factor productivity of nitrogen in wheat compared with state-recommended and farmers' fertilizer practice. Through on-farm comparison in over 4000 farmers' fields across the Indo-Gangetic plains of India, CIMMYT found that 'Nutrient Expert'-based management reduced the GHG intensity of rice, wheat and maize production by 5-35% (average 13%).

Transforming food systems to tackle food security under climate change
What will it take to increase agricultural productivity (especially in sub-Saharan Africa), enhance food security, get rural communities out of poverty, build resilience to climate change and other stresses, reduce agricultural emissions and other agricultural environmental impacts, and improve diets and health outcomes? What will it take to balance the trade-offs amongst SDGs, as demonstrated by the N case study? The challenges are immense and call for nothing short of a transformation in food systems, with highly specific actions depending on context. Food systems are indeed transforming in many places [23], but many scholars argue that transformation will have to be much greater in the coming years, from the perspective of food security [24], climate change [25] and environmental sustainability [13]. We propose a theory of change embracing eight closely linked elements (Figure 3).

Element #1: expanded private sector activity and public-private partnerships
The current levels of development and climate finance will be insufficient to tackle the challenges ahead and thus private sector investment needs to be stimulated, including, for example, through climate finance that de-risks private finance [26,27]. However, there is seldom perfect alignment between private and public interests. With continuing urbanization in many developing countries, wealthier populations and changing consumer demands, the food sector is going to become more dynamic, with the private sector - both small and large enterprises - likely to rise to the challenge of the changing demands.
Element #2: credit and insurance Efforts to increase availability and access to credit and insurance need to be greatly scaled up, as credit and risk are factors holding back investment by smallholders in climate-resilient technologies and practices [28]. Insurance, and in particular index-based insurance with its lower transaction costs and rapid pay-outs, can be key to unlocking credit, as well as providing the usual protective functions. Many climate-smart investments require up-front investments (e.g. establishing trees in agroforestry systems) -innovative finance and credit can offset such up-front investments. Element #3: strong local organisations and networking Local institutions and networks are important in fostering climate action [29,30 ]. Farmers' groups, producer groups, water use associations, women's groups and other such groups need a strong voice to demand the needed services from service providers, and to negotiate with often powerful private sector players. Element #4: climate-informed advisories and early warning Knowledge is key to building adaptive capacity and helping farmers, their service providers and value chain actors deal with climate variability [31]. Farmers in most developing countries are faced by poor extension, with too few extensionists at farm level, and messages often being top-down generic messages not relevant in many contexts. Farmer advisories can be linked to climate forecasts, to help them select varieties, and plan for planting, field management operations and harvesting [32,33 ]. Appropriate climate-informed advisories can stimulate production, reduce input costs, reduce postharvest losses and reduce emissions (e.g. through better timing of fertilizer applications). There needs to be a continuum between 'normal' variability-related advisories on the one hand and early warning and emergency response for extreme events on the other [34]. Close collaboration and coordination between national meteorological services, national extension services and emergency response agencies, can increase production, build resilience and enhance social protection. Key will be functioning extension advisory services and national meteorological services accountable for the products they deliver. Element #5: digital agriculture Big data and ICT is transforming society [35] and is likely to revolutionize extension, as data from millions of farmers is combined with data from other sources (e.g. remote sensing, crop models, sensors) to better tailor information and services. ICT can also promote two-way extension, with farmers getting answers for specific questions they ask, giving feedback to extension messages so that extension can be further tailored and improved, and contributing to early warning systems (e.g. by providing information on pest outbreaks). Facilitating access to smart phones and improving connectivity to internet could be a crucial to drive food system transformations in developing countries. Element #6: climate-resilient and low-emission practices and technologies Agricultural practices and technologies, including for post-harvest operations, will be a key part of the transformational agenda. There are numerous practices and technologies that will assist in adaptation, with many also having emission-reducing potential [36]. 
These include, for example, agroforestry, that diversifies livelihoods and landscapes and builds carbon stocks; aquaculture, that meets the rising demand for animal protein and has the ability to diversify farmer incomes, and enhance resilience and nutrition; improved feed in dairy, which enhances animal resilience and health, diversifies livelihoods and reduces emission intensities; and responsible and sustainable fertiliser N management (as described in the case study). Many appropriate practices and technologies already exist, and the challenge is getting them widely used -the other seven elements of this transformation theory of change are intended to address the scaling challenge. Effective research and innovation systems are also needed -to continuously improve practices and technologies. Element #7: prioritisation and pathways of change Given the strong differentiation already in rural areas, and the asset differences amongst, for example, men and women, old and young, and peri-urban and distant farmers, a transformational agenda will have different effects on different kinds of stakeholders, thus the need to recognise different pathways for change [29]. For example, some farmers will be unable to respond to marketled development. Therefore well-designed social protection programs, involving cash and in-kind transfers to very poor and vulnerable households, can protect and rebuild productive assets and hence protect livelihood opportunities in the face of extreme climate events [37]. Adaptive social protection innovations, such as integration with credit, production inputs, agricultural extension and risk finance, increase the responsiveness of such programs to climate shocks. Choices of practices and technologies, types of credit and insurance, means of extension, and so on, should all be driven by careful prioritisation approaches [38], given the social and environmental variation in rural areas, and differing national contexts. Element #8: capacity, and enabling policy and institutions Each of the above elements of a transformational agenda is ultimately dependent on an enabling policy and institutional environment, including capacity enhancement of key actors, to provide the conditions and incentives to help businesses expand and invest, incentivize the uptake of insurance and credit, expand markets and availability of inputs, foster strong farmer and other local groups, greatly expand extension, connectivity and availability of mobile devices, create incentives for technological advances, help reduce food loss and waste, and contribute to shaping consumption patterns and improved diets. While many of the policy and institutional advances will be at national levels, supra-national policies and institutions are also important (e.g. related to trade, development, climate change) [39]. Policy actions also need to tackle undesirable trade-offs amongst SDG goals. These include environmental trade-offs, for example improved profitability of agricultural systems can drive deforestation and thus the need for forest governance policies to complement market policies in agriculture [40]. Transformative actions come with risks, for farmers, investors, development agencies and politicians. Likely short and long term impacts therefore need to be understood, for example, through visioning and ex-ante analysis [41], and short-term negative impacts that may cause resistance to beneficial longer-term outcomes need to be dealt with. 
Conclusions Transformative actions in the food system to achieve SDG 13 and UNFCCC agreements are crucial, but actions need to be carefully considered given the possibility of trade-offs between adaptation and mitigation, and amongst other SDGs. SDG 12 is considered to be central: responsible consumption and production [39]. Transformative actions will have many elements, including: (1) Expanded private sector activity and public-private partnerships; (2) Credit and insurance; (3) Strong local organisations and networking; (4) Climate-informed advisories and early warning; (5) Digital agriculture; (6) Climate-resilient and low-emission practices and technologies; (7) Prioritisation and pathways of change; (8) Capacity, and enabling policy and institutions.
The Efficacy of Vildagliptin Concomitant With Insulin Therapy in Type 2 Diabetic Subjects Background In Japan, dipeptidyl peptidase 4 (DPP4) inhibitors have become standard therapeutic agents for type 2 diabetes, and numbers of patients receiving insulin therapy combined with DPP4 inhibitors, which is a highly effective regimen, are increasing. Methods In this study, we evaluated the efficacy of vildagliptin administered at the dose of 100 mg twice daily in 57 patients with type 2 diabetes already receiving insulin treatment. Results The 36 patients who simply received add-on vildagliptin showed a 0.6% decrease in HbA1c levels, despite a marked insulin dose reduction, mainly bolus insulin, of approximately 8.3 units. In addition, body mass index exhibited a significant negative correlation with the efficacy of vildagliptin, i.e., ΔHbA1c. On the other hand, the 21 patients switched from 50 mg of sitagliptin to vildagliptin showed HbA1c decreases approaching 0.7%. Conclusion Taking into consideration that twice-daily oral vildagliptin has already been reported to be advantageous in reducing postprandial hyperglycemia, this drug was suggested to be more effective in reducing HbA1c than sitagliptin under conditions in which it is used as a supplement to basal insulin, as in this study. Introduction Dipeptidyl peptidase 4 (DPP4) inhibitors increase the activity of glucagon-like peptide-1 (GLP-1) and gastric inhibitory polypeptide (GIP), which are incretin hormones secreted by gastrointestinal mucosal cells in response to carbohydrate intake, thereby promoting glucose-responsive insulin secretion [1]. Accordingly, because these drugs reduce both fasting and postprandial blood glucose levels with no risk of hypoglycemia, they clearly have higher efficacy than sulfonylurea agents, which are conventional insulin secretagogues [2]. Beyond glycemic control, DPP4 inhibitors were reported to have pleiotropic effects such as body weight loss [3] and improvement of lipid profiles [4]. It is estimated that DPP4 inhibitors have been used in as many as 3 million patients with type 2 diabetes since sitagliptin was first launched on the Japanese market in 2010. These drugs seem to be very highly effective in Japanese diabetic patients, who mainly have impaired insulin secretion. At present, seven DPP4 inhibitors, i.e. sitagliptin, alogliptin, vildagliptin, linagliptin, teneligliptin, anagliptin and saxagliptin, are available in Japan [5][6][7][8][9][10][11][12]. Among these, vildagliptin is considered to have a superior postprandial hypoglycemic effect, because it covalently binds to DPP4 [13], thereby markedly increasing postprandial GLP-1 activity, although it has to be administered orally twice daily due to its relatively short half-life [14]. Combined DPP4 inhibitor and insulin therapy was introduced in Japan in 2013, and numerous patients with type 2 diabetes receiving insulin began to additionally receive DPP4 inhibitors. However, to date, few studies have examined the efficacy of vildagliptin administered in combination with insulin therapy in Japanese patients. Therefore, in this study, we determined the efficacy of vildagliptin administered in combination with insulin. We also conducted a test in which sitagliptin, which is currently the most widely used DPP4 inhibitor in Japan, was switched to vildagliptin to compare the glucoselowering effects of these two agents. Figure 1. 
In study 1, 36 patients not receiving an oral DPP4 inhibitor were given 100 mg of add-on vildagliptin orally twice daily throughout the study period of 24 weeks. The insulin dose was reduced as appropriate at the start of the study and also changed as appropriate according to glycemic control status during the observation period at the discretion of the attending physician. In study 2, 21 patients receiving 50 mg of oral sitagliptin once daily were switched to 100 mg of vildagliptin orally twice daily and then observed for 24 weeks. The insulin dose was not changed at the time of switching DPP4 inhibitor treatment, but was changed as appropriate according to glycemic control status during the observation period. When we calculated the mean bolus insulin dose, we evaluated the bolus insulin dosage of patients treated with only basal insulin, as 0 units. In both study 1 and study 2, other oral hypoglycemic agents were left unchanged during the study period, and patients with severe hypoglycemia and those in whom HbA1c was maintained at 9.0% for 3 months or longer were withdrawn from the study. All patients gave written consent before participating in this study. Outcome measures The serum HbA1c levels, 2-h postprandial plasma glucose levels after breakfast, plasma C-peptide levels, body weight, and presence/absence of adverse events such as hypoglycemia were routinely monitored during the study period. These parameters were measured employing routine laboratory techniques. Exclusion criteria at enrollment included liver dysfunction (aspartate aminotransferase (AST)/alanine aminotransferase (ALT) > 50 IU/L), renal dysfunction (creatinine > 1.5 mg/dL) and severe obesity (body mass index (BMI) > 35 kg/m 2 ). Statistical analysis Data are shown as means ± standard error (SE). P < 0.05 by paired t test was considered to denote a statistically significant difference. STATA SE 11 was used for all statistical analyses. Results The clinical characteristics of the 57 enrolled patients at the start of the study are shown, separately for studies 1 and 2, in Table 1. The patients were being treated with insulin, and the duration of diabetes was at least 10 years. In addition, there was a tendency for obesity, with the mean BMI exceeding 25. Therefore, the mean total insulin dose was ≥ 30 units daily, which is considered to be high for Japanese patients. The study involved poorly controlled patients with a mean HbA1c level ≥ 8%. The results of study 1 are shown in Figure 2. In the vildagliptin add-on therapy group, the mean HbA1c level decreased from 8.1% to 7.5% (Fig. 2a) and, notably, the mean daily insulin dose could be reduced significantly, by 8.3 units (Fig. 2d). The mean dose of rapid acting insulin, i.e. bolus insulin, was reduced by 7.8 units (from 26.4 to 18.6 units), while that of basal insulin was reduced by 0.5 units (from 8.8 to 8.3 units). Thus, bolus insulin injected before meals accounted for most of the insulin dose reduction. The 2-h postprandial plasma glucose levels also decreased significantly (Fig. 2b). There were no significant changes in body weight (Fig. 2c) or 2-h postpran- Figure 3. After switching from sitagliptin to vildagliptin, the mean HbA1c level improved by 0.7%, down from 9.0%, despite almost no change in the insulin dose (Fig. 3a, d). The 2-h postprandial plasma glucose levels did not change (Fig. 3b), suggesting that mainly improvement and stabilization of fasting blood glucose levels had contributed to the significant improvement in HbA1c levels. 
Mean body weight increased significantly by 1.2 kg (Fig. 3c), but there was no significant change in 2-h plasma postprandial C-peptide levels. Figure 4 presents the relationships between the efficacy of vildagliptin, i.e., ΔHbA1c, and BMI and baseline HbA1c. BMI exhibited a significant negative correlation with ΔHbA1c in study 1, while no relationships were observed in study 2. Baseline HbA1c was strongly associated with ΔHbA1c in both studies. None of the patients were withdrawn from either study as there were no marked deteriorations of blood glucose levels. In study 1, two patients developed mild hypoglycemia, but there were no episodes of severe hypoglycemia. In addition, no safety problems were seen in any of the 57 patients. Discussion DPP4 inhibitors are therapeutic agents that, in combination with insulin treatment, can be expected to further improve blood glucose control [15]. In particular, add-on DPP4 inhibitors in patients receiving treatment with basal insulin alone, i.e., basal supported oral therapy (BOT), are reportedly more effective than premixed insulin preparations or intensive insulin therapy [15]. These data suggest that the postprandial hypoglycemic effect of DPP4 inhibitors and the fasting hypoglycemic effect of basal insulin make this the most appropriate combination. On the other hand, if DPP4 inhibitors are combined with premixed insulin preparations or bolus insulin, it is necessary to adjust the insulin dose, particularly that of bolus insulin, to avoid hypoglycemia. In the present study as well, the bolus insulin dose was reduced by 7.8 units, while the basal insulin dose was only slightly reduced (by 0.5 units). Add-on vildagliptin allowed some patients to finally be switched from basal-bolus insulin therapy to BOT. These results demonstrate that the addition of DPP4 inhibitors to insulin therapy allows substantial reductions in bolus insulin doses. Considering the results of previous meta-analyses and other studies, vildagliptin is likely to have the strongest hypoglycemic effect among the many DPP4 inhibitors currently available [16]. As vildagliptin covalently binds to DPP4 [13] and thereby markedly increases postprandialGLP-1 activity, it is considered to have a particularly excellent postprandial hypoglycemic effect [14]. In addition, it has been reported that, as a secondary effect of markedly reducing postprandial blood glucose levels, vildagliptin has a more pronounced effect in lowering blood oxidative stress and inflammatory markers than other DPP4 inhibitors [17]. As the reductions in these substances are likely to improve insulin resistance, this specific effect of vildagliptin depends on the degree of obesity (as reflected by high BMI). In fact, vildagliptin was previously reported to be more effective in diabetic patients with a tendency towards obesity [18]. In the present study, BMI was also negatively associated with ΔHbA1c, an observation consistent with this hypothesis. This is in marked contrast to the data on sitagliptin, the effect of which is reported to be attenuated at BMI values exceeding 25 [19]. Therefore, the insulin-resistance-improving effect of vildagliptin apparently enhances the fasting hypoglycemic effect [16] and, furthermore, produces greater reductions in HbA1c levels than other DPP4 inhibitors [20]. 
Although the usual dose of sitagliptin is 100 mg daily in countries other than Japan, 50 mg daily is adopted as the routine dose in Japan, because a phase III clinical trial in Japanese subjects showed no difference in hypoglycemic effects between 50 and 100 mg of sitagliptin [21,22]. In the present study, the treatment switch from 50 mg of sitagliptin to vildagliptin resulted in a marked improvement in HbA1c levels. This result suggests that insulin therapy plus vildagliptin, with its marked postprandial hypoglycemic effect, is a very beneficial combination, although the patient group with a relatively high BMI appeared to benefit more than leaner subjects in the present study. In conclusion, we were able to reduce the bolus insulin dose and further improve postprandial blood glucose and HbA1c levels by adding vildagliptin to the treatment regimens of patients with type 2 diabetes receiving insulin. Furthermore, we also demonstrated the usefulness of switching from sitagliptin to vildagliptin. The results of this study are considered to be clinically significant in that they present a new effective means of achieving glycemic control in type 2 diabetic patients with insufficient glycemic control while treated with insulin alone.
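The before/after comparisons reported above are paired tests on means ± SE, as described in the statistical analysis section. The snippet below sketches such an analysis on made-up HbA1c values; the numbers and array names are hypothetical and are not the study data or the authors' STATA code.

```python
import numpy as np
from scipy import stats

# Hypothetical HbA1c (%) for the same patients before and after add-on
# therapy; these values are illustrative only, not the study data.
hba1c_before = np.array([8.4, 7.9, 8.6, 8.1, 7.8, 8.3, 8.0, 8.5])
hba1c_after  = np.array([7.8, 7.5, 8.0, 7.6, 7.4, 7.9, 7.5, 7.9])

delta = hba1c_after - hba1c_before
mean, se = delta.mean(), delta.std(ddof=1) / np.sqrt(len(delta))
t_stat, p_value = stats.ttest_rel(hba1c_after, hba1c_before)

print(f"mean change = {mean:.2f} +/- {se:.2f} (SE), t = {t_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would be reported as a statistically significant change.
```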
Music: A holistic approach to symptom management for patients undergoing cancer treatment Background and objective: Patients undergoing chemotherapy carry a tremendous amount of psychological and physical burdens. The focus of care in the oncology setting should include a blend of pharmacological and holistic medicine to achieve desired health goals. This research includes a comprehensive review of music therapy as a vehicle to promote psychotherapeutic effects on patients receiving chemotherapy. This review serves to identify the positive outcomes of patients engaging in music therapy while receiving chemotherapy. Methods: An integrative literature review was conducted. Articles published between 2007 and 2020 were identified searching the databases of CINAHL, PubMed, and ProQuest. Key words included music therapy, psychological, chemotherapy, depression, anxiety and cancer. Results: Eight articles were reviewed that identified three emotional effects in relation to receiving chemotherapy: anxiety, depression and quality of life. The literature focused on how music therapy was used to improve these emotional states. The Spielberger State Anxiety Inventory- State Anxiety Scale and other standardized units were used to evaluate the positive effects of music therapy. These tools, which help standardize assessments, found noticeable psychosocial improvements. Conclusions: Further studies are needed to conduct a more comprehensive use of music therapy in the healthcare setting. Inclusive literature should also be available as an alternative method of treatment to provide additional outlets for improvement of health outcomes. The information studied has already found promising results of music therapy on patients’ psychological wellbeing while receiving chemotherapy. INTRODUCTION Music can energize one's spirit, move one to tears, help one recall a simpler time or a past love, and music can heal one's soul. At one time or another, everyone has felt the mindaltering power of music. It is a tool that is often relegated to nonmedicinal purposes; however, the therapeutic properties of music have healed us all in various circumstances that we probably never even realized. The opposite can also be true-many people seek music as a remedy to lift a psycho-logical burden and as a balm for a wounded soul. Music therapy, as a nursing intervention, embodies a holistic approach to nursing care. Music can take an individual to a place of healing, confidence, and peace. The science of music therapy includes individual and evidence-based music interventions to achieve goals set by the patient; this process uses both active and passive musical exercises. [1] In the healthcare field, a therapeutic approach to treatment is a cornerstone of the healing process. And this therapeu-tic approach comes in many different facets, such as music therapy. Background & significance According to the American Cancer Society, [2] by the end of 2021, there will be a projected 606,520 deaths from cancer and 1,806,950 new cases of cancer in the United States. Cancer is a devastating disease that affects the patient (and the family) in insurmountable ways. Galanis, Kalolairinou, Konstantakapoulou & Spilioti [3] state in their research that the physical, emotional, psychological, and financial burdens of cancer have a dramatic effect on the quality of life for the patient and the families involved. 
Historical perspectives Music therapy is not a new idea; music was used in Greek Mythology in many ways to illustrate an emotional effect on the listener. Orpheus used his music to convey his anguish and assuage his emotional pain, mourning his dead wife. [4] Boethius influenced the world with music by penning these three types: instrumental, human and cosmic. He echoed Plato when he said that music has the power to affect an individual's mood and behavior. [5] Shakespeare believed that man was comparable to a musical instrument that can be "out of tune" as well as "finely tuned". This idea communicates that harmonic proportions of music also infiltrate the physical body. [6] Gilliland [7] reports in her original journal entry "The Healing Power of Music" that music therapy made huge advancements during the World Wars when it was prescribed for war neurosis, temporary insanity, other traumas of war and easing the mental discomfort of paralyzed muscles; music was also used to treat insomnia instead of prescribing medication. Current research Music therapy is currently used in a variety of modalities. These include singing, creating or composing music, listening to music or performing. The benefits of all modalities have been shared in current research. Quinn, Bodkin-Allen & Swain [8] identified overall positive benefits of music therapy, using group singing, for both patients and a control group. Gallagher et al. [9] identified the benefits of patient and supportive family and friends singing. A computerized database was used in the evaluation. Wilcoxon signed rank test was used along with a paired t test, before and after to see whether music therapy improved anxiety, body movement, facial expression, mood, pain, shortness of breath, and verbalizations. Sessions with family members identified that music therapy improved families' facial expressions, mood, and verbalizations. All benefits of the therapy were statistically significant (p < .001). Most patients and families reported subjective and objective positive and beneficial responses to music therapy. A futuristic approach is described by Brungardt A, Wibben A, Tompkins AF, Shanbhag P, Coats H, LaGasse AB, Boeldt D, Youngwerth J, Kutner JS, Lum HD. [10] They used virtual reality-based music therapy intervention with patient in palliative care to interpret the patient's compositions. Seventeen patients from ages 23-74 created their music sound track in a two day intervention then developed a 360 degree virtual reality experience that interpreted their music. The majority of participants reported positive feedback in addition to both emotional and physical responses to the virtual reality-based music therapy intervention. As illustrated in the background review of literature, music therapy is an excellent low cost intervention that should be considered when planning nursing care. Patients and families should be actively involved in the choice and modality. Chia, Hsiu, Kuei, Mei, Ying, and Yuan [11] conducted a metaanalysis on music interventions with cancer patients. The research yielded positive results as the patients involved reported improved quality of life, improved adjustments to pressures, pain relief, better able to express feelings, enhanced memory, improved communication, facilitated physiological rehabilitation, and achieved harmonic state of body, mind, and spirit. Their study used the Cochrane Collaboration Guidelines to ensure adequacy and quality of the studies conducted. 
Only studies with a 6 or more out of 10 were included in the meta-analysis. The findings confirmed that cancer patients who were experiencing pain and depression during their course of treatment can benefit from music therapy. Problem statement Music therapy is a holistic approach that can be used as a distraction and life enrichment during chemotherapy treatments, requiring little expense while improving patient outcomes Acaroglu and Bilgic. [12] Much research has been concluded and many studies have been conducted to investigate the positive effects music therapy has on a patient receiving chemotherapy. Purpose of the study The purpose of this research is to explore the psychotherapeutic effects of music therapy on patients undergoing cancer treatment. The results of this research can be integrated into nursing interventions used to transform patient care. In addition, this study will include both Pamela Reed's theory of self-transcendence and Katherine Kolcaba's Comfort Theory as a focal point to reducing psychosocial effects of cancer. According to Ackerman, [13] the main idea of selftranscendence is rising above oneself-relating one's values and beliefs to something greater. Music guides the patient to that transforming place of a higher perspective. Both cancer and treatments inflict incredible discomfort and anxiety to the patient; these undesirable feelings and side effects can negatively impact the patient and further worsen quality of life. Research question The following research question will be used to guide this literature review: What are the psychotherapeutic effects of music therapy on patients undergoing cancer treatment? Conceptual theory and framework Patient comfort is one of the most important pillars of nursing. Katherine Kolcaba developed the Comfort Theory in 1990's, which is still relevant today. According to the Comfort Theory, comfort exists in three forms: relief, ease and transcendence. [14] The Comfort Theory complements music as a holistic approach to medicine because it can be used as a medium to achieve each of the three forms mentioned above. Music relieves negativity, eases patient anxiety, and promotes patient self-transcendence. Transcendence is the "state of comfort in which patients are able to rise above their challenges." [14] Pamela Reed's Self-Transcendence theory (developed in 1991) can be applied to times of uncertainty and life-altering illnesses. People confronted with a diagnosis of cancer, facing the long road of chemotherapy, encounter many life changes. This life-changing circumstance confronts the patient and their family with new information, challenges, people, concerns, environments, and fears. Self-transcendence guides an individual to expand self-made boundaries in three areas: intra-personally (awareness of one's own philosophies, values and morals), interpersonally (relating to others and the environment) and trans-personally (broadened perspective about that which remains outside the individual's control). [15] Focusing on connecting these areas will grant the individual access to inner peace and elevate them to a place of advanced perspective, fostered by well-being and vulnerability. The model chosen to illustrate the purpose of this literature review incorporates facets of the Comfort Theory and Self-Transcendence. First, the patient's experience is the starting point which provides the need for the model. For this study, a patient suffering from cancer may experience emotional lability. 
Connecting with self, others, the environment etc. and evaluating one's own philosophies and values will aid the individual to find purpose with a more mature perspective of the situation. Second, emotional comfort comes from understanding one's own principles and estab-lishing healthy relationships. Finally, the journey to selftranscendence is not possible without patient participation. The patient must resolve to remain open to the possibilities of well-being through connection, support, and evaluation. Because music can modulate moods and emotions, it can then be used as a medium to guide the patient to a point of self-transcendence. [16] Research design This study was guided by Whittemore and Knafl's integrative literature review design to examine both the psychotherapeutic effects music therapy has on cancer patients and the strategies used as an intervention to improve outcomes in cancer patients. A variety of literature was utilized to determine how music therapy guides patients to achieve mental calmness in the fight through the disease process. The integrative literature reviews allow for multiple sources to be compared, analyzed, and critiqued based on the findings; this research diversity allows for a better understanding and evaluation of whether or not music therapy has a beneficial effect on cancer patients during treatment. Literature search strategy The following databases were searched for this integrative literature review: CINAHL, PubMed and ProQuest. Key words searched included: music therapy, psychological, chemotherapy, depression, anxiety and cancer. Article inclusion criteria included: • Population of patients who were receiving chemotherapy as treatment while using music therapy as adjunct therapy • Articles that were published between 2007-2020 • Documentation of improved psychological effects on patient's wellbeing and improved outcomes • Duration of music therapy and type of music played during the session The research produced a total of 119 articles. Of the 119 articles found, 8 articles met the criteria needed and were chosen for a more thorough review (see Figure 2). Data synthesis & analysis A total of eight articles were reviewed. The findings were assembled into a data matrix table displaying the author, publication date, purpose, sample, design study, results, and limitations of the articles (see Table 1). When selecting articles, the content pertained specifically to cancer patients undergoing chemotherapy. The articles showed various positive psychological effects due to music therapy, thus, narrowing the specific psychological effects studied that successfully influenced patients during chemotherapy. RESULTS A patient fighting cancer undoubtedly suffers a tremendous amount of physical and psychological pain. Bradt, Dileo, Magill, & Teague [17] investigated effects of music therapy and medicinal interventions of both the psychological and physical outcomes in people with cancer; they found that mental health and physical health complement each other. Therefore, when healthy mental habits are established, physical health outcomes should improve as well. Krishnaswamy and Nair [18] advocate for the need of nonpharmacological approaches to reducing anxiety and pain in patients enduring chemotherapy treatments. Acaraglu and Bilgic [12] found that music helped reduce the symptoms of chemotherapy and increased the comfort level of patients post treatment. 
Mahon and Mahon [1] conducted a research study that yielded positive results through both active and passive participation in music therapy sessions with a duration of at least thirty minutes. Aazami, Jasemi & Zabihi [19] found that patients reported feelings of less anxiety and depression after music therapy sessions. They also concluded that music therapy proved to be an inexpensive, non-invasive method to decrease the amount of anxiety and depression experienced by patients suffering from cancer. Galanis et al. [3] reported that patients felt less stress and anxiety after a music therapy session. Not only listening to music, but also participating in playing an instrument or singing reduced anxiety and depression, which are exercises that can be done without the music therapist. [1] Along with the strong evidence that supports the fact that music therapy has a positive effect on patient status during chemotherapy, patient participation is at the center of having a positive outcome; those who did not want music therapy showed no significant improvement of mood. [21] Out of the eight articles that were reviewed, three main psychological impacts were found in relationship to music therapy: anxiety, depression, and quality of life.

Music Therapy and Anxiety.
Anxiety was one of the main psychological factors observed among patients due to the anticipation of chemotherapy. According to Bradt et al., [18] the majority of patients reported anxiety leading up to and during chemotherapy. After implementing music therapy prior to and during chemotherapy sessions, patients reported the benefits of music therapy as well as readiness to explore other emotions related to chemotherapy. Lin, Hsieh, Hsu, Fetzer, & Hsu [21] conducted a study of two patient groups with high state anxiety due to receiving chemotherapy; one group participated in thirty minutes of music therapy while the other group did not. The participants who listened to thirty minutes of music reported a significant reduction in feelings of anxiety while the other group reported no change in anxiety. The result was determined by using a series of anxiety scales such as the Spielberger State Anxiety Inventory-State Anxiety Scale and other standardized units. Music helped in two ways: first, it took them to a place of tranquility; second, music helped them temporarily forget their current stressors. [20] Reed's theory of self-transcendence and Kolcaba's Comfort Theory play an integral part in achieving anxiety relief amidst the perils of cancer. The patient's emotional comfort must be kept at the center. This desired feeling is influenced by the patient's experiences as well as their participation in achieving wellness.
In this case, cancer is the experience and music therapy is the participation-both working together to create a space of comfort. Music Therapy and Depression. Of the eight articles reviewed, depression was another psychological phenomenon that occurred in patients undergoing cancer treatment. Bradt, Dileo, Magill, & Teague [18] found that music therapy had a positive impact on patients experiencing depression as a result of chemotherapy. When patients received music therapy as adjunct treatment, they reported feelings of relaxation and a boost in mood. As seen above, participation is imperative to achieving the desired outcome of self-transcendence and the Comfort Theory. Acaroglu & Bilgic [12] determined from their study that patients showed a significant decrease in depressed mood (among other psychological effects) after listening to preferred music for thirty minutes three to four times a week. Music Therapy and Quality of life. Quality of life is evaluated through a person's self-reflection on satisfaction, emotional state, and feeling of purpose. Warth, Keßler, Hillecke, & Bardenheuer [22] observed patients reporting a greater reduction in fatigue resulting in an improvement on the quality of life scale. It should also be noted that a reduction in anxiety and depression (among other negative emotions) will elevate the patient's experience and ultimately better their quality of life. The improvement of all of these emotions works together towards a common goal of desired patient outcomes. These outcomes are illustrated by implementing the components from the theories of self-transcendence and the Comfort Theory. The result of patient participation despite their circumstances will blossom into feelings of relief, ease and emotional freeness. Music therapy has been proven to be an effective adjunct treatment in improving the quality of life in patients who are receiving chemotherapy. DISCUSSION The purpose of this integrative literature review was to determine the psychotherapeutic benefits of music therapy to help a patient cope with cancer. Three major psychological effects were positively impacted by music therapy: anxiety, depression and quality of life. These psychological effects were all improved with the utilization of music therapy, anxi-ety showing the greatest improvement. Decreasing anxiety prior to and during chemotherapy improves patient outcomes, decreases depression and improves quality of life. [12] While music therapy is the medium to drive a patient towards emotional wellness, the principles of the theories mentioned in this study work together to achieve that desired result. One complements the other. In other words, Reed's selftranscendence theory is the intangible result from the active implementation of Kolcaba's Comfort Theory through the influence and science of music therapy. A variety of music therapy strategies were used to help identify the most beneficial ways to improve mood during cancer treatment. These strategies include playing instruments, singing songs, composing or listening to music. Most sessions reviewed in the literature included thirty minutes of therapy where patients reported improved mood and calmness. Music therapy has a positive psychotherapeutic effect on patients undergoing cancer treatment. It is a holistic approach to decreasing anxiety and depression, and increasing quality of life by providing an outlet for patients to express themselves while offering a safe and economical addition to traditional treatment. 
This alternative therapy will also strengthen relationships between music therapist, nurse and patient to foster the desired goal of healthy outcomes. The qualitative benefit of music therapy is illustrated in the following case study. Case study Mary battled cancer for many years and departed a husband and four kids. She loved butterflies and spent her days building wooden houses when she wasn't too weak from chemotherapy. Working with a music therapist, Mary began to write a song about butterflies with undertones of healing and restoration. She was enthralled by the transformational process of a caterpillar becoming a butterfly, so she wrote about this miraculous transfiguration. This conversion, for Mary, symbolized change and forgiveness-something she battled with deeply. Mary not only dealt with the paralyzing diagnosis of cancer but also with the horrors of past abuse. Her musical composition fostered reconciliation and comfort in the darkest of moments. For Mary, music connected a broken family and provided healing to her wounded soul. [23] The above case study illustrates the model of selftranscendence and the ability of the patient to be open to connecting with the music therapist. The creation of music provided unexpected benefits with integration of images that helped her cope with both her past abuse and her present challenge of coping with cancer. In the case Mary experiences self-transcendence, a state of mind that moves the person beyond their earthly troubles and inspires a spiritual sense of well being and peace. Limitations Music therapy is an individualistic method of treatment because music is a personal expression of art and interpretation. One style of music does not break barriers for all patients; therefore, it is imperative to cater to the patient's musical preferences in order to see the desired effects of music therapy. In many studies, patients did not pick their preferred musical style, which greatly reduced the impact of music therapy. The music therapist or healthcare provider must remain open and flexible to adapt the fundamentals of music therapy to the needs of each patient. Also, the patients who were studied suffered from different cancers and were in different stages of cancer. This variability has a major impact on mood and participation. Suggestions for future research Patient outcomes vary depending on duration of therapy, preferred music genre and strategy. The exact mechanism of the complex psychological and physical responses to music therapy has not been studied thoroughly; while the understanding of how music therapy works is still unknown, research points towards a positive psychological effect with minimal physical changes. [3] Comprehensive research needs to be conducted to establish a more wide-range use of music therapy in the healthcare setting. Inclusive literature needs to be available for use as an alternative method of treatment to provide additional outlets for improvement of health outcomes. The proposed idea will equip healthcare professionals with effective ways to care for cancer patients. More research yielding definitive answers will play a revolutionary role in patient care, especially as a practice that is easily accessible to practitioners and patients alike. Implications for nursing Music therapy should become a common nursing intervention to assuage the discomforts associated with chemotherapy treatment in their patients. Music therapy is an underutilized, effective form of holistic treatment. 
It serves as a healing balm, carrying the weight of human suffering under the most burdensome circumstances and paving the way toward emotional wellness, regardless of the prognosis. Ultimately, more education and resources need to be provided for healthcare professionals so that they are aware of the effectiveness of music therapy during courses of chemotherapy treatment. In addition, more hospitals need to understand the importance of establishing music therapy programs, which will enhance patient health outcomes. Patients should also be involved in selecting the type of music utilized for music therapy. CONCLUSION This integrative literature review has provided a preliminary, comprehensive study of the psychotherapeutic effects of music therapy used to improve health outcomes in patients undergoing cancer treatment. The research has identified positive results of music as a holistic approach to emotional wellness. More research needs to be conducted, but the information already found shows promising results. No patient should feel uncomfortable or uneasy in the healthcare setting; no one enjoys going to the emergency department, staying at the hospital, or, much worse, spending time in an oncology unit. The reality, though, is that patient comfort gets lost in the wake of treatment practices and routine busyness. Part of providing excellent care includes ensuring the care environment is inviting and welcoming so that patients feel at home away from home. This model is often difficult to maintain, but more can be done to ensure patient comfort. Oncology units that incorporate best evidence-based practices to support calm and solace for their patients should include music therapy. Music holds the power to transform moods, enrich a weary soul, shine a light in the darkest of times, and carry the heaviest burdens. It is a clinical pearl from which every patient should benefit. Could music therapy be the key that unlocks a new realm of discovery and enhances the patient experience throughout the course of chemotherapy treatment? CONFLICTS OF INTEREST DISCLOSURE The authors declare that there is no conflict of interest.
2022-01-28T17:13:45.589Z
2022-01-23T00:00:00.000
{ "year": 2022, "sha1": "7e6f31a96b3253a061124580588b96489cba7ae0", "oa_license": null, "oa_url": "https://www.sciedupress.com/journal/index.php/jnep/article/download/21182/13228", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "a776697749038eeb19d114901f21e14da4d2819e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
12164549
pes2o/s2orc
v3-fos-license
Is malaria illness among young children a cause or a consequence of low socioeconomic status? evidence from the united Republic of Tanzania Background Malaria is commonly considered a disease of the poor, but there is very little evidence of a possible two-way causality in the association between malaria and poverty. Until now, limitations to examine that dual relationship were the availability of representative data on confirmed malaria cases, the use of a good proxy for poverty, and accounting for endogeneity in regression models. Methods A simultaneous equation model was estimated with nationally representative data for Tanzania that included malaria parasite testing with RDTs for young children (six-59 months), and accounted for environmental variables assembled with the aid of GIS. A wealth index based on assets, access to utilities/infrastructure, and housing characteristics was used as a proxy for socioeconomic status. Model estimation was done with instrumental variables regression. Results Results show that households with a child who tested positive for malaria at the time of the survey had a wealth index that was, on average, 1.9 units lower (p-value < 0.001), and that an increase in the wealth index did not reveal significant effects on malaria. Conclusion If malaria is indeed a cause of poverty, as the findings of this study suggest, then malaria control activities, and particularly the current efforts to eliminate/eradicate malaria, are much more than just a public health policy, but also a poverty alleviation strategy. However, if poverty has no causal effect on malaria, then poverty alleviation policies should not be advertised as having the potential additional effect of reducing the prevalence of malaria. Background Malaria is commonly considered a disease of poverty [1][2][3]. Recent Demographic and Health Survey (DHS) data for the United Republic of Tanzania (from now on referred to as Tanzania), illustrate this relationship at the individual child level (Figure 1). The observed negative correlation between malaria and socioeconomic status (SES) may indicate that malaria infections cause low SES (e.g. ill workers are less productive), or that poverty increases the risk of malaria transmission (e.g. the poor are less able to afford malaria preventative measures). Also, there may be incidental associations, such as, improvements in road infrastructure in a region could simultaneously increase household incomes (e.g. through improved market access) and reduce malaria incidence (e.g. through better access to health care facilities). Understanding whether the malaria-poverty correlation implies causality and, if it does, the direction of causality, has crucial implications for malaria control efforts, such as the Roll Back Malaria (RBM) and the President's Malaria Initiative (PMI), and for the recent call for malaria elimination, with the ultimate goal of malaria eradication [4]. If a bi-directional link exits between malaria and SES, then (i) poverty alleviation policies (e.g. income redistribution, job creation, and educational investment) may have the side benefit of reducing malaria, becoming important tools to complement and maximize the impact of disease prevention and treatment efforts; and (ii) malaria control strategies may be beneficial for reducing poverty. Worral, Basu, and Hanson [4] reviewed about 50 micro-level studies that examined whether SES influences the uptake of malaria prevention and treatment, and if malaria is more common among the poor. 
Results from the former tended to be in good qualitative agreement, and showed a positive correlation between SES and the use of preventative malaria measures such as insecticide-treated nets (ITNs). Regarding the incidence of malaria on the basis of poverty status, the magnitude and significance of results varied across studies, depending on how poverty and malaria were measured. This review effort, however, highlighted important caveats in the existing work [4]. First and foremost, studies assessed malaria incidence as self-reported fever episodes, which is very likely to overestimate the results since a significant percentage of fever episodes are not due to malaria infections [5][6][7]. Second, reviewed studies failed to account for possible two-way causality in the association between malaria and poverty; this methodological limitation suggests caution in comparison and interpretation of extant research. Recent papers by Somi et al. [7,8] constitute the first attempts to address the research caveats mentioned above. The authors used laboratory-confirmed malaria morbidity data (parasitaemia based on microscopy), and accounted statistically for potential bi-directional malaria-poverty causality by using instrumental variable probit regression and propensity score matching. Data from 52 rural villages in south-eastern Tanzania, which comprised two Demographic Surveillance Sites (DSS), were used to construct a wealth index (proxy for SES), and to provide all other variables included in the model [8]. Results indicated that SES was negatively associated with malaria: a one-unit increase in the wealth index resulted in a 4% decrease in the prevalence of malaria infection. Malaria was also negatively associated with SES: an infection resulted in a reduction of 0.32 units in the wealth index [8]. In this paper, the aforementioned limitations are addressed, and three additional contributions to current knowledge are made. First, a formal conceptual framework of the two-way malaria-SES link is developed, utilizing a multi-disciplinary and multi-scale approach, which can be applied in other areas and can guide future empirical modelling. Existing work has relied on conceptual frameworks restricted to variables that are available for modelling exercises [8], and the lack of a formal conceptual framework hinders the comparability and replicability of studies to other settings [4]. Second, the bi-directional malariapoverty causality is examined using a simultaneous equation model estimated with nationally representative DHS data for Tanzania that included malaria parasite testing for Figure 1 Malaria prevalence among young children (six-59 months), by household wealth level, Tanzania, 2007/08. The wealth variable is based on the THMIS wealth index, generated with principal components analysis. The categories poor, middle, and rich represent, respectively, the bottom 40%, next 40%, and upper 20% of the THMIS wealth index distribution. young children (six-59 months). This represents a major improvement to recent estimates that relied on data from a few rural districts [7,8]. Also, although a previous study utilized DHS data for 22 countries [9], it used self-reported information on fever in the two weeks preceding the survey as a proxy for malaria, which is not an accurate measure of the disease burden [10], and did not examine causality in both directions. Third, geo-referenced information is included in the model. 
Chima, Goodman, and Mills [6] have urged researchers to account for geographic variations in the relationship between malaria and poverty, but data limitations have often precluded spatially explicit analysis. The empirical analysis presented here benefits from the fact that the Tanzania DHS data contain such information. Among the geographic variations explored in the empirical modelling are climatic conditions, local infrastructure, proximity to putative sources of malaria transmission, and characteristics of the local and human-made environment. Study area Tanzania is located in East Africa, along the shores of the Indian Ocean, between longitudes 29° and 41° East and latitudes 1° and 12° South. Mainland Tanzania borders Kenya and Uganda (north); Rwanda, Burundi, and the Democratic Republic of Congo (west); Zambia and Malawi (south west); and Mozambique (south). Zanzibar lies off the eastern coast, approximately 30 km from the mainland. The country is divided into two distinct rainfall patterns: unimodal and bimodal. The former has one marked rainfall season that often occurs between November/December and April, and is observed in the southern, south-western, central, and western areas of the country. Humidity is high between December and May. The bimodal pattern has two rainfall seasons, an intense one observed between March and May, and a milder one occurring between October and December. Humidity is normally high from March to June and from November to December. Regardless of the rainfall pattern, temperatures are generally high between October and March. According to the 2004 update of the Global Burden of Diseases (GBD) [9], 44% of the disease burden in Tanzania (as measured by disability-adjusted life years, DALYs) was due to infectious and parasitic diseases. Among those diseases, malaria carried the largest burden, 20%. Malaria is the main cause of inpatient and outpatient consultations and the major killer of children under five in Tanzania. Malaria transmission is stable perennial to stable seasonal in over 80% of the country, and the remaining areas have unstable transmission prone to malaria epidemics [10]. As a result, all of the country's 42 million people are at risk of acquiring the disease. Recent statistics [11] show that Tanzania had a gross national income per capita of $1,200 (in 2007 purchasing power parity), and it ranked low on quality-of-life indicators such as life expectancy at birth (51 years for males, 53 years for females), the adult literacy rate (69%), and the under-five mortality rate (118 per thousand). As for malaria, the estimated number of clinical cases per year ranges between 14 and 18 million, and the estimated number of deaths is about 60,000 (approximately 80% of deaths are among children under five years of age) [11]. Malaria is the leading cause of outpatient visits, of deaths among hospitalized patients, and of admissions of children under five years of age to medical facilities. For these reasons, malaria is considered a major cause of low worker productivity among those between 15 and 55 years old, and a key impediment to human capital formation for people between five and 25 years of age. The disease is considered to be a formidable obstacle to foreign investment and economic development in Tanzania [12]. DHS data In 2007-08, a special DHS was carried out in Tanzania: the HIV/AIDS and Malaria Indicator Survey (THMIS). As part of the THMIS, blood samples were collected from children aged six-59 months.
Malaria infections were assessed through the use of the Paracheck Pf TM rapid diagnostic test (RDT). The THMIS interviewed the guardians of 7,502 children; 6,686 of these children were living with their guardian and therefore had information available on household and location characteristics. Only children aged six-59 months were eligible for malaria testing, and there were 5,955 children in this age group. The guardians of 5,627 of these children consented to the child being tested for malaria. There are missing values for 43 children for whom consent for malaria testing was received. Therefore, the THMIS has confirmed results of a malaria infection for 5,584 children aged six-59 months [12]. Dropping from the sample those children with missing values for wealth-related variables yielded a working sample of 5,547 young children. The survey was conducted from October 2007 to February 2008, is nationally representative, and key indicators can be calculated for urban and rural areas, and for regions [12]. Therefore, the survey comprises diverse settings: urban and rural, low and high malaria risk, poor and relatively wealthy. As is usual practice in the DHS, the THMIS collected data on household and respondent characteristics, as well as information on malaria prevention and treatment outcomes. The survey also contains geographical information that allows for the assessment of spatial variations in the bi-directional malaria-poverty causality. First, it contains the centroids of the sample clusters, allowing for the assembling of varied cluster-level data. Second, the THMIS includes information on region of residence (there are 26 regions in Tanzania), facilitating the merging of various region-level information that characterizes the local environment (as described below) with individual/household data. Individual-level THMIS variables utilized in this study include: (i) the result of the child's malaria test (1 = child tested positive for malaria); (ii) age of the child tested for a malaria infectioncategorized as six-23 months, 24-35 months, 36-47 months, and 48-59 months; (iii) a binary variable indicating if the mother of the child is engaged in farming activity; (iv) a continuous variable reporting the number of overnight out-of-town trips that the mother of the child made in the previous year; (v) a binary variable indicating that the child's mother and/or the child's father had secondary education or higher (generated from variables for the education level of the child's mother and of the household head); and (vi) a binary variable indicating the use of an ITN by the child the night before the interview. 
Household-level variables utilized are: (i) age of the head of the household in yearsage squared was also included, in order to assess if the relationship between wealth and age increases as one ages; (ii) sex of the head of the household (1 = female); (iii) number of children aged under five years living in the household; (iv) number of children aged six-12 years living in the household; (v) number of adolescents aged 13-17 living in the household; (vi) number of adults aged 18-64 years living in the household; (vii) number of adults aged 65 or more years living in the household; (viii) a binary variable indicating if the house received indoor residual spraying (IRS) in the previous year; (ix) a binary variable indicating if the household is located in a rural area; (x) a binary variable indicating if the house has improved roof material (iron sheet, concrete, tiles or asbestos); (xi) a binary variable indicating if the house has improved wall material (brick, wood/timber, cement, stones or iron/metal); (xii) physical assets owned by the household (car, motorbike, bicycle, fridge, television, radio and mobile phone); (xiii) source of water; (xiv) type of toilet; (xv) access to electricity; (xvi) number of rooms per household member; and (xvii) type of flooring material in the house. Although the THMIS has a variable to measure the general health status of the household head, and this variable is expected to be important to household wealth, its inclusion would reduce the sample by more than 10% due to missing values. Therefore, the variable was not considered in the model. Based on household-level variables (xii)-(xvii), a wealth index was created using principal component analysis (PCA), as proposed by Filmer and Pritchett [13] and Vyas and Kumanarayake [14]. The first principal component was used to determine weighting factors for each variable measuring wealth and to define the wealth index. Although the THMIS does provide a wealth index calculated using the same procedure, the index includes bed net ownership, and type of roof and walls used in the house. These variables, however, are explanatory factors of the prevalence of malaria infection, and therefore should not be included in the wealth index. To control for the seasonality of malaria transmission, the study utilized binary variables indicating the month between October 2007 and February 2008 when the household interview and malaria testing took place. Finally, cluster-level variables collected by the THMIS included in the analysis are: (i) average elevation; and (ii) distance from the cluster centroid to the nearest health facility (km). Regional data on the local environment In order to better account for potential impacts of the local environment, data were assembled from varied sources with the aid of geographical information systems (GIS). Table 1 summarizes all the environmental data gathered for the analysis, and the sources from which they were obtained. Data were treated to reflect regionlevel characteristics, which could then be merged with the THMIS data. Rainfall data were derived from satellite-based estimates [15] available by dekads (periods of roughly 10 days). Different ways to summarize rainfall were tried, in order to test which variable would be able to better capture the relationship between precipitation and malaria [16,17]. 
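Returning briefly to the household wealth index described above: as a minimal, illustrative sketch (not the authors' actual workflow), the first-principal-component weighting could be reproduced in Python roughly as follows; the column names are hypothetical stand-ins for the THMIS asset, utility/infrastructure and housing items.

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical household indicators standing in for the THMIS asset,
# utility/infrastructure and housing variables.
df = pd.DataFrame({
    "owns_radio":       [1, 0, 1, 1, 0],
    "owns_mobile":      [0, 0, 1, 1, 0],
    "owns_bicycle":     [1, 1, 1, 0, 0],
    "improved_water":   [0, 1, 1, 1, 0],
    "electricity":      [0, 0, 1, 1, 0],
    "rooms_per_member": [0.5, 0.7, 1.0, 1.5, 0.4],
})

# Standardize the indicators so each contributes on a comparable scale,
# then take the first principal component as the wealth index.
X = StandardScaler().fit_transform(df)
pca = PCA(n_components=1)
df["wealth_index"] = pca.fit_transform(X)[:, 0]

# The first-component loadings act as the weighting factors for each indicator.
weights = pd.Series(pca.components_[0], index=df.columns[:-1])
print(weights)

The sign of the first component is arbitrary, so in practice the index is oriented such that higher values correspond to greater asset ownership and better housing conditions.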
Thus, the following variables were constructed at the regional level: (i) total and variance of rainfall in the dry, rainy, and agricultural seasons, as well as deviations of the total from a short-term (2003-2008) mean; (ii) total amount of rainfall by month, as well as deviations of monthly rainfall from the short-term mean; and (iii) proportional difference between the rainfall in The road network layer was rasterized in ArcMap (ESRI, Redlands, CA, USA) to allow for the calculation of a road density indicator for each region (percentage of the region's area utilized as roads). Since there is no standard width for roads, a 10 m-wide road was considered based on the current recommendations for construction of new roads in the country. A similar procedure was utilized to obtain the percentage of the region's area covered by rivers, assuming an average width of 3 m. In addition, the different types of land use were summarized by region, and the percentage of the region being used for agricultural cultivation was included in the analysis. Also, the coefficient of variation of the slope was calculated by region for inclusion in the analysis. Finally, a layer with major lakes in Tanzania was utilized to calculate the distance from each cluster centroid to the nearest lake (km). Conceptual framework A household's SES is expected to impact malaria incidence among its members primarily because limited economic resources reduce the uptake of malaria preventative and/ or curative measures, such as use of anti-malarial drugs, regular adoption of mosquito avoidance measures, and search for health care [5]. This is highly plausible since those measures have substantial direct costs. In sub-Saharan Africa, households spend as much as $180 per year (1999 US dollars) on such measures, which represents a sizeable share of household income [6]. In the opposite direction, it is anticipated that malaria illness reduces a household's potential to accumulate wealth in at least three ways. First, ill health reduces labour supply and productivity, and the resultant reduction in household income makes saving difficult [6,[14][15][16]. The time lost per malaria episode for a sick adult ranges, on average, from one to five days, and the same amount of time is lost to work when adults care for a sick child with malaria [6]. Second, malaria imposes health costs and, in the absence of formal health insurance markets, individuals may cover such costs by drawing down their savings, selling physical assets, or borrowing money [6,14]. Third, malaria may induce households to change their productive activities ex ante, and such adaptation may come at a cost to wealth accumulation [6,17,18]. Thus, it is hypothesized that a vicious circle of malaria illness and low SES exists, and this analysis aims to study and quantify these relationships empirically. The conceptual framework is proposed based on literature review of factors that explain malaria prevalence and SES ( Figure 2). The framework provides a foundation for studying the two-way relationship between malaria and SES, proxied by wealth, as explained later in this section. To understand and measure bi-directional malaria-poverty causality, the conceptual framework ( Figure 2) accounts for numerous factors that interact and contribute to transmission of malaria and to poverty. These factors are grouped in three categories: (i) individual and household; (ii) geographical; and (iii) macro. 
The first includes: (a) individual characteristics: genetic immunity; acquired immunity; migratory pattern; age; economic activity; education; and cultural beliefs; (b) household composition: Turning to wealth, the conceptual framework assumes household wealth holdings provide a useful metric for economic well-being. In the poverty literature, SES is typically measured in terms of expenditure or income. Such information is unavailable in the DHS. The decision to use a wealth-based SES measure, however, is neither unprecedented [7,13,18,19] nor made purely for practical reasons. A recent comparison of SES measures for multivariate analysis of the socioeconomic gradient of malaria prevalence found that a wealth-based index is a useful alternative to the usual consumption measure [7]. In addition, wealth provides a more complete picture of household living standards than income. Wealth provides a household with economic stability, because households with liquid assets are better able to endure income shortfalls. A household experiencing temporary low income due to the job loss of a household member could be classified as income poor. In fact, such a household may not experience economic hardship if liquid assets are available to smooth consumption over income fluctuations. Another key role of assets is in providing a foundation for risk-taking that leads to resource accumulation over time [20]. For instance, household savings can be used to start up a business or invest in a child's education. Thus, while a lack of income means that people struggle to get by, a lack of assets can prevent them from getting ahead. More details are provided in Additional file 1. Empirical model To examine whether the observed association between malaria prevalence (M) and household wealth (W) implies causality, a simultaneous equation model was estimated, as summarized by equations (1) and (2). In both equations, i and j index individuals (children aged six-59 months) and regions, respectively; X is a vector of individual- and household-level factors; G is a vector of geographic determinants; S represents a set of binary variables for the month of interview; α and β are coefficients to be estimated; and ε is the error term. The parameters of primary interest to this study are α1, which indicates the direction and magnitude of the link from wealth to malaria prevalence, and β1, which indicates the direction and magnitude of the link from malaria prevalence to wealth. Table 2 provides definitions and descriptive statistics for all variables included in the model. A key estimation issue is simultaneity bias, due to the joint determination of M and W. Simultaneous equation models generally violate a standard assumption of the classical linear regression model, which states that all explanatory variables should be uncorrelated with the error term [62]. To illustrate that this assumption is unlikely to hold, consider a shock to the error term of the malaria equation (1) in a given time period. Ceteris paribus, this would be associated with an increase in M. Turning to equation (2), it is apparent that an increase in M will, all else being equal, be associated with a change in W. This suggests that the error term of equation (1) and W move together; that is, there is correlation between an explanatory variable and the error term. If the simultaneous model were estimated using ordinary least squares (OLS), the correlation between error terms and endogenous explanatory variables would result in biased estimates of α1 and β1.
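Before turning to the source of this bias, note that the two structural equations themselves did not survive the text extraction. A plausible reconstruction from the definitions above is sketched below in LaTeX; the exact covariate sets entering each equation and the error-term notation are assumptions rather than the authors' original typesetting (the month-of-interview dummies S are placed only in the malaria equation, and the covariate vectors X^M and X^W are allowed to differ, consistent with the identification strategy described later).

\begin{align}
M_{ij} &= \alpha_0 + \alpha_1 W_{ij} + \boldsymbol{\alpha}_2' \mathbf{X}^{M}_{ij} + \boldsymbol{\alpha}_3' \mathbf{G}_{j} + \boldsymbol{\alpha}_4' \mathbf{S}_{ij} + \varepsilon^{M}_{ij} \tag{1} \\
W_{ij} &= \beta_0 + \beta_1 M_{ij} + \boldsymbol{\beta}_2' \mathbf{X}^{W}_{ij} + \boldsymbol{\beta}_3' \mathbf{G}_{j} + \varepsilon^{W}_{ij} \tag{2}
\end{align}

Substituting (2) into (1) and collecting terms yields the reduced form referred to later as equations (3) and (4), in which M is a function of exogenous variables only.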
To demonstrate the source of bias, consider α1, which is supposed to be the effect of W on M holding other factors constant. In a simultaneous model estimated with OLS, α1 instead measures some combination of the effects of W and M on each other due to joint determination. Therefore, to obtain consistent estimates of the dual causality between malaria and wealth, the model described by equations (1) and (2) was estimated using instrumental variables (IV) regression [62]. In this approach, the basic strategy to deal with simultaneity bias is to find proxy variables for the endogenous explanatory variables M and W. The key to the IV approach is finding appropriate "identifying instruments" to include in the first-stage regression. A valid identifying instrument is a variable that is highly correlated with the endogenous variable and uncorrelated with the error term. As a test of IV validity, pair-wise correlations were calculated and the significance of the selected IVs in the model was assessed. Figure 2. Conceptual framework of the bi-directional malaria-poverty causality. The wealth equation was identified using the following IVs: binary variables for the child's age and binary variables for the month of interview. These IVs are expected to be highly associated with malaria incidence but to have no direct association with household wealth. Note that the variables for bed net, IRS, improved roof, improved walls, and geographic mobility of the child's mother were excluded from the wealth equation due to a concern that these variables are endogenous to wealth and could thereby result in biased parameter estimates. The malaria equation (1) was identified with the following instruments: age and gender of the household head, and household composition variables. The household composition variables are the numbers of young children (less than five years of age), children aged five to 12 years, teenagers aged 13-17 years, adults aged 18-64 years, and elderly aged 65 years and over. These IVs are expected to be highly correlated with household wealth but not to directly influence malaria transmission. To explain the IV approach, the following steps were taken to estimate the effect of malaria prevalence on household wealth. The first step was to re-write structural equation (1) as a reduced form in which malaria is expressed as a function of all exogenous variables in the simultaneous equation system plus an error term. This was done by substituting equation (2) into equation (1) and collecting terms, to arrive at equation (3). A relabelling of the parameters yielded the reduced-form equation (4). The next step in the IV approach was to estimate reduced-form equation (4). The third step was to use the first-stage regression results to obtain the predicted value of M (prevalence of malaria infection). The final step in the IV approach was to estimate structural equation (2). In this second-stage regression, the predicted value of M from the first-stage probit regression was used as the malaria explanatory variable. By substituting predicted malaria prevalence for observed malaria prevalence, we presumably eliminated the correlation between the explanatory variable M and the error term of the wealth equation. To account for the binary nature of the malaria variable M, a treatment regression approach for the first- and second-stage regression models was used; the treatment in this case was having malaria.
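As a purely illustrative sketch of this two-stage logic, the following Python snippet runs both stages by ordinary least squares on synthetic data. This is only a linear 2SLS stand-in, not the maximum-likelihood treatment-effects/IV-probit estimation actually performed by the authors, and every variable name and coefficient value below is invented for the example.

import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Exogenous controls shared by both equations, plus instruments that
# enter only the malaria equation (stand-ins for the child-age and
# month-of-interview dummies).
X = rng.normal(size=(n, 3))
Z = rng.normal(size=(n, 2))
e_m = rng.normal(size=n)
e_w = 0.6 * e_m + rng.normal(size=n)   # correlated errors -> M is endogenous

# Synthetic "true" system: malaria status M lowers wealth W.
M = (X @ np.array([0.3, -0.2, 0.1]) + Z @ np.array([0.8, 0.5]) + e_m > 0).astype(float)
W = 2.0 - 1.9 * M + X @ np.array([0.5, 0.4, -0.3]) + e_w

def ols(y, A):
    # Least-squares coefficients of y on the columns of A.
    return np.linalg.lstsq(A, y, rcond=None)[0]

ones = np.ones((n, 1))

# Stage 1 (reduced form): regress M on all exogenous variables.
A1 = np.hstack([ones, X, Z])
M_hat = A1 @ ols(M, A1)

# Stage 2: regress W on predicted malaria and the controls only.
A2 = np.hstack([ones, M_hat[:, None], X])
b_2sls = ols(W, A2)

# Naive OLS on observed M for comparison (biased by the correlated errors).
b_ols = ols(W, np.hstack([ones, M[:, None], X]))
print("naive OLS:", round(b_ols[1], 2), "  2SLS:", round(b_2sls[1], 2))

Because the synthetic errors of the two equations are deliberately correlated, the naive estimate drifts away from the true value of -1.9 in this example, while the two-stage estimate recovers it approximately.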
The treatment regression model is estimated with maximum likelihood estimation. The IV approach for estimating the effect of wealth on malaria is similar to that described above, but the model is an IV probit model since the first-stage model has a continuous dependent variable while the second-stage model has a binary dependent variable. All model estimations were conducted using Stata version 11 (Stata Corp.; College Station, TX, USA). Table 3 presents the results for tests of internal coherence of the wealth index. The table displays, by wealth groupings, the averages for variables measuring asset ownership, access to utilities and infrastructure, and housing conditions. In general, the results indicate sizeable differences across wealth groups, and these differences were in the direction that would be expected if the wealth index provided a good measure of SES. For example, while none of the poorest households owned a mobile phone, 33% and 89% of the middle and richest households did, respectively. The percentage of households that had access to an improved source of water also increased by wealth group. The finding that the middle of the wealth distribution had the highest average bike ownership suggests that as wealth increases households were initially more likely to own a bicycle but at a certain wealth level bicycle ownership declined. This is likely explained by wealthier households preferring to purchase a motorbike or a car rather than a bike. Wealth index Does malaria illness among young children contribute to low household wealth? The wealth model suggests that malaria illness among young children (six-59 months) was a contributing factor for low household wealth in Tanzania ( Table 4). The correlation between malaria and the IVs used in the model were statistically insignificant at standard test levels (5%) or had coefficients close to zero, and most of the IVs were highly significant in the wealth equation. These results support the validity of the IVs used in the malaria model. Results shown in Table 4 indicate that the malaria variable, which was predicted from a first-stage, reducedform regression, was highly statistically significant. Controlling for other factors that influence wealth, households that had a child who tested positive for malaria at the time of the survey had a wealth index that was, on average, 1.9 units lower (p-value < 0.001). To put this figure in perspective, note that the standard deviation for the wealth index is about 2, which suggests that malaria among young children had a large negative effect on household wealth. Results for the control variables provide an indication of how well the model fits the data. Twelve of the 19 controls were statistically significant at standard test levels and findings for these variables conform to prior expectations, as previously described in the proposed conceptual framework. Results for household-level variables indicate that household wealth was negatively associated with female headship, number of young children, and agricultural occupation of the child's mother. Household wealth was positively associated with number of household members aged 13 and above, and secondary education of the child's mother and/or father. Households had lower wealth if they were located at higher elevations, had poor market access (as measured by 10 mwide road density and distance to health facility), and were in a rural area. 
Higher rainfall during the survey period compared to the short-term mean for 2003-2008 was correlated with higher wealth, which is as expected in a country where the majority of the population earns a living from rain-fed agriculture. Does low household wealth increase the risk of malaria illness among young children? Table 5 presents the results of the malaria model. The lack of significant correlations between the IVs and wealth, and the fact that most IVs were significant in the malaria equation, provide evidence of the validity of the instruments used. Model results indicate that malaria prevalence among young children was unrelated to the household's wealth position. The coefficient of the wealth index had the anticipated negative sign, but it was not statistically significant (p-value = 0.677). Turning to the control variables for the full model, children less than two years of age had lower malaria prevalence than children between the ages of two and four years. Children were less likely to have malaria if they lived at higher elevations, if they lived in proximity to a health facility, if road density was high, if they slept under an ITN the night prior to the interview, and if they lived in houses where IRS was done in the year before the survey. Malaria prevalence was higher if rainfall during the survey period was greater than the mean value observed during 2003-2008. Living close to a lake, river, or agricultural field, or living in a rural area, was found to be linked to higher malaria prevalence. Finally, children were more likely to test positive for malaria in January 2008 compared to October 2007. This finding is not unexpected given that rainfall levels were, on average across the 26 Tanzania regions during the survey period, 630 mm higher in January than in October. (Notes to the regression tables: the sample size for Tables 2 and 3 differs from the sample size for Tables 4 and 5 because some of the explanatory variables had missing values, leading to additional reductions in sample size for the regressions; * indicates statistical significance at the 0.05 significance level or better; the estimation of standard errors was adjusted for clustering on households to account for possible non-independence of observations within households, since children in the same household are expected to be similar to each other on account of shared genetics and/or home environment.) Discussion This paper proposed a conceptual framework of the bidirectional link between malaria and SES, utilizing a multi-disciplinary and multi-scale approach. The framework presented a comprehensive picture of mechanisms through which malaria and SES may impact each other, and guided the selection of variables included in the empirical model here estimated. Many of the factors listed in the framework were not available in the THMIS and, therefore, the empirical model was as comprehensive as the availability of data allowed. Yet, the inclusion of environmental variables generated with the aid of GIS was an important contribution to the analysis, and facilitated controlling for potential impacts of the natural and human-made environment on the prevalence of malaria infections. The bi-directional association between malaria and SES was appraised with nationally representative data for Tanzania that assessed malaria infections based on RDTs, accounting for environmental variables assembled with the aid of GIS.
Results show that households with a child who tested positive for malaria at the time of the survey had a wealth index that was, on average, 1.9 units lower (p-value < 0.001), and that an increase in the wealth index did not reveal significant effects on malaria. These results differ from Somi et al. [8], who reported that malaria was negatively associated with SES, and that SES was also negatively associated with malaria. However, the results presented here agree with the bulk of the literature that finds no statistically significant association between SES and malaria illness [4], although the latter literature does find that SES has a positive influence on malaria prevention and treatment seeking. Also, the results presented here are specific to children aged six-59 months, while Somi et al. [8] did not restrict their analysis to this age group. Findings of the present study implicating malaria incidence as a cause of poverty, and those presented by Somi et al. [8], have important policy implications. If malaria is indeed a cause of poverty, then one could argue that malaria control activities, and particularly the current efforts to eliminate/eradicate malaria, are much more than just a public health policy; they are also a poverty alleviation strategy. The lack of an effect of SES on malaria also has important policy implications. If poverty has no causal effect on malaria, then poverty alleviation policies should not be advertised as having the potential additional effect of reducing the prevalence of malaria. For example, recent studies showed evidence that microfinance institutions can be used to effectively deliver malaria knowledge to communities benefiting from loans, and that this had a significant impact on prevalence levels [21,22]. Yet, there is no evidence that poverty reduction resulting from microfinance programmes can by itself result in lower malaria transmission. This study has some limitations. First, the validity of the proposed identifying instruments could be disputed. However, supporting empirical evidence of their appropriateness was presented in the Results section. Second, the THMIS data are cross-sectional and observational, which limits the ability to infer causality. Future research on malaria-poverty dual causality should use longitudinal data and take advantage of natural experiments, to the extent that this is possible. Third, the proxy for SES was a wealth index computed through PCA, based on asset ownership, access to services/infrastructure, and housing characteristics. The THMIS data did not include information on the current value, purchase price, or vintage of assets. These data omissions posed some complications for assigning a monetary value to the household's stock of wealth. As a result, the wealth index is a relative measure of well-being. Therefore, the results cannot be used to estimate the magnitude of the reduction in household wealth resulting from malaria illness among children. Datasets with other measures of economic well-being, such as income and consumption, would facilitate estimating some of the economic costs of malaria illness among young children.
Fourth, the clusters used in the THMIS do not have spatial boundaries, due to confidentiality issues. Therefore, while cluster centroids are provided by the THMIS, cluster-level environmental data cannot be assigned, and the only geographical scale that could be used was the region. Finally, the effect of malaria illness among young children on household SES is likely to be of a different magnitude from the effect among working adult members. On the one hand, malaria episodes among young children should be more severe and therefore involve greater costs for treatment. On the other hand, malaria illness among working adults should have more of an impact on household income via reductions in labour supply and labor productivity. Further research is now being conducted, using other DHS data that will enable comparison of malaria-SES effects for women (including pregnant women) and young children. Additional file Additional file 1: Details on the conceptual framework .
2017-06-16T17:41:14.017Z
2012-05-09T00:00:00.000
{ "year": 2012, "sha1": "e49c7280dbdcc755f8f6504d8de34c50301526b0", "oa_license": "CCBY", "oa_url": "https://malariajournal.biomedcentral.com/track/pdf/10.1186/1475-2875-11-161", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5ed2b3f39e98e94c702d5486d6bc8e3bd08b9f9b", "s2fieldsofstudy": [ "Medicine", "Economics" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
248408737
pes2o/s2orc
v3-fos-license
Thioredoxin-1: A Promising Target for the Treatment of Allergic Diseases Thioredoxin-1 (Trx1) is an important regulator of cellular redox homeostasis that comprises a redox-active dithiol. Trx1 is induced in response to various stress conditions, such as oxidative damage, infection or inflammation, metabolic dysfunction, irradiation, and chemical exposure. It has shown excellent anti-inflammatory and immunomodulatory effects in the treatment of various human inflammatory disorders in animal models. This review focused on the protective roles and mechanisms of Trx1 in allergic diseases, such as allergic asthma, contact dermatitis, food allergies, allergic rhinitis, and drug allergies. Trx1 plays an important role in allergic diseases through processes, such as antioxidation, inhibiting macrophage migration inhibitory factor (MIF), regulating Th1/Th2 immune balance, modulating allergic inflammatory cells, and suppressing complement activation. The regulatory mechanism of Trx1 differs from that of glucocorticoids that regulates the inflammatory reactions associated with immune response suppression. Furthermore, Trx1 exerts a beneficial effect on glucocorticoid resistance of allergic inflammation by inhibiting the production and internalization of MIF. Our results suggest that Trx1 has the potential for future success in translational research. INTRODUCTION Thioredoxin (Trx) is a ubiquitously expressed protein with a low molecular weight of 12 kDa. It is a part of the Trx system that includes NADPH and Trx reductase (TrxR) (1). Trx shows thioldisulphide reductase activity that is influenced by a highly conserved active site (-Cys32-Gly-Pro-Cys35-) (2). The reduced form of Trx transfers its reducing equivalents to disulphides within the target molecule and catalyzes their reduction. TrxR uses NADPH in this process to reduce the active site disulphide in the Trx substrates to dithiol (3) ( Figure 1A). Overall, the Trx system plays a critical role in regulating the cellular redox balance through the reversible thiol-disulphide exchange reaction. Indeed, there are three distinct forms of human Trx, encoded by separate genes: cytosolic Trx (Trx1) and mitochondrial Trx (Trx2), and a spermatid-specific isoform of Trx (SpTrx/Trx3). Trx1 is a major isoform of Trx that is located in the cytoplasm but is also translocated to the nucleus. Trx1 can directly scavenge reactive oxygen species (ROS) and thereby protect against oxidative stress (4). It is also involved in various redoxdependent cellular processes, such as gene expression, signal transduction, and cell growth and apoptosis, and interacts with various target molecules (5). Under stress conditions, Trx1 is released into the extracellular space where it exerts a cytoprotective effect and shows cytokine-like activities (6). Trx2 is a mitochondrial redox protein sharing 35% sequence homology and similar catalytic properties with Trx1 in vitro, and possesses the Trx1 active-site but lacking additional structural cysteines (7). It plays a critical role for scavenging ROS to maintain a reducing status in the mitochondrial matrix (8). Allergic diseases include immune-mediated disorders mainly characterized by a Th2 immune response phenotype. In patients with asthma, plasma Trx1 levels were significantly higher in the attack stage than those in the remission stage, and these levels of Trx1 substantially increased with the severity of the asthma attack (9). This implies that Trx1 can be a useful clinical parameter in asthma progression prediction. 
The protective effects of Trx1 are associated with the pathogenesis of several human disorders, including metabolic syndromes and neurodegenerative, cardiovascular, and inflammatory diseases (10)(11)(12)(13). In this review, we focused on recent studies on the underlying intercellular and intracellular mechanisms through which Trx1 regulates immune cells in response to allergic inflammatory diseases, such as allergic asthma, food and drug allergies, contact dermatitis, and allergic rhinitis (AR), as well as identifying the potential Trx-based therapeutic strategies for treating allergic diseases. Allergic Asthma Airway inflammation in allergic asthma is a complex process, and Th2-type inflammation and excessive accumulation of eosinophils are the important features (14). At the site of airway inflammation, Th2 cells secrete large amounts of interleukin (IL)-3, -4, -5, -9 and FIGURE 1 | (A) Mechanism of redox regulation by the thioredoxin (Trx) system. The reduced form of Trx catalyzes the reduction of the disulphide bonds in the target protein. Oxidized thioredoxin is restored to its reduced state by the NADPH-dependent flavoenzyme thioredoxin reductase. (B) Trx1 inhibition of eosinophil activation and chemotaxis. Trx1 can eliminate reactive oxygen species (ROS) produced by eosinophils and directly inhibit the activation of the mitogen-activated protein kinase signal pathway when entering cells. Trx1 regulates Th2 response by inhibiting IL-13 production, which prevents IL-13 from stimulating epithelial cells or fibroblasts to produce eotaxin. In addition, Trx1 blocks the pro-inflammatory effect of the upstream chemokine macrophage migration inhibitory factor, which can directly induce the chemotaxis of eosinophils or promote the production of eotaxin by epithelial cells or fibroblasts to promote eosinophil recruitment. (C) Potential mechanisms of the effects of Trx1 on mast cell degranulation. Crosslinking of the allergen and IgE complex with FcϵRI activates the mast cell degranulation pathway, which then activates Lyn, Syk, Btk, and phospholipase Cg (PLCg). Activation of PLCg eventually activates Ca 2+ and protein kinase c (PKC), which contributes to degranulation. ROS induced in FcϵRI-stimulated mast cells activate mast cells by activating PLCg, Ca 2+ influx, and PKC. Accordingly, Trx1 prevents mast cell degranulation by scavenging ROS. The effective catalytic function of bII tryptase secreted by mast cells depends on the existence of normal disulphide bonds in molecules. The Trx1 system selectively reduces the number of disulphide bonds, which then reduces the catalytic activity of bII-tryptase. -13 and recruit/activate eosinophils, mast cells, and basophils (15). IL-13 is crucial to the pathogenesis of asthma; overexpression of IL-13 significantly induces the occurrence of allergic asthma in a mouse model (16). Additionally, IL-13 induces not only the proliferation of goblet cells (the main effector cells for mucus production in the respiratory tract) but also subepithelial fibrosis which leads to airway remodeling (16,17). Activated eosinophils migrate to the bronchial epithelium and release ROS and eosinophil granulocyte protein, resulting in airway hyper responsiveness (AHR) and epithelial damage that exacerbates the respiratory symptoms (18). The growth and survival signaling induced by ligand/receptor interactions in the airway smooth muscle cells are mediated through ROS (19). 
In addition, macrophage migration inhibitory factor (MIF), an important upstream regulator of airway inflammation, promotes eosinophil differentiation, survival, activation, and migration by binding to CD74 and CXCR4 on the surface of eosinophils (20). In the pathogenesis of allergic asthma, reduced expression of antioxidant genes is also observed, such as Nrf2. Clinical studies have shown that with the reduction of Nrf2 expression in the body, asthma becomes more serious (21,22), and the application of Nrf2 agonists significantly relaxes the bronchi and improves the symptoms (23), suggesting that the occurrence of allergic asthma is not only related to Th2 inflammatory reaction but also has a close relationship with oxidative stress. Clinical treatment of asthma mainly involves b2-receptor agonists, corticosteroids, and aminophylline. Although b2 agonists are currently the largest class of treatment agents for asthma, their use is controversial because of poor clinical reactions and possible life-threatening adverse events. For moderate and severe asthma, combination therapy with inhaled corticosteroids and long-acting b2-agonists is used; however, this combination cannot prevent, reverse, or treat the underlying causes of the disease. Moreover, this treatment requires continuous monitoring for side effects and resistance (24). For instance, aminophylline can often cause adverse reactions, such as palpitations, headache, and vomiting (25). Trx1 is closely associated with asthma. Serum Trx1 levels in patients with acute asthma exacerbation are significantly high and there is a significant correlation between these levels and eosinophil cationic protein (9,26). Exogenous Trx1 treatment can significantly improve AHR and airway inflammation in ovalbumin-sensitized mice (27). Similarly, in a mouse model of chronic asthma, the systemic use of Trx1 significantly inhibited airway remodeling, eosinophil infiltration, and AHR while reducing the expressions of eotaxin (an eosinophil chemokine), macrophage inflammatory protein-1 and IL-13 in the lungs; thus, Trx1 improves pathological changes in the airway to prevent remodeling and asthma development (28), in agreement with a recent report which indicated that Trx1 displayed pronounced protective effects on the manifestation of allergic airway inflammation (29). Trx1 also inhibits Th2 cytokine production by directly downregulating MIF production and indirectly inhibiting eosinophil chemotaxis. Notably, the realization of this process does not depend on the regulation of systemic Th1/Th2 immunity (30). The proliferation of goblet cells that secrete excessive mucus increases the morbidity and mortality of asthma patients; however, Trx1 prevents this proliferation or improves established goblet cell proliferation (31). Trx1 also regulates ARH and airway remodeling by directly reducing the production of intracellular ROS. Additionally, the clinical drug ephedrine may produce antiasthma effects in vivo through inducing Trx1 production (32). The regulation of allergic asthma by Trx1 also involves the Trx1/ Txnip system. Trx1 and Txnip are normally combined into a dimer, and the Trx1/Txnip dimer is separated in the presence of irritant factors such as inflammation. Txnip can activate such signaling pathways as ASK1, Bax, p38, and Caspase3, and induce apoptosis in the lung tissue (33). Moreover, Txnip can also activate inflammasomes to trigger inflammatory reactions (33), all of which aggravate the symptoms of allergic asthma. 
Trx1, on the other hand, inhibits these reactions, thereby maintaining the balance between Trx and Txnip. Overall, Trx1 may be useful for the treatment of asthma and may represent a therapeutic target for asthma control. Allergic Rhinitis The inflammatory process of AR is similar to that of asthma. Many Th2 cells infiltrate the nasal mucosa and release cytokines (e.g., IL-4, IL-5 and IL-13) that promote IgE production by plasma cells. A lot of medical treatment modalities used as a treatment of AR, such as antihistamines, steroids, montelukast, and immunotherapy. However, these therapeutic modalities can fail on some occasions (34). Generally, large-scale production and release of inflammatory cells, including eosinophils and ROS and their metabolites, plays a vital role in the pathogenesis of allergic inflammatory airway diseases (35,36). As an endogenous antioxidant protein, Trx1 has strong antioxidative stress effects. Thus, administration of exogenous Trx1 can inhibit AHR induced by specific allergens via the inhibition of eosinophil accumulation in the airway of mouse models with asthma (27,28). Additionally, a recent study indicated that the concentration of Trx1 in nasal tissues is significantly decreased in patients with chronic rhinosinusitis with nasal polyps and a connection between elevated ROS levels and decreased levels of Trx1 has also been observed (37). Quercetin has been suggested as a dietary supplement for improving the clinical symptoms of allergic diseases, such as AR, but its precise mechanisms of action remain unclear. Nevertheless, Trx1 levels in the nasal mucosa significantly increase after oral administration of quercetin; moreover, the frequency of nasal allergy-like symptoms, such as sneezing and nasal rubbing, are significantly reduced (38). These changes provide insights into the possible mechanism underlying the favorable effects of quercetin on AR. Food Allergies Food allergies are caused by aberrant immune responses towards food antigens; these responses are skewed towards Th2 responses associated with IL-4, IL-5, and IL-13. Current treatments for IgEmediated food allergies are largely confined to the avoidance of the suspected allergens, antihistamine treatments, and corticosteroid therapies with low efficacy and several side effects. Food allergen immunotherapy induces desensitization and promotes permanent immune tolerance to food allergens by gradually increasing exposure to the allergens (39); however, the incidence of adverse reactions is high and is a long-term treatment process (40). Trx1 treatment has been effective against food allergies in previous studies. For example, the application of Trx1 significantly reduced allergic reactions in a wheat allergy dog model subjected to skin test. Therefore, Trx1 potentially reduces wheat sensitization by reducing the number of disulphide bonds in the major protein allergens of wheat (41). Similarly, Trx1 reduces the number of disulphide bonds in b-lactoglobulin, an allergen in bovine milk. The disulphide-reduced protein shows increased sensitivity to pepsin digestion and decreased hypersensitivity in vivo (42). Additionally, a Trx1-treated saltsoluble wheat allergen was shown to reduce IgE binding in children with asthma (43). Consistent with these results, active systemic and passive cutaneous anaphylaxis testing on guinea pigs showed that yeast extract rich in Trx1 significantly reduced egg mucin-induced anaphylaxis. 
It was hypothesized that the anti-allergic activity of Trx1 itself plays a role in these effects (44). Accordingly, Trx1-rich yeast extract can potentially be used to ferment foods, such as alcoholic beverages and bread. Recently, recombinant Trx1 rice has been shown to improve blactoglobulin digestion and decrease its allergenicity, thereby improving the feasibility and practicality of its large-scale application; a plant Trx system would be more cost-effective than those of Escherichia coli or animals (45). Drug Allergies Drug allergies (DAs) can be IgE-or non-IgE mediated. Some drugs, such as anesthetics, antibiotics, nonsteroidal antiinflammatory drugs and codeine, are associated with a carrier protein through a prototype or its metabolite (46). Binding of cell-bound IgE molecules activates mast cells and releases various factors, such as histamines, leukotrienes, prostaglandins, and cytokines, which can cause extensive tissue damage. Trx1 is a stress-induced redox regulatory protein in vivo; thus, it inhibits histamine release by eliminating ROS in mast cells (47). The mechanisms of DAs may often be associated with non-IgEmediated complement activation. Indeed, Trx1 is known to inhibit the activation of the complement cascade at different stages, e.g., suppressing C3 cleavage and C5 convertase activation (48,49). The functions of Trx1 in mast cells and the complement system are described in section 3.5 "Suppression of Complement Activation". Contact Dermatitis Contact dermatitis is a common inflammatory skin disorder that is usually characterized by alternating relief from and deterioration of symptoms, but it can be persistent at times (50). Contact dermatitis can be categorized as irritant contact dermatitis (ICD), a non-immunologically driven inflammatory reaction to an irritating substance, and allergic contact dermatitis, a type-IV delayed-type hypersensitivity reaction resulting from the activation of allergen-specific T cells, i.e., a second exposure to the allergen resulting in circulating memory T cells homing to the skin and eliciting an immunologic reaction that causes skin inflammation (51). Topical corticosteroid treatment is typically the first-choice treatment for contact dermatitis. However, corticosteroids are not suitable for longterm use because of multiple side effects, such as skin atrophy, telangiectasia, dermatoglyphics, and pigmentation. Oxidative stress is known to play a key role in contact dermatitis inflammation. In particular, ROS participate in dendritic cell activation (52). In addition to being an endogenous redox regulatory protein, Trx is an effective ROS scavenger (53). Because ROS regulate the function of dendritic cells that function in the sensitization phase of contact hypersensitivity, transgenic overexpression and systemic administration of exogenous Trx1 can suppress skin inflammation by inhibiting neutrophil recruitment during the elicitation phase, but not during the induction phase, in mice treated with 2,4dinitrofluorobenzene (54). Transgenic overexpression of Trx1 and systemic administration of exogenous Trx1 can prevent the cutaneous inflammation caused by UV radiation through regulating the cellular redox status and ROS scavenging (55). We previously demonstrated that Trx1 ameliorates ICD by inhibiting epithelial production and releasing inflammatory cytokines and chemokines (56). Existing research suggests that Trx1 can be used to treat contact dermatitis; however, its exact therapeutic mechanism requires further clarification. 
Eliminating ROS and Maintaining Redox Balance Trx1 can directly remove ROS produced in inflamed tissues and help maintain redox balance. Mitsui et al. showed that Trx1 transgenic mice had strong resistance to oxidative stress and a longer life span compared with wild-type (WT) animals (57). Compared with the Trx1 system, the intracellular redox system has similar antioxidant mechanisms, such as the glutathione and peroxidase systems, which defend against oxidative stress. The Trx1 and glutathione systems act as backup systems to provide electrons for each other, i.e., the two systems protect cells from oxidative damage synergistically (58,59). In addition, Trx1 is required to provide electrons when peroxidase is used to reduce ROS in organisms (60). Thus, Trx1 plays a key role in the balance of multiple redox systems in the body, and it coordinates the normal operation and function of these systems. Moreover, Trx1 is also a downstream target molecule for the activation of many redox signals. For example, in the case of mitochondrial redox imbalance, Nrf2 signal that is important for redox is activated (61). Nrf2 signaling activates some specific targets, including Trx1 and glutathione (62). In the allergic state, expression of Trx1 can be induced to reduce the damage caused by excessive ROS. Simultaneously, the Trx1 system restores and refolds oxidized and damaged proteins. Consequently, Trx1 likely plays an important protective role against allergic inflammation. Inhibition of MIF Human MIF, a member of the Trx1 family of proteins that displays thiol reductase activity, was first cloned from T cells in 1989 (63). It shows inhibitory properties against the migration of macrophages and plays an essential role in cellular immunity, particularly in delayed-type hypersensitivity (64). MIF is largely considered a pleiotropic inflammatory medium with a wide range of immunoregulatory and pro-inflammatory activities, including the induction of inflammatory cytokines, regulation of macrophage and lymphocyte proliferation, and functions similar to those of chemokines (64,65). Furthermore, MIF is directly involved in eosinophil differentiation, survival, activation, and migration (20). MIF shares the redox-active motif -Cys-Xxx-Xxx-Cys-with Trx1 (66). It has sulfhydryl reductase activity and direct redox reactions with Trx1 (67). Several preclinical studies using animal models have shown that Trx has beneficial MIF-related functions against various inflammatory diseases. For example, the serum MIF level of Trx1 transgenic mice was significantly lower than that of WT mice in a dextran sodium sulphate-induced colitis mouse model (68). In mice with systemic inflammatory reactions from smoking, MIF gene expression in the spleens of Trx1 transgenic mice was inhibited compared with the expression levels in control mice (69). Using a mouse model of asthma, Torii et al. determined that MIF production in the lungs of Trx1 transgenic mice was significantly reduced despite similar systemic Th2 responses and IgE concentrations, indicating that Trx1 can suppress airway inflammation by directly inhibiting MIF independent of systemic Th1/Th2 immune modulation (30). In vitro studies have provided evidence on the strong anti-MIF effect of Trx1. For instance, the production of MIF in macrophages cultured with LPS and IFN-g was significantly inhibited by Trx1 (68). 
MIF expression is also suppressed in Trx1-transfected cells (70), and topically applied exogenous Trx1 suppresses the expression of MIF in ICD skin tissues (56). Additionally, MIF can enter cells to induce a series of inflammatory reactions, and cell surface Trx1 is one of the target proteins for MIF internalization. Specifically, Trx1 on the cell surface binds to extracellular MIF with high affinity and blocks MIF internalization. Exogenous and intracellular Trx1 can also directly bind to MIF, thereby forming a complex that blocks the MIF-induced inflammatory response (71). Regulating the Th1/Th2 Immune Balance In the cell microenvironment, the proliferation and differentiation of Th1/Th2 cells are affected by various factors, such as cytokines, antigen properties, T cell receptor signal intensity, antigen-presenting cell types, and costimulatory molecules (72-74). In addition to these external factors, cell redox status is considered to play a role in Th cell differentiation. T cells have a limited capacity for cystine uptake and require exogenous thiols for their activation. During antigen presentation, after dendritic cells interact with T cells, the former generate and release Trx1, which reduces extracellular cystine to cysteine used by T cells; thus, the normal proliferative ability of T cells as well as an effective immune response are maintained (75,76). Trx1 also controls the redox state of cell surface receptors, such as CD4 and CD30, and thereby affects the behavior of T cells (77,78). When Th2 cytokine responses increase, Trx1 induces the expression of Th1-like cytokines, such as IL-1α, IL-1β, IL-1Ra, and IL-18, which in turn suppresses Th2-like cytokine expression (27). In recent studies, Trx1 has been confirmed as a specific target gene induced by the cytokine IFN-γ that directly drives the Th1 immune response (79,80). Indeed, IFN-γ promotes Th1 differentiation and downregulates the Th2 response (81). Exogenous Trx1 can induce the expression and release of IFN-γ in Th1 cells, and the increased IFN-γ level in turn increases the Trx1 level. The intracellular Trx1 of IFN-γ-activated macrophages increases the secretion of the Th1 cytokine IL-12 by regulating the thiol redox state. Given the mutual induction and promotion of Trx1 and IFN-γ by immune cells during oxidative stress, a positive feedback mechanism could exist between Trx1 and IFN-γ as they participate in stimulating Th1 immunity (80). In addition, hTrx1 can bind to Dectin-1 and/or Dectin-2 on antigen-presenting cells to induce the secretion of IL-1β and IL-23, which influence the Th2/Th17-polarizing milieu during allergic sensitization in the skin (82). Recently, IL-4 has been identified as a new target of Trx1; specifically, its activity can be selectively suppressed by Trx1 (83) (Figure 2); thus, the production of IgE by B cells may also be effectively blocked. However, Trx1 does not directly affect the proliferation and differentiation of Th1/Th2 cells; instead, it suppresses inflammation by regulating the production and release of Th1/Th2 cytokines: lymphocytes isolated from Trx1-transgenic (Trx1-Tg) mice are similar to those from WT mice in terms of their ability to produce Th2 cytokines, such as IL-4, IL-5 and IL-13, once they leave an in vivo environment with high Trx1 (30).
FIGURE 2 | T cells cannot ingest cystine and rely on antigen-presenting cells (e.g., dendritic cells) to provide cysteine for them. Dendritic cells convert extracellular cystine into cysteine through Trx secretion, thereby promoting the proliferation of activated T cells. A positive feedback mechanism exists between Trx and IFN-γ wherein Trx1 induces the expression and release of IFN-γ in Th1 cells and the increased IFN-γ level increases Trx1 levels in turn. IFN-γ-activated intracellular Trx1 of macrophages increases Th1 cytokine IL-12 secretion by regulating the thiol redox state. Furthermore, Trx1 selectively inactivates the cytokine activity of IL-4 and inhibits the Th2 immune response.

A recent report showed that regulatory T cells (Tregs) play an important role in maintaining immune tolerance to allergens by inhibiting type 2 immune cells and inducing tolerogenic dendritic cells, regulatory B cells and IgG4-producing B cells in allergic disease (84). Increased Trx1 in Tregs enhances tolerance to oxidative stress (85). Thus, Trx1 may prevent the occurrence and progression of Th2-driven allergic inflammatory conditions by adjusting the Th1/Th2 balance. In response to various cytokines released by Th2 cells, such as IL-4, IL-5 and IL-13, many inflammatory cells are activated, which in turn elicit an inflammatory response that leads to the clinical symptoms of allergic disease. The main effector cells in this complex immune response are eosinophils, mast cells, and neutrophils. A number of studies have shown that Trx1 directly modulates these cells through various mechanisms. Next, we focus on the role and mechanisms of Trx1 in these inflammatory cells in allergic reactions. Eosinophils Excessive proliferation and infiltration of eosinophils is generally considered a marker of allergic inflammation. Trx1 inhibits the migration and activation of eosinophils by regulating the extracellular Th1/Th2 balance, cellular signaling pathways, and molecules that interact with eosinophil-produced cytokines. In allergic asthma, Trx1 inhibits eosinophil accumulation by inducing Th1 cytokine production and suppressing Th2 cytokine production (27). Low expression of MIF in the airways of Trx1-Tg mice significantly inhibits eosinophil aggregation and mucus metaplasia (30). Additionally, MIF can directly induce the production of eotaxin to promote eosinophil chemotaxis (86); however, as described previously in the text, Trx1 can bind to MIF inside and outside the cells to block its internalization and pro-inflammatory activity. The effects of eotaxin, an eosinophil chemotactic chemokine, are mediated by the C-C chemokine receptor type 3 (CCR3) on the surface of eosinophils (87). Incubation of eotaxin-stimulated eosinophils with Trx1 significantly decreased the activation of the eotaxin-stimulated ERK1/2 and p38 MAPK pathways (88-90); however, Trx1 does not affect CCR3 expression in eosinophils. Thus, chemokine-induced eosinophil migration is apparently attenuated by regulation of the downstream signaling of CCR3. In addition, intraperitoneal Trx1 injection significantly reduces the overproduction of MIP-1α and IL-13, which are closely related to eosinophil chemotaxis in the lungs (28). In vitro studies have confirmed that Trx1-overexpressing human bronchial epithelial cells can be protected from eosinophil-induced damage (91). Furthermore, Trx1 directly suppresses the production of ROS in eosinophils (92). Overall, Trx1 exerts anti-allergic effects by regulating eosinophil activation and migration (Figure 1B). Mast Cells Mast cell activation plays an important role in various immediate allergic diseases. ROS function in FcϵRI-mediated degranulation of mast cells (93,94), and several ROS are generated during FcϵRI-mediated activation of mast cells. Thus, blocking the production of intracellular ROS can prevent the release of FcϵRI-mediated allergic mediators from rat mast cells (95). Son et al. stimulated mast cells from WT and Trx1-Tg mice with the antigen DNP-bovine serum albumin. The levels of histamine secreted by mast cells from Trx1-Tg mice were significantly reduced compared with those in WT mice, and the levels of intracellular ROS suggested that Trx1 inhibits mast cell degranulation by blocking ROS production (47). As the underlying mechanism, ROS mainly activate phospholipase Cγ (PLCγ), protein kinase C (PKC) and Ca2+ influx to cause mediator release (94,96), whereas Trx1 effectively inhibits PLCγ, PKC, and Ca2+ influx in the signal transmission of ROS-activated FcϵRI-dependent mast cells (Figure 1C). βII-tryptase is one of the most abundant proteins stored and released by mast cells, and it participates in various acute and chronic allergic processes. It is commonly noted in patients with asthma and AR (97,98). The redox activity of the allosteric disulphide bond (Cys220-Cys248) in βII-tryptase plays an essential role in its enzyme activity, and Trx1 is a βII-tryptase-reducing agent in vivo; it can selectively reduce this disulphide bond and potently reduce the catalytic activity of βII-tryptase in the reduced state (99) (Figure 1C). Neutrophils Neutrophil recruitment is an important step in the pathogenesis of allergic sensitization and inflammation (100). Trx1 has an obvious inhibitory effect on the binding of neutrophils to vascular endothelial cells. Nakamura et al. found that Trx1 can inhibit the adhesion of neutrophils to endothelial cells in a mouse air sac chemotactic model (101). CD62L is an important adhesion molecule that is expressed and released by neutrophils, and it plays a key chemotactic role in neutrophil adherence to the vascular endothelium and blood vessel penetration (102). Specifically, exogenous Trx1 acts directly on neutrophils, inhibiting the activation of the p38 mitogen-activated protein kinase (MAPK) signaling pathway, which causes the downregulation of CD62L in neutrophils and ultimately reduces the adhesion of CD62L to endothelial cells; we previously explained its specific mechanism of action (103). Additionally, C32S/C35S mutant Trx1, carrying a mutation at the redox-active site, cannot inhibit the adhesion of neutrophils to human umbilical vein endothelial cells, indicating that the redox site of Trx1 is necessary for the inhibition of neutrophil adhesion (101). Moreover, in an LPS-induced bronchial inflammation rat model intravenously injected with 8 mg/kg of Trx1 every day, neutrophil infiltration into the bronchial and lung tissues was significantly reduced (104). Although adhesion molecules, such as ICAM-1, expressed by endothelial cells play important roles in neutrophil extravasation, Trx1 does not alter the expression of such adhesion molecules in these cells (101,104). Therefore, Trx1 can inhibit chemokine-driven neutrophil recruitment and may play a unique role in neutrophil exudation in allergic inflammation.
Suppression of Complement Activation Excessive complement activation has been implicated in the pathogenesis of allergic inflammatory disorders, such as IgEindependent DAs, and the increased production of the anaphylactic toxins C3a and C5a contributes to the activation of mast cells or basophils, vasodilation, and smooth muscle contraction. Transgenic overexpression of Trx1 in vivo or exogenous Trx1 injection can reduce choroidal neovascularization formation in laser-injured mouse models, which is closely associated with the complement activation of the Trx1 inhibition alternative pathway (48). Complement factor H, a multidomain and multifunctional protein, functions within the negative feedback that occurs during complement alternative pathway activation. It competes with factor B for C3b binding and accelerates the degradation of C3 convertase into its component (105). Trx1 inhibits C3 cleavage into C3a and C3b in a dose-dependent manner and prevents the deposition of C3b, and it inhibits the activation of C3b and reduces the generation of C3 convertase by binding to complement factor H; thus, it enhances the inhibition of C3 cleavage by complement factor H (48). Moreover, Trx1 inhibits the activation of C5 convertase through its active site, thereby preventing the production of C5a and the formation of the membrane attack complex (49) (Figure 3). The deposition of C5b and C9 is also inhibited by Trx1 in a concentration-dependent manner in all three pathways during their early stages; however, Trx1 does not inhibit the deposition of non-allergic toxin C3b, which has a conditioning effect on bacteria and promotes phagocyte phagocytosis (106). C5a shows strong chemotactic activity in neutrophils and stimulates them to produce a large amount of oxygen free radicals, prostaglandins, and arachidonic acid. When Trx1 is intravenously injected into mice, complement-mediated neutrophil recruitment is significantly inhibited (48,49). Therefore, blockage of complement activation by Trx1 may represent a therapeutic target for relieving IgE-independent allergic inflammation. TRX1 IMPROVES GLUCOCORTICOID RESISTANCE Glucocorticoids (GCs), which stabilize mast cells to prevent degranulation and exert broad anti-inflammatory effects by binding to glucocorticoid receptors (GRs), are recognized as an effective first-line therapy for allergic diseases. Notably, GCs interfere with the division and proliferation of systemic lymphoid tissues under the action of antigens, affect the metabolism of lymphocytes, and induce lymphocyte apoptosis. Therefore, long-term administration of GCs attenuates host immunity to specific antigens and leads to the inhibition of the immune response to pathogenic microorganisms. In previous studies, we have shown that the antiinflammatory and anti-allergic effects of Trx1 may inhibit host immunity, which is in contrast to the effects of corticosteroids (47,54). Long-term use of GCs can cause GC resistance or insensitivity, which is a major obstacle in the treatment of allergic diseases. MIF can be induced by GCs and it enhances GC resistance (107); specifically, it impairs GC sensitivity via MAP kinase phosphatase-1 (MKP-1) inhibition (108). MKP-1 is an important MAPK signal inhibitor that is induced by GCs and mediates GC inhibition of ERK, JNK, and p38 MAPK activities as well as cytokine production induced by pro-inflammatory stimuli, such as LPS or IL-1 (109)(110)(111). 
MIF has been shown to downregulate GC-induced leucine zipper (GILZ) expression through a unique set of effects on transcription factor expression and phosphorylation. Notably, MIF-induced regulation of MKP-1 and MAPK activation is mediated through GILZ (112). Furthermore, MIF affects the NF-κB/IκB signal cascade, leading to accentuated inflammation and GC resistance (113). Trx1 can bind to GR and enhance the response of cells to glucocorticoids (114). In addition, it can bind directly to MIF inside and outside the cell (71). Thus, Trx1 represents a potential intervention target in the balance between GCs and MIF (Figure 4A).

CONCLUDING REMARKS Trx1 induction is considered an effective compensatory protective mechanism through which damaged tissue proteins are reduced or repaired. Trx1 exerts anti-inflammatory effects on a wide variety of inflammatory disorders. In this review, we summarized the available data on Trx1 and highlighted a variety of mechanisms underlying its beneficial effects against allergic inflammation. Trx1 improves GC resistance, thereby acting as a promising therapeutic target both as a supplement to existing treatments for allergic diseases and for patients with hormone intolerance. In addition, it has been reported that increased levels of Trx1 in plasma or serum are correlated with the progression of diseases, especially allergic asthma. Thus, Trx1 may also serve as a potential diagnostic marker and be useful in prognostic assessments. A variety of protein expression systems, including yeasts, lactobacilli, algae and plant cells, have been developed with anti-allergic and anti-inflammatory activities that are comparable to those found for purified recombinant human thioredoxin (rhTrx); thus, feasible sources for the production of thioredoxin protein currently exist. In future translational research focused on Trx1, it will be essential to conduct human studies. Importantly, clinical trials are now ongoing in which rhTrx1 is being administered to patients with atopic dermatitis, and trans-tracheal inhalation experiments with rhTrx1 are being performed; Trx1 is showing good efficacy with no major side effects (unpublished data). Finally, we suggest that Trx1 will be an important potential target for anti-allergic and anti-inflammatory drug development in the future (Figure 4B).

AUTHOR CONTRIBUTIONS JW, JY, and HT were involved in the conception and writing of the manuscript, JZ, CW, AF, and SL contributed to literature searches and extensive discussions, and all authors agreed to publish the paper.

FIGURE 3 | Potential mechanism of thioredoxin-1 (Trx1) inhibition of complement activation. Serum Trx1 inhibits C3 cleavage in the alternative pathway alone and enhances factor H (FH)-induced inhibition of C3 cleavage by combining with FH, which reduces C3a levels and C3b deposition. In contrast, Trx1 on the surface of endothelial cells or serum Trx1 blocks the production and deposition of C5b by inhibiting C5 convertase activity in the three complement terminal pathways; moreover, C9 deposition is inhibited. At the same time, Trx1 inhibits the production of the anaphylactic toxin C5a, which reduces the chemotaxis of neutrophils.

FIGURE 4 | (A) Thioredoxin-1 (Trx1) improves glucocorticoid (GC) resistance through macrophage migration inhibitory factor (MIF). MIF impairs GC sensitivity via the inhibition of MAP kinase phosphatase-1 (MKP-1). MKP-1 is induced by GC to mediate GC inhibition of ERK, JNK, and p38 MAPK activities as well as cytokine production.
MIF inhibits GC-induced leucine zipper (GILZ) expression through unique effects on the expression of transcription factor and phosphorylation. MKP-1 and MAPK activation are regulated by MIF via GILZ. MIF also affects the NF-kB/IkB signal cascade. Trx1 may directly bind to GC receptor and enhance the response of cells to GCs. Both intracellular and extracellular Trx1 bind to MIF and form a heterodimer to prevent MIF entry into cells and MIF-induced GC resistance. (B) Potential clinical applications of Trx1 in allergic diseases. Administration of Trx1 suppresses the excessive allergic inflammatory response. Future clinical applications of Trx1 could include treatment of asthma or allergic rhinitis with a Trx1 inhaler, topical application for patients with contact dermatitis, and/or oral delivery for those with food allergies. It may also be promising to combine Trx1 with corticosteroids. Finally, Trx1 could potentially be administered as an intravenous injection.
Bilateral Femoral Nutrient Foraminal Cement Penetration during Total Hip Arthroplasty. INTRODUCTION Cement pressurisation is important for the insertion of both the acetabular and femoral components during Total Hip Arthroplasty (THA). Secondary to pressurisation, the rare phenomenon of unilateral cement incursion into the nutrient foramen has previously been reported. No bilateral case has been reported to date. This has implications both for misdiagnosis of periprosthetic fractures and for medico-legal consequences due to a presumed adverse intra-operative event. CASE REPORT We present a case report of a 59 year old, Caucasian female who underwent staged bilateral cemented Stanmore THA. The post-operative radiographs demonstrate evidence of bilateral nutrient foramen penetration intra-operatively by standard viscosity cement. The patient suffered no adverse consequences. CONCLUSIONS In summary, cement extravasation into the nutrient foramen is an important differential to be considered in the presence of posterior-medial cement in the diaphysis of the femur following THA. This requires no further intervention and has no effect on the outcome.

Cement pressurisation is important for the insertion of both the acetabular and femoral components during Total Hip Arthroplasty (THA). Secondary to pressurisation, the rare phenomenon of unilateral cement incursion into the nutrient foramen has previously been reported. No bilateral case has been reported to date. This has implications both for misdiagnosis of periprosthetic fractures and for medico-legal consequences due to a presumed adverse intraoperative event. We present a case report of a 59 year old, Caucasian female who underwent staged bilateral cemented Stanmore THA. The post-operative radiographs demonstrate evidence of bilateral nutrient foramen penetration intra-operatively by standard viscosity cement. The patient suffered no adverse consequences. In summary, cement extravasation into the nutrient foramen is an important differential to be considered in the presence of posterior-medial cement in the diaphysis of the femur following THA. This requires no further intervention and has no effect on the outcome. … cortical breach. These appearances may deceive the orthopaedic team into reducing or eliminating the weight put through the leg, thereby prolonging patient rehabilitation. Cement penetration of the nutrient foramen can have a presentation similar to an iatrogenic breach and should be considered as a differential.

Patient (MC) was a 59 year old female presenting with bilateral hip osteoarthritis. She underwent right-sided cemented Stanmore THR. The hip joint was exposed through the posterior approach. The femoral cavity was prepared, cleaned using pulse lavage and brushing, and dried, and a size 12.5 mm cement restrictor (cement plug, JRI) was placed two centimetres distal to the tip of the femoral component. After three to four minutes of polymerisation, standard viscosity cement (Refobacin, Biomet) was introduced into the femoral cavity using 4th generation cementing techniques. A retrograde technique was employed, with a suction catheter placed distally in the initial cementation period, and a proximal cement pressurisation adapter for the cement gun was used. It was apparent that the gun (Stryker UK cement gun) nozzle abutted the endosteum closely. No untoward intraoperative events were noted and the patient returned to the ward with no adverse features in the post-operative course. A check X-ray taken two days post-operatively demonstrated significant cement extrusion from the posterior-medial aspect of the femoral diaphysis, approximately 2 cms (26.6 mm) proximal from the stem tip, and 17 mm extrusion into the soft tissues (Fig. 1 & 2, AP and lateral of the proximal femur; measurements took account of radiographic magnification). The patient had no adverse pain on mobilisation. A CT scan was requested, which showed cement extrusion outside the femur cortex (Fig. 3). Given no report of pain on mobilisation and the absence of a definitive fracture line, the cement extravasation was attributable to pressurisation through the nutrient foramen. Three months later the patient attended for contra-lateral surgery and underwent an identical procedure to the first hip. A similar, but not identical, X-ray appearance was noted (Fig. 4), with 8.5 mm cement extrusion out into the soft tissues and 4 cms (41 mm) cement extrusion from the tip of the prosthesis. The patient was happy with the post-operative result and continued to make an uneventful and full recovery.

Conclusion Factors most likely to result in cement extravasation into the nutrient foramen include a less oblique and wide foramen and factors associated with the cement itself, such as high pressure. Our bilateral case was a female measuring 145 cm. Patient size associated with a narrow femur and the ability of the cement gun to occlude the medulla may increase local pressurisation considerably. It is noteworthy that of the 19 cases reported in the literature [1-5], 16 have occurred in females, hence it is reasonable to assume that a female preponderance does indeed exist. Gaucher's disease and β-thalassaemia have both been associated with enlarged nutrient foramina in phalanges [7] but no association is reported with regards to the femur. The patient was tested and found to be negative for these conditions. The anatomical location of the nutrient artery has been proven to be relatively consistent [8]. Given cement extrusion at this level, the diagnosis of an iatrogenic cortical breach is unlikely. Some authors have suggested that morphological features of the extra-diaphyseal cement may help in differentiating vascular cement infiltration from cement extrusion secondary to fracture [3]. The appearances of both a thin line and a localized cement mass have been reported in association with this phenomenon [5]. The literature supports the view that the long-term clinical implications of cement extrusion into the nutrient foramen are minimal [1-4]. Weismann felt the relationship between cement in the nutrient vasculature and clinical symptoms was less clear. The veterinary literature contains a study of radiographically diagnosed medullary infarction secondary to THR and relates this to nutrient vascular compromise [9]. In summary, cement extravasation into the nutrient foramen is an important differential to be considered in the presence of posterior-medial cement in the diaphysis of the femur following total hip replacement.
Design of Teaching Methods Using Virtual Educational Environment The purpose of the presented article is the review of approaches to design of modern methods of training developed and creation of design technology option of training methods in the conditions of using the virtual educational environment for formation of common cultural and professional competences of students of majoring in pedagogical education. Within the prescriptive theory by means of a subject and design method, conceptual modeling of a set of methods of students’ training in the conditions of using information-communication saturated environment was carried out. The modeling allowed one to allocate design stages of methods of students’ training in a pedagogical field of a “modern” educational paradigm when using the virtual educational environment. The general approach to design of activity of teachers including design of training methods on the basis of the accounting of main structural components of educational technologies is reflected in presented results. The given results of research of features and opportunities of the virtual educational environment allow one to define essence of enrichment of training methods and mechanisms of self-adjustment and self-improvement of the system of training methods in information educational environments, and to formulate a conclusion about impossibility of creation of the modern educational process without the virtual educational environment. Introduction Within formation and development of structural components of common cultural and professional competences of students of the pedagogical higher education institutions provided by federal state standard of higher education there is actual a question of formation technological literacy of future teachers.In addition, according to recommendations of UNESCO, it is relevant to develop students' ability to help pupils in using the information and communication technologies (ICT) for the organization of their successful cooperation, the solution of educational tasks, development of skills of the doctrine [1] in the conditions of ICT-saturated educational environment of a modern school. In the specified conditions of pedagogic and methodicalstudies conducted (for example, [2][3][4][5][6][7][8][9] and others), they are connected with studying features, opportunities and conditions of using the information educational environment of educational institution, personal educational environments, the virtual educational environment. The need to use information educational environments receives theoretical justification in educational process of many higher education institutions, in practical use of learning management systems, cloudy services and other information educational environments for the solution of educational and organizational tasks extends. Studies of questions of creating a technique of using information educational environments, training techniques with personal educational environments ( [10], [11]), essence and features of a modern (information and communication) method of training [10], classification and design of methods of training in the conditions of using the educational environment were conducted as a result [10]. As this direction of pedagogical research is new in a depression phase (T. 
Kun terminology, 1964), a "modern" educational paradigm, predictingthe lack of a uniform approach to creation of the system of methods of training in information environments, distinction in understanding of a ratio of methods and forms of educational activity, essence of a training technique in conditions of using virtual educational environments, is observed. The option of activity of the teacher of design of methods of training when using virtual educational environments is offered within a contradiction between the need for creation of a training technique of students in conditions of using virtual educational environments for formation of common cultural and professional competences and ambiguity of understanding the structure of this technique (including features of the system of methods of training). Besides,there is also absence of the technology of designing the training methods of a "modern" educational paradigm in the considered materials. Methodology, objectives and research design The purpose of the presented article is the review of approaches to design of modern methods of training developed and creation of the option of technology of design of methods of training in the conditions of using the virtual educational environment for formation of common cultural and professional competences in students majoring in pedagogical education. Opening research methodology, we will indicate the need disclosures of essence and opportunities of use in educational process of the virtual educational environment, specification of a conceptual framework and allocation of features of methods of training of students in the virtual educational environment, the analysis of the existing technologies of design of methods of training of students. So, for justification of the offered technology of design of methods of training, we will begin with consideration of essence and opportunities of use an educational process of the virtual educational environment. Under the virtual educational environment (VEE), according to modern researches (in particular), we will understand network communication space in which the organization of educational process, its methodical and information support, documenting, interaction between all subjects of educational process, and also management to themare provided. Despite the lack of the uniform approach to allocation of VEE structure, it is expedient to point to the interrelation of its following components specified by many authors: personal educational environments of the students (projected independently), personal environments of training of teachers, global and local networks, a learning management system -LMS, cloudy services. Use of opportunities of all set of the allocated components from the point of view of researchers promotes formation (development) of independence and activity trained, to increase of sensibleness of process of knowledge, formation of professional and common cultural competence. In one of classifications of virtual educational environments (which basis its accessory is) there is the personal teaching environment (PTE) and the personal learning environment (PLE). Commenting on features of each, the allocated environments, we will specify that, being guided by modern hardware decisions, within creation of PLE, it is supposed to create the Internet virtual space for an exchange and storage of educational information, ensuring communication, planning of activity, collecting and storage of results of training. 
For organization of such space, it is expedient (according to Starichenko B. [11]) to use the cloudy technologies realized on the Internet, and also means of the Web 2.0 services. Thus environment is built and developed by the trainee, including in it all components which are required for him for development of educational programs -substantial, tool, communication and other. Significant argument in favor of such environment is possibility of its development and use after graduation from the educational institution that provides practical support of the concept of the distributed continuous training during all life [12]. The personal teacher environment is formed by the teacher by a choice of network services and tools necessary for it and creation of the blog of discipline, in which work all allowed persons can take part. Certainly, the teacher has opportunity to place among all necessary training materials or links to them and necessary cloudy tools. Thus in PTE, the idea of creation of thematic network community that possesses motivational appeal to students is realized. The virtual educational environment made of two specified environments in organizational and communicative aspect (according to [9]) represents difficult self-adjusted (due to expeditious correction of actions of participants of process of communication in relation to the changing situation) and self-improving (due to establishment of effective interrelation and its improvement in the process of assimilation of more difficult types of interrelations), the communicative system providing communication between participants of educational process. It is expedient to consider the allocated possibilities of the virtual educational environment when designing a technique of training in the ICT-saturated environment and creating a system of methods of training in the conditions of using VEE which, from our point of view, is enriched and updated in each phase of development of a "modern" paradigm. We will specify a conceptual framework and we will mark out features of methods of training in the virtual educational environment. Expansion of information educational space leads to emergence not only new means of ICT, but also to updating of all didactic system of training including, first of all, the new methods of training allowing to solve new didactic problems or tasks which earlier (without use of means of virtual environments) it would be impossible to solve completely theoretically or practically. With the advent of new methods of training (in particular, the research Semenova I. [13]), it is expedient to speak and about new methods of use of the virtual educational environment in educational process. We will specify a difference between the concepts "training methods with use of VEE" and "methods of use of VEE in training" in the allocated context. The training method with use of the virtual educational environment is a set of joint actions of the teacher and trainees on the organization of an exchange of educational information and management of her perception, understanding, storing and the correct application by means of the information and communication means which are (and included trained) a part of VEE. The method of using the virtual educational environment in training isa set of actions of the teacher (a choice of forms and ways of transfer of educational information, modeling of educational process, etc.) 
with use of information and communication means for achievement of the didactic purposes according to the diagnosed psychology and pedagogical situations. At allocation of the provided formulation we differentiate the concepts "method of use of VEE by the teacher in training" and "method of use of VEE trained in the exercise". So method of use of VEE trained in the exercise is an activity trained, based on means of the virtual educational environment, and undertaken by it for the solution of informative and (or) educational tasks. In addition to the formulated definitions we will offer the following interpretation of the term a method of training by use of VEE: actions of the teacher and trained on broadcasting, processing and assimilation of a training material about VEE and its potential for the solution of educational and informative tasks. In this way VEE acts as a training subject which educational purpose isn't reduced only to development of a technological component, and includes also formation of skills of research of the communicative and developing VEE opportunities. The accounting of the told allows to speak about selfadjustment and self-improvement of methods of training of students not only at the expense of a various range of the used tutorials which are (and included by the student) a part of VEE, but also at the expense of the allocated didactic opportunities of VEE. We will carry out the short analysis of the existing technologies of design of methods of training of students. For disclosure of essence of design of methods of training in traditional educational model we will address, for example, to M.E. Bershadsky and V.V. Guzeev's research where, in particular, the didactic bases of development of educational technology are presented and traditional methods of training are allocated: model, explanatory and illustrative, heuristic, programmed, problem (Bershadsky M. and Guzeev V. [2]). The choice of a concrete method of training, from the point of view of the allocated authors, depends on answers to the following questions: -whether it is necessary to actualize entry conditions at the beginning of educational occupation? -whether it is necessary to formulate intermediate tasks during work on material of educational occupation? -whether to offer ready ways of the solution of intermediate tasks or to provide to carry out a choice of a way trained independently? -whether to show trained ready algorithms of the solution of total tasks or to allow them to make an independent choice of a way of the decision? Answers to these questions also reflect logic of design of methods of training in "scientific" (the term Semenova I. [13]) an education paradigm. In training model with use of ICT the choice of a method of training can be carried out with a support on new approaches to classification of modern methods of training. For example, possible design stages (choice) of a method of training with use of ICT within ideology of computer didactics can look as follows: 1) formulation of the didactic purpose (choice of target category); 2) selection and correlation of the making actions with features of informative processes as activity (so, for example, educational and informative tasks for application of cognitive processes of visual perception, spatial imagination, and also cogitative operations of the analysis, synthesis and classification etc. 
are necessary for formation of subject knowledge); 3 Results and discussion As it was stated above, use of the virtual educational environment in educational process has the substantial, methodical and technological features which need to be considered at design of methods of training and methods of its use. We will present the option of design of methods of training developed in specialization of the results received by us in the conditions of allocation of feature of the virtual educational environment with a support on the main design stages of educational technology including: -diagnostics and self-diagnostics of the level of the academic progress of the trainee, their psychophysiological features, educational requirements and professional interests, and also creation and accumulation of the information base containing the diagnostic data allowing to judge dynamics of development of competences and competencies of students; Design of educational activity in conditions of using the virtual educational environment Whether it will construct educational occupations taking into account the features of substantial filling of the virtual educational environment and methods used by students in this environment? no In the presented scheme, features of activity of the teacher at design of modern process of training for formation of common culture and professional competences, which need to be considered in the conditions of the personal educational environments filled by students, are recorded (see fig.1). We will in addition indicate the need in introductions to ideologies of the offered technology of such concepts as "environmental" methods of training" and "methods of virtual training", which option of definition we will present as follows: "Environmental" methods of training is a set of joint actions of the teacher and trainees on the organization of an exchange of educational information and management of her perception, understanding, storing and the correct application by means of the information and communication means which are (included trained) a part of the information educational environment. Methods of virtual training are the individual-based methods of training constructed on the accounting of features of substantial filling of the personal educational environment of students, and also level of formation of the methods and their range used by students in this environment combined with the personal environment of the teacher. Conclusion In the modern information society, any subject of education has the self-adjustment personal educational environment. Possibilities of information and communication space allow personal educational environment of students to be filled automatically with substantial content with which it is possible to carry out any kinds of educational (and professional) activity throughout the entire life. Therefore training methods in a modern educational paradigm cannot be projected without features and substantial filling of personal educational environment of students. Thus we will note that if the process of design of modern educational activity will be carried out from positions of the classical didactics developing in the conditions of "scientific" [13] paradigms, at a certain stage of educational activity one inevitably will demand orientation to the personal educational circle of the student and the accounting of the specified features of VEE. 
However, implementation of the formulated requirement within a "scientific" paradigm cannot be carried out fully and correctly. So, in a "modern" educational paradigm, methods of training of students have to be focused on use of personal educational environments, and activities for their design have to consider: -substantial, methodical and technological features of the virtual educational environment; -main design stages of educational technology; -diagnostic data on the level of formation of components of common cultural and professional competences of a certain contingent of students. The offered scheme is generalized for students of pedagogical specialties of training as it is focused on certain substantial components of common cultural and professional competence (in particular, the judgments and reflections of methods of own educational activity and methods of training of the teacher and an analysis stage of opportunities of using these methods assuming, for example, a stage in future professional activity for a certain contingent trained). However it contains possibility of specification and a specification for students of other orientation of training on the basis of theoretical results of the experts, investigating questions of filling the system of training methods in the conditions of using the virtual educational environment. The following steps in the specified direction of research will be connected with creation of a technique of formation of abilities of students in using the personal educational environment for educational activity, a technique of formation of teacher'sabilities to use possibilities of personal educational environment of students for achieving educational purposes.
Temperature Measurements in the Vicinity of Human Intracranial EEG Electrodes Exposed to Body-Coil RF for MRI at 1.5T The application of intracranial electroencephalography (icEEG) recording during functional magnetic resonance imaging (icEEG-fMRI) has allowed the study of the hemodynamic correlates of epileptic activity and of the neurophysiological basis of the blood oxygen level-dependent (BOLD) signal. However, the applicability of this technique is affected by data quality issues such as signal drop out in the vicinity of the implanted electrodes. In our center we have limited the technique to a quadrature head transmit and receive RF coil following the results of a safety evaluation. The purpose of this study is to gather further safety-related evidence for performing icEEG-fMRI using a body RF-transmit coil, to allow the greater flexibility afforded by the use of modern, high-density receive arrays, and therefore parallel imaging with benefits such as reduced signal drop-out and distortion artifact. Specifically, we performed a set of empirical temperature measurements on a 1.5T Siemens Avanto MRI scanner with the body RF-transmit coil in a range of electrode and connector cable configurations. The observed RF-induced heating during a high-SAR sequence was maximum in the immediate vicinity of a depth electrode located along the scanner’s central axis (range: 0.2–2.4°C) and below 0.5°C at the other electrodes. Also for the high-SAR sequence, we observed excessive RF-related heating in connection cable configurations that deviate from our recommended setup. For the low-SAR sequence, the maximum observed temperature increase across all configurations was 0.3°C. This provides good evidence to allow simultaneous icEEG-fMRI to be performed utilizing the body transmit coil on the 1.5T Siemens Avanto MRI scanner at our center with acceptable additional risk by following a well-defined protocol. The application of intracranial electroencephalography (icEEG) recording during functional magnetic resonance imaging (icEEG-fMRI) has allowed the study of the hemodynamic correlates of epileptic activity and of the neurophysiological basis of the blood oxygen level-dependent (BOLD) signal. However, the applicability of this technique is affected by data quality issues such as signal drop out in the vicinity of the implanted electrodes. In our center we have limited the technique to a quadrature head transmit and receive RF coil following the results of a safety evaluation. The purpose of this study is to gather further safety-related evidence for performing icEEG-fMRI using a body RFtransmit coil, to allow the greater flexibility afforded by the use of modern, high-density receive arrays, and therefore parallel imaging with benefits such as reduced signal dropout and distortion artifact. Specifically, we performed a set of empirical temperature measurements on a 1.5T Siemens Avanto MRI scanner with the body RF-transmit coil in a range of electrode and connector cable configurations. The observed RFinduced heating during a high-SAR sequence was maximum in the immediate vicinity of a depth electrode located along the scanner's central axis (range: 0.2-2.4 • C) and below 0.5 • C at the other electrodes. Also for the high-SAR sequence, we observed excessive RF-related heating in connection cable configurations that deviate from our recommended setup. For the low-SAR sequence, the maximum observed temperature increase across all configurations was 0.3 • C. 
This provides good evidence to allow simultaneous icEEG-fMRI to be performed utilizing the body transmit coil on the 1.5T Siemens Avanto MRI scanner at our center with acceptable additional risk by following a well-defined protocol. However, simultaneous icEEG-fMRI is prone to signal loss around the icEEG electrodes and more particularly when using echo-planar imaging (EPI) sequences due to magnetic susceptibility effects; using gradient echo EPI we found up to 50% signal drop at around 5 mm from the electrode contacts (Carmichael et al., 2012). Currently, our icEEG-fMRI acquisitions are limited to the head transmit and receive RF coil, in accordance with the conclusions of our previous investigations on the technique's feasibility (Carmichael et al., 2008(Carmichael et al., , 2010(Carmichael et al., , 2012. The use of the body transmit coil in conjunction with the use of a head receive coil array would allow the use of parallel imaging techniques to reduce scanning time and susceptibility effects (Pruessmann et al., 1999;Larkman et al., 2001;Griswold et al., 2002;Setsompop et al., 2012). In terms of subject safety, the combination of icEEG-fMRI constitutes a particularly challenging imaging technique due to a number of health risks (in addition to the invasiveness of icEEG electrode placement), associated with the exposure of metallic implants to the three fields used in MRI, namely: static magnetic field (B 0 ), the radiofrequency (RF) field (B 1 ) and the switching gradient magnetic fields. In principle, the B 0 field can cause an implant to experience a net force (displacement) or rotational (torque), the RF field can result in heating of the tissues around the implants and the gradient fields can induce eddy currents resulting in neural stimulation (Carmichael et al., 2010;Hawsawi et al., 2017). The exhaustive safety and data quality tests (Carmichael et al., 2008(Carmichael et al., , 2010(Carmichael et al., , 2012) that preceded our implementation of icEEG-fMRI lead us to define a data acquisition protocol that limits us to use a head RF-transmit/receive coil [in addition to low SAR sequences and positioning of the electrode wires along the RF coil's central (Z) axis], with important implications for blood oxygen level-dependent (BOLD) sensitivity (Carmichael et al., 2012). In view of further developing icEEG-fMRI by modifying our protocol to allow the use of our MRI scanner's body transmit RF coil, we undertook new phantom tests to assess the conditions under which the body RF-transmit coil could be used with an acceptable level of additional risk. This article focuses on characterizing the RF-induced heating in the vicinity of icEEG electrodes exposed to RF produced by our 1.5T MRI scanner's body transmit coil with different lead configurations and whether these were connected to the recording system or not. MATERIALS AND METHODS We measured RF-induced temperature changes in the immediate vicinity of icEEG electrodes placed in a standard test phantom exposed to a body transmit coil, over a range of lead placement and termination configurations. In line with our previous work on the safety of icEEG-fMRI (Carmichael et al., 2008(Carmichael et al., , 2010 five icEEG electrodes were placed in the head part of the phantom to simulate a representative, realistic clinical scenario (see Figure 1). 
FIGURE 1 | Experimental set up. Schematic representation of the gel-filled phantom (Ph), scanner RF coils (T and R), foam insert (F) and EEG electrodes, leads and digitization and amplification system (DAS). The following icEEG electrodes were placed in the gel-filled phantom: Depth 1 (D1) orientated laterally, Depth 2 (D2) (lateral), Depth 3 (D3) (lateral), Grid (G) (para-sagittal), Strip (S) (anterior-posterior). Depending on the experimental configuration (see Figure 2), the electrodes were connected to leads (L), extension cables (Ext), and the DAS system [electrode lead input box (DAS1), battery pack (DAS2), amplifier (DAS3)]. Depending on the experimental configuration, the DAS system was placed either on top of a foam insert (F) or at the bottom of the scanner bore (no foam insert).

Phantom Preparation Following the ASTM F-2182-02a guidelines, we used a container made of acrylic with the following dimensions: head length = 290 mm, head width = 195 mm, torso length = 300 mm, torso width = 330 mm and height = 150 mm (Figure 1). It was filled with 0.70 g/L of NaCl, 8 g/L of polyacrylic acid (PAA) and 15 L of distilled water in order to simulate the electrical conductivity of human brain tissue (0.26 S/m) (Park et al., 2003). EEG Electrode, Connecting Lead and Recording System Configurations Three depth icEEG electrodes, two 8-contact (Ad-Tech model SD08R-SP10X-000; 8 platinum contacts, 10 mm spacing, 72 mm recording area and 380 mm total depth length) and one 10-contact (Ad-Tech model SD10R-SP10X-000, 10 platinum contacts, 10 mm spacing, 92 mm recording area, and 390 mm total depth length), were positioned as follows, along lateral trajectories mimicking a bilateral mesial temporal lobe implantation: Depth 1 with eight contacts was positioned on the left hand side [9.86 cm from the superior aspect of the phantom (top of the head) and 5.5 cm from the anterior surface (face; depth from the gel's surface), with the deepest contact located 12 cm from the left lateral surface]; similarly, Depth 2 with eight contacts was inserted (through a hole in the Grid electrode, see below) on the right hand side [9.28 cm from the superior aspect of the phantom (top of the head) and 5.5 cm from the anterior aspect (face; depth from the gel's surface), at a depth of 7.5 cm from the right lateral surface]; and Depth 3 with 10 contacts was placed on the left hand side, 10 mm superior to Depth 1 in the same coronal plane and at the same depth. A Grid electrode (Ad-Tech model FG64D-SS10X-0E2, 10 mm spacing, 64 platinum contacts, nichrome wire, and electrode total length of 455 mm) was placed in a para-sagittal plane in a location emulating the placement of electrodes over the left cortical region, located 2 cm away from the head's lateral aspect. A Strip electrode (Ad-Tech model TS06R-AP10X-0W6, 6 platinum contacts, 10 mm spacing, 72 mm recording area and 380 mm total depth length) was located in an axial plane in the superior part of the phantom head (2.18 cm from the top of the head). Lead extension wires (length = 90 cm), which are used to connect the electrode leads to the EEG digitization and amplification system [DAS; consisting of the electrode lead input box, battery pack and amplifier(s)] for the purpose of recording, were used in some of the heating tests. Following our routine practice for patient scanning, sand bags were placed on top of the electrode leads and cables along
In accordance with our icEEG-fMRI data acquisition protocol (Carmichael et al., 2012), an MRI scanner EEG equipment positioning foam insert manufactured by us, to be placed at the head end of the scanner bore (between the head coil and the bore opening at the scanner far end), was used in the tests to ensure the reproducible and secure placement of the electrode lead tails and extensions, and of the EEG DAS, in the scanner bore (Figure 1; Carmichael et al., 2012). The positioning foam insert consists of a hemi-cylinder (length: 79.7 cm) with a radius that matches the scanner bore's internal diameter, and has grooves and cut-outs (depth: 0.8 cm) in its (top) flat surface to enable reproducible placement of the leads and DAS along the scanner's central (Z) axis, to minimize the coupling between the EEG system and the RF E field, which by design has the smallest possible magnitude on the Z-axis within the scanning field of view (Lemieux et al., 1997). In some of the tests described below, the effect of not using the positioning foam insert on the RF-induced heating was assessed; without the foam insert in place, the leads and EEG DAS rest on the bottom of the scanner bore (therefore away from the Z-axis, closer to the body coil). Our previous work has demonstrated the effects of electrode and lead placement, and of electrical termination, on the amount of RF-induced heating in the vicinity of icEEG electrodes (Carmichael et al., 2008, 2010). Two sets of measurements were performed: Experiment 1 and Experiment 2, each corresponding to a scanning session and designed to provide an assessment of heating increases in the tissue surrounding the icEEG electrodes using different lead configurations for the body RF transmit coil, and to assess reproducibility by repeating some measurements. The configurations are labeled A(i) (i = 1, 2, 3) for Experiment 1 and B(i) (i = 1, ..., 5) for Experiment 2.

Experiment 1

In this experiment, we set out to evaluate the effects of using the body transmit RF coil on the heating in the vicinity of the icEEG (Depth 1, Depth 2, Depth 3, Strip, and Grid) electrodes located inside a water-based phantom. Previous studies (Carmichael et al., 2008, 2010; Boucousis et al., 2012; Ciumas et al., 2013) examined the effect of the body transmit coil and concluded that it can produce significant temperature increases above the safety levels, and we sought to update this information for the configurations of electrodes, connecting leads and EEG DAS specified in our successfully implemented protocol (Carmichael et al., 2012). We also sought to explore slight variations on this arrangement and to obtain temperature measurement reproducibility data. The following three electrode configurations were studied in Experiment 1 (see Figure 2): A(1): With foam insert. No lead extensions. Electrodes unterminated, with Strip, Grid, and Depth 2 lead tails bundled together (using adhesive tape) on the right side of the superior aspect of the phantom head (right bundle) and [...]. The sequence of temperature measurements, with configuration, RF exposure sequence and manipulations, in Experiment 1 is shown in Table 1.
Experiment 2

This experiment constitutes an elaboration of Experiment 1, designed to explore the heating that results from scenarios that deviate more from our protocol (therefore akin to fault conditions), in particular in relation to the placement of the leads relative to the scanner's central axis by not using the foam insert; it also provided additional (inter-session) reproducibility data. The following five electrode configurations were studied in Experiment 2 (see Figure 2) [...]. The sequence of measurements of heating with the specified icEEG lead configurations and the applied MRI sequence in Experiment 2 is shown in Table 2.

MRI System and RF Exposure Sequences

The MRI scanner used in this investigation was a 1.5T Avanto (Siemens, Germany) in the Neuroradiology department of the National Hospital for Neurology and Neurosurgery (UCLH NHS Foundation Trust), London, United Kingdom; this is the scanner used for our icEEG-fMRI experiments on human subjects (Vulliemoz et al., 2011). All RF exposure in this work was performed using the scanner's standard body RF-transmit coil.

Temperature Measurements

The temperature changes in the immediate vicinity of selected electrode contacts were monitored and recorded continuously using five fiber-optic sensors (model T1C-10-PP05 and model T1C-10-B05, Neoptix, Canada), connected to a 4-channel signal conditioner (Neoptix ReFlex, Neoptix, Canada). Based on prior experience we estimate the temperature measurement precision (standard deviation in the absence of heating) to be of the order of ±0.2 °C. The temperature sensors were placed in five locations as follows: the tips of Depth 1, Depth 2, and Depth 3; contact number 48 of the Grid electrode, which is located in the corner of the electrode; and a reference location, at a depth of approximately 3 cm in the phantom gel, 10 cm away from all the electrodes, corresponding roughly to the phantom's neck area. Because we were limited to four temperature channels simultaneously, in some tests we repeated the measurement with alternative temperature probes. In particular, in Experiment 1 (see Table 3) we did not measure the temperature at Depth 1, based on the results of our previous work (Carmichael et al., 2010), which suggested that the heating would be greatest at Depth 3. We tested this assumption in the first three measurements of Experiment 2 (see Table 4); these demonstrated that the heating was greater at Depth 1 than at Depth 2 (see Table 4), and we therefore decided to record at Depth 1 for the rest of that experiment.

Experiment 1

The maximum observed temperatures for all measurements in the presence of the foam insert, across configurations A(1)-A(3) and at every location, can be found in Table 3. Figure 4 shows a typical temperature measurement series (measurement 1.1). The maximum temperature increase overall was 0.7 °C, at Depth 2 for measurement 1.3; Depth 2 was the electrode location of greatest heating for most measurements. In accordance with our expectations, the temperature increase values measured were greater for TSE than for EPI; the maximum temperature changes for all EPI exposures were equal to, or below, 0.3 °C, which is near the threshold of detectability (0.2 °C) and comparable to the temperature increase at the reference position. Comparison of measurements 1.3 and 1.5-7 shows good reproducibility relative to phantom repositioning.

Experiment 2

Experiment 2 was performed one week after Experiment 1. The maximum observed temperature change values for all measurements are shown in Table 4.
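The maximum temperature increases quoted in these results amount, per probe, to the peak reading minus a pre-exposure baseline for that probe (our reading of the procedure). A minimal sketch of this reduction step is given below for reference; it is not the acquisition or analysis software actually used, the probe names and readings in it are hypothetical, and the ±0.2 °C figure is simply the measurement precision quoted above.

```python
import numpy as np

PRECISION_C = 0.2  # probe precision (SD in the absence of heating), as quoted in the text

def max_temperature_rise(readings_c, baseline_samples=30):
    """Peak temperature increase of one fiber-optic probe during an RF exposure.

    readings_c: 1D array of continuously recorded temperatures (degrees C).
    baseline_samples: number of initial samples recorded before the RF exposure starts.
    """
    readings_c = np.asarray(readings_c, dtype=float)
    baseline = readings_c[:baseline_samples].mean()
    rise = readings_c.max() - baseline
    # Rises smaller than the probe precision are effectively undetectable.
    return rise, rise > PRECISION_C

# Hypothetical example series, one per monitored location (names for illustration only).
probes = {
    "Depth 1 tip": np.concatenate([np.full(30, 21.0), 21.0 + np.linspace(0.0, 1.9, 300)]),
    "Reference":   np.full(330, 21.0) + np.random.normal(0.0, 0.05, 330),
}
for name, series in probes.items():
    rise, detectable = max_temperature_rise(series)
    print(f"{name}: +{rise:.1f} C ({'detectable' if detectable else 'below precision'})")
```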
Figure 5 illustrates temperature changes for measurement 2.1. For the high-SAR sequence, the highest temperature increase recorded was 4.5 °C, at electrode Depth 1 (measurement 2.8). For the EPI sequence (measurement 2.6), the maximum temperature increase was negligible. Measurements 2.1-2.4 are two sets of repeat measurements with the foam insert and lead extensions [configurations B(1) and B(2), with extensions laid along, and away from, the Z-axis, respectively]; these resulted in maximum temperature increases in the range 0.2-2.4 °C for the depth electrodes and 0.5-1 °C for the Grid. For the remaining measurements, due to the limited number of channels of our temperature signal conditioning unit and our wish to sample temperatures simultaneously (Depth 1 was not sampled in Experiment 1), we used the temperature probe at Depth 1 instead of Depth 2, because of the higher temperatures observed at the former in measurements 2.2 and 2.3. After removing the lead extensions and foam insert (measurements 2.5 and 2.6), the maximum temperature increase dropped to 0.4 °C across all electrodes (for the high-SAR sequence; negligible for EPI). Reconnecting the electrodes to the lead extensions (without foam insert; measurement 2.7) resulted in greater maximum temperature increases (1.9 °C); connection to the digitization and amplification system (without foam insert; measurement 2.8) resulted in the maximum temperature increase of 4.5 °C, at electrode Depth 1, the location of the greatest temperature increases in all measurements except 2.5.

DISCUSSION

We performed experiments to quantify the amount of heating induced in the immediate vicinity of a set of intracranial EEG electrodes by exposure to RF generated by a body transmit coil in a 1.5T MRI scanner. This work builds directly on our experience of acquiring concurrent icEEG-fMRI data using a quadrature head RF transmit coil in the MRI scanner (Vulliemoz et al., 2011; Chaudhary et al., 2016; Murta et al., 2016, 2017; Ridley et al., 2017; Sharma et al., 2019), and in particular the safety tests that made it possible (Carmichael et al., 2008) and the associated scanning protocol (Carmichael et al., 2012). This protocol contains prescriptions on the choice of RF transmit coil, MR sequence, and the type, connection and positioning of the EEG wires and equipment, and relies to a large degree on the use of a scanner bore foam insert on which the EEG system can be placed precisely and consistently. In this work our objective was to determine whether, based on the same protocol, the use of the body RF transmit coil instead of the head-only transmit coil in the same MRI scanner would result in excessive heating. The electrodes were positioned inside a water-based gel phantom in a configuration that emulates a clinical scenario, in line with our previous tests (Carmichael et al., 2008), and subjected to trains of RF excitation pulses (low- and high-SAR sequences) through the body RF transmit coil. We explored a range of electrode lead configurations: length, placement relative to the scanner's central axis, and termination; each a deviation from our previously defined protocol (Carmichael et al., 2012). Current international guidelines recommend that MRI-induced heating should not cause the temperature in the head to exceed 38 °C, suggesting an allowable increase of ≤1 °C (IEC, 2016).
In summary, in this work, for the low-SAR (EPI) RF exposures prescribed in our protocol, the maximum observed temperature increase was 0.3 °C across all tested configurations. This provides further evidence for the suitability of our established icEEG-fMRI protocol by extending its applicability to our 1.5T MRI scanner's body RF-transmit coil. We assessed reproducibility by performing a number of repeated measurements within each experiment, for the same configuration, either by simply repeating the RF exposure (considering the high-SAR measurements only: measurements 1.1 and 1.3, 1.5 and 1.6, 2.1 and 2.2, 2.3 and 2.4) or by repeating the RF exposure after moving the phantom assembly in and out of the scanner bore (measurement pair 1.6 and 1.7). Furthermore, taken together, measurements 1.5, 1.6, 1.7, 2.1, and 2.2 constitute repeated measurements for the same (intended) configurations [A(2) and B(1)] between scanning sessions (Experiments 1 and 2, which took place one week apart). The results of these comparisons (mean and standard deviation of the inter-measurement difference across all locations: 0.0 and 0.2 °C, respectively) give an indication of the good reproducibility of our measurements (Bland and Altman, 1995), which, combined with the reference temperature measurements, suggests a detection threshold of the order of 0.5 °C. To our knowledge there has been a single previous investigation of the safety of using a body transmit coil for icEEG during fMRI at 1.5T: Ciumas et al. (2013) performed temperature assessments in a water-gel phantom and rabbit cadavers using an EPI sequence for depth electrodes in two orientations, axial and lateral, showing temperature increases in the range 0.2-1.3 °C. Previously, we investigated RF-induced heating for a body transmit coil at 3T for a set of electrodes placed similarly to the setup used in this work. For high-SAR exposures, maximum temperature increases of 6.4 °C at the grid electrode and 0.7 °C at the depth electrodes were observed when the electrode leads and extensions were separated ("open circuit" configuration), while the temperature increases were lower when the leads and extensions were gathered together in a bundle ("short circuit") (Carmichael et al., 2008). In addition, when placing the leads and extensions close to the scanner bore, the maximum heating was found to be 2.9 °C at the grid and 6.9 °C at the depth electrodes (Carmichael et al., 2012). Boucousis et al. (2012), working at 3T and using the body RF-transmit coil, observed a maximum temperature increase of 4.9 °C when applying high-SAR and 0.5 °C for low-SAR (fMRI) sequences. Both studies concluded that high-SAR sequences should be avoided when performing icEEG-fMRI; however, Boucousis et al. (2012) concluded that low-SAR sequences with the body transmit coil do not pose an unacceptable risk to patients. The purpose of performing heating tests with high-SAR RF exposures, even for the evaluation of scanning protocols that preclude them, is manifold: (1) to assess the risks associated with worst-case scenarios (operator error during application of the protocol); (2) to ensure that conclusions reached based on low-SAR tests do not simply reflect temperature measurement sensitivity limitations; and (3) to reflect the requirements specified in the standard guidelines (ASTM F-2182-02a; ASTM, 2011).
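The reproducibility figures above (mean inter-measurement difference of 0.0 °C with a standard deviation of 0.2 °C) follow from a simple paired-difference calculation in the spirit of Bland and Altman (1995). The sketch below illustrates that calculation on made-up numbers; the values and the choice of four probe locations are illustrative assumptions, not our recorded data.

```python
import numpy as np

def repeatability(first_run, second_run):
    """Mean and SD of the differences between two repeated sets of maximum
    temperature increases (one value per probe location)."""
    diffs = np.asarray(first_run, dtype=float) - np.asarray(second_run, dtype=float)
    return diffs.mean(), diffs.std(ddof=1)

# Illustrative values only: maximum temperature rise (degrees C) at four probe
# locations, for two nominally identical measurements of the same configuration.
run_a = [0.5, 0.7, 0.3, 0.1]
run_b = [0.6, 0.6, 0.3, 0.2]

mean_diff, sd_diff = repeatability(run_a, run_b)
print(f"mean difference = {mean_diff:+.2f} C, SD = {sd_diff:.2f} C")
# A practical detection threshold can then be taken as a few times this SD
# combined with the probe precision, i.e. of the order of 0.5 C here.
```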
In this study, for the high-SAR (TSE) exposures, the maximum observed temperature increase was 4.5 °C, for a configuration in which the wires and lead extensions were far from the scanner's central axis (lying at the bottom of the scanner bore: no foam insert). This compares with a maximum temperature increase of 2.4 °C across all configurations with the wires lying along the scanner's Z-axis on top of our foam insert. We also note that this is much greater than the maximum increase of 0.7 °C for the two configurations without lead extensions [A(1) and B(3)], thereby further confirming the important effect of lead length (Carmichael et al., 2010). Concerning the impact of circuit termination, which in our experiments tended to be associated with greater heating (measurements 1.8 vs 1.7 and 2.8 vs 2.7), this may result from the terminated circuit forming a conductive loop, as opposed to capacitive effects between wires in close proximity. Furthermore, "avoiding loops" is the usual guidance when placing electrophysiological leads in the MR environment (Lemieux et al., 1997; Kainz et al., 2002; Balasubramanian et al., 2017). Importantly, the lead extensions and connection to the EEG DAS are necessary for the application of icEEG-fMRI. Therefore, while they can affect the risk level adversely, in particular the use of lead extensions, our aim was to demonstrate that the amount of heating created in the "with extensions and connected" condition was acceptable, and under what conditions relative to other factors (positioning, sequence, and coil types). The generalization of the conclusions that can be reached from our measurements is limited by numerous factors, including: the representativeness (and quality of fabrication) of the phantom and of the electrode configuration, the specific characteristics of the MRI scanner, and the temperature measurement capability (spatial sampling, limited by the number of available temperature probes) and measurement error. While some of these, in particular the variety of possible electrode implantations used in clinical practice, may be particularly challenging, we believe that this study is in line with previously published empirical work and furthermore reflects the ASTM standard level of evidence. Similarly, in relation to spatial sampling of the temperature changes, our use of four temperature probes is also in line with many other recent studies (Boucousis et al., 2012; Ciumas et al., 2013; Jorge et al., 2015; Balasubramanian et al., 2017). We tried to mitigate this limitation using prior knowledge; for example, while in retrospect it might have been preferable to record the temperature at Depth 1 instead of Depth 2 in Experiment 1, we do not believe that this significantly alters our conclusions, because Experiment 2 was specifically conceived as a series of worst-case scenarios (in contrast to Experiment 1, which is effectively a feasibility test for the body transmit coil based on our recommended configurations; Carmichael et al., 2012).
Therefore, the guidance that can be provided based on our results can be summarized as follows: icEEG-fMRI is feasible with acceptable risk on a 1.5T MRI scanner (TIM Avanto, Siemens, Erlangen, Germany) using the standard body RF transmit coil if the following restrictions are applied: the EEG leads are brought together as close as possible to the top of the head and placed exactly along the scanner's central axis, toward the back (head end) of the scanner, and connected to the EEG input box, itself placed on the scanner axis and connected to the EEG amplification units, also placed as close as possible to the scanner axis (this positioning can be facilitated by the use of a scanner foam insert; see Carmichael et al., 2012); and all scanning is restricted to low-SAR gradient echo sequences.

CONCLUSION

In summary, this study provides good evidence for the feasibility of simultaneous icEEG-fMRI utilizing the body transmit coil on the 1.5T Siemens Avanto MRI scanner at our center. Careful consideration of the positioning of the electrode leads and EEG amplification system, and of the choice of sequences, is crucial, and should follow our established protocol (Carmichael et al., 2012).

DATA AVAILABILITY STATEMENT

The datasets generated for this study are available on request to the corresponding author.
2020-05-12T13:07:42.019Z
2020-05-12T00:00:00.000
{ "year": 2020, "sha1": "e0b2a37cda69f931012da234a368587e07958a89", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2020.00429/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e0b2a37cda69f931012da234a368587e07958a89", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
36974391
pes2o/s2orc
v3-fos-license
Assessing Learning Outcomes in Middle-Division Classical Mechanics: The Colorado Classical Mechanics/Math Methods Instrument Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level Classical Mechanics and Math Methods course (CM 1) at CU Boulder, we have developed a tool to assess student learning of CM 1 concepts in the upper-division. The Colorado Classical Mechanics/Math Methods Instrument (CCMI) builds on faculty consensus learning goals and systematic observations of student difficulties. The result is a 9-question open-ended post-test that probes student learning in the first half of a two-semester classical mechanics / math methods sequence. In this paper, we describe the design and development of this instrument, its validation, and measurements made in classes at CU Boulder and elsewhere. I. INTRODUCTION In recent years, the physics education research (PER) community has placed a strong emphasis on improving student learning in upper-division courses for physics majors. [1][2][3][4] Many research studies have shown the wide variety in students' understanding of particular physics concepts and practices during and after instruction. [5][6][7][8][9][10][11][12][13] Systematic efforts to assess student understanding on a broader scale have been useful in facilitating this effort. 14 These systematic assessments of student understanding at the upper-division highlight common and persistent student difficulties that can both inform curricular and pedagogical innovations and help form the basis for research efforts. Furthermore, these measures of student performance provide an indicator of the effectiveness of different pedagogies and curricula and can be used by instructors and departments to improve course offerings over time. In fact, over the last 40 years, the awareness created by assessments of student learning using conceptual inventories has helped to drive widespread transformation of introductory lecture courses. [15][16][17] The use of these conceptual inventories has also helped the physics community identify persistent difficulties and provided the means to compare learning outcomes between different pedagogical and curricular reforms as well as across many institutions and implementations. [18][19][20][21][22][23][24][25] Over the last decade, the Department of Physics at the University of Colorado Boulder (CU) has worked to transform their upper-division lecture courses to more student-centric instruction. 26,27 This transformation process has involved the development of facultyconsensus learning goals, 28 the development of instructional materials, 29 and research to identify student difficulties, 5,11,12,30 which has informed refinements to both the aforementioned learning goals and instructional materials. In recent years, upper-level assessments in the areas of quantum mechanics 31 and E&M 32,33 have been developed to, in part, understand the impact of these transformations on student understanding. In this paper, we present the Colorado Classical Mechanics/Math Methods Instrument (CCMI) that is both grounded in the history of this work and opens a new space for upper-level physics assessments -middledivision Classical Mechanics and Mathematical Methods (CM 1). The CCMI (Sec. 
II) consists mostly of open-ended questions that probe students' use of the sophisticated skills and practices outlined in faculty-consensus learning goals. In Sec. III, we present the development of the CCMI, including the design of the questions and the measures that provide evidence of validity. We discuss the design and structure of the grading rubric as well as measures of reliability in Sec. IV. In Sec. V, we present statistical results from its implementation at CU and other institutions through the lens of classical test theory. Finally, in Sec. VI, we discuss implementation, measurement, and possible uses.

II. THE COLORADO CLASSICAL MECHANICS/MATH METHODS INSTRUMENT

The Colorado Classical Mechanics/Math Methods Instrument (CCMI) is a 9-question (with a total of 22 subparts) open-ended test that focuses on topics taught in the first half of a two-semester classical mechanics sequence. This first course concludes before a discussion of the calculus of variations; hence, the Lagrangian and Hamiltonian formulations of mechanics are absent from the test. The CCMI focuses on core skills and commonly encountered problems. Students solve a variety of problems such as: determining the general solution to common differential equations (e.g., ẍ = −A²x); finding equilibria and sketching net forces on a potential energy contour map; and decomposing vectors in Cartesian and plane-polar coordinates. We have designed the CCMI to be given in a standard 50-minute lecture period. To accompany the longer post-test, we have developed a short (15-20 minute) pre-test that contains a subset of three problems taken from the post-test. Figure 1 shows a sample CCMI question that appears on both the pre- and the post-test.

FIG. 1. Sample CCMI question (Q9). Learning goals evaluated: Students should be able to (i) choose appropriate area and volume elements to integrate over a given shape, and (ii) translate the physical situation into an appropriate integral to calculate the gravitational force at a particular point away from some simple mass distribution. Q9: Consider an infinitely thin cylindrical shell with non-uniform mass per unit area of σ(φ, z). The shell has height h and radius a, and is not enclosed at the top or bottom. (a) What is the area, dA, of the small dark gray patch of the shell which has height dz and subtends an angle dφ as shown to the right? (b) Write down (BUT DO NOT EVALUATE) an integral that would give you the MASS of the entire shell. Include the limits of integration. The sample question appears on the CCMI pre- and post-tests; vector calculus is a prerequisite for CM 1. This question constitutes 9% of the total post-test score.

III. DEVELOPMENT AND CONTENT VALIDATION

The development of the CCMI followed the process established by Chasteen et al., 32 which was recently reviewed by Wilcox et al. in their paper describing the uses and development of upper-level physics assessments. 14 Broadly speaking, the process involves establishing and prioritizing assessable learning goals, crafting questions that are tested with students using think-aloud interviews, 35 and validating questions based on student and faculty input.

A. Development History

At CU, CM 1 is a blended classical mechanics and mathematical methods course that forms the first half of a two-semester sequence in classical mechanics. In recent years, this course was transformed from lecture-based instruction to more active and student-centric instruction. 27 The early part of this transformation involved the development of consensus learning goals by a group of faculty.
A series of faculty meetings were held to develop consensus course-scale learning goals and to articulate the topical content coverage of the course. 28 Overall 19 faculty (4 PER, 15 non-PER) participated in at least one of the 7 meetings, with an average of 9 faculty at each meeting. 28 Course-scale learning goals focus on how the student develops over the whole semester. For example, students in CM 1 are consistently exposed to the connection between math and physics. Relevant course-scale learning goals for this area include: "Students should be able to translate a physical description of a sophomore-level classical mechanics problem to a mathematical equation necessary to solve it. Students should be able to explain the physical meaning of the formal and/or mathematical formulation of and/or solution to a sophomore-level physics problem. Students should be able to achieve physical insight through the mathematics of a problem." After the development of course-scale learning goals, a set of specific, topic-scale learning goals were drafted. To develop these topic-scale learning goals, we utilized field notes collected during lectures, weekly homework help sessions, and faculty meetings. A further set of faculty meetings were held in which the topic-scale learning goals were agreed upon. In these meetings, several topic-scale learning goals were selected to be assessed on the CCMI as articulated in the learning goals. 28 These topic-scale learning goals combined content coverage that faculty had defined and the mathematical and problem-solving skills characteristic of upper-division coursework. For example, "Students should be able to use Newton's laws to translate a given physical situation into a differential equation" and "Students should be able to project a given vector into components in multiple coordinate systems, and determine which coordinate system is most appropriate for a given problem." These course-scale and topical-scale learning goals are available online. 34 These topic-scale (measurable) learning goals formed the basis for the development of the CCMI. While these learning goals were developed by CU faculty, and are spe-cific to CM 1, we believe that many are applicable to the mathematical methods and classical mechanics courses offered at other universities because (1) the goals were developed by traditional, not PER, physics faculty, and (2) the topical coverage was drawn from the first five chapters of a standard classical mechanics textbook. 36 Moreover, faculty from five other institutions have given the CCMI in their courses and were interviewed to obtain feedback on the learning goals assessed by the CCMI as well as the CCMI itself. These interviews led to changes in coverage and scoring of the CCMI. As the topic-scale learning goals were developed, CU faculty discussed which ones were most fundamental to student learning, that is, which goals (when met by students) would be taken as evidence of learning in CM 1, which goals formed the basis for future learning (e.g., in future physics coursework), and, thus, which goals should be assessed on a standardized instrument. When a topic-scale goal was deemed by faculty to be assessmentworthy, a draft assessment item was written by the postdoctoral researcher facilitating these conversations with input from faculty. Sixteen open-ended questions were initially written. Some of these questions were adapted from exam or clicker questions written by CU faculty in previous semesters. 
All questions were informed by observed student difficulties. 29 The early versions of these questions were entirely open-ended and were developed to draw out student ideas about the particular concepts and skills that would be assessed on the final instrument. The earliest version of the CCMI contained 16 questions -more than could be answered in a single 50minute class period. Thus, the CCMI was split into two 11-question versions with some number of overlapping questions; each version was given to half the students in the class. One benefit of developing this instrument at a large, research-intensive university is a large population of students taking CM 1 -in some semesters, more than 100 students have been enrolled in CM 1 at CU. Through a number of administrations of early versions of the questions, feedback from faculty and students, as well as timed testing, the CCMI was trimmed to an 11question, open-ended assessment that could be administered in a 50-minute period. Following this internal development period, the CCMI was offered in a "beta" version to faculty teaching courses like CM 1 at other institutions. Administration of the CCMI at these other institutions provided additional feedback on the content coverage and scoring of the CCMI. Interviews were conducted with these "beta" testers to learn more about their courses, their needs, and their view of the CCMI. These interviews were prompted by concerns about certain questions from the Colorado Upper-division Electrostatics Assessment from colleagues using it at other institutions. 37 Prior to these interviews, faculty were given a copy of the CCMI and the accompanying learning goals (e.g., Fig. 1) to review. The "CM 1" courses that our interviewees taught ranged from quite similar to CM 1 (e.g., a 2 semester sequence classical mechanics) to quite compressed compared to CM 1 (e.g., a 1 semester course on classical mechanics that surveys all common topics including Lagrangian Hamiltonian dynamics and the orbit equation). While there was a substantial diversity among the topical coverage among the courses taught by these faculty, most agreed that 9 of the 11 questions were covered well enough in their courses to be included as part of the assessment. However, for most faculty, 2 questions, which deal with Fourier Series and Laplace's equation, were covered superficially or not at all in their courses. As a result, the CCMI consists of 11 questions -9 core questions that count towards the overall score that can be compared across institutions, and 2 optional questions that may be used at institutions where such topics are taught. B. Content Validation of the CCMI In designing the CCMI, we took the approach that an assessment of student learning should address the topics that traditional physics faculty value. This serves to validate the instrument in the sense that the questions being asked of students cover the topics in the way that faculty desire. Further, this process serves to generate buy-in to use the instrument. Secondly, the instrument needs to be interpretable by students, that is, students need to be able to interpret each question consistently and in the ways that instructors expect. Below, we detail how we established the validity of the CCMI through discussions with faculty and think-aloud interviews with students. a. Expert Validation As the basis for the questions were the expert-developed learning goals, 28 the instrument was grounded in the topics deemed essential by faculty. 
Draft questions were developed from these learning goals; some were inspired by existing course materials (clicker questions, exam questions, etc.) and others were crafted from scratch. Once a complete set of questions was drafted, faculty at CU and elsewhere were consulted individually to obtain their feedback on the instrument. The CCMI was sent to faculty before meeting with the postdoctoral researcher for a semi-structured interview. The instrument and subsequent questions were positioned to the interviewed faculty in the following way: "Does this question ask about the kinds of things you want students in your CM 1 class to learn?" "If a student in your CM 1 class correctly solved this question, would you say that student demonstrated an understanding of this topic? Why?" "If a student performed well on this instrument, would you expect them to have performed well in your CM 1 class? Why?" As faculty spoke on these different topics, follow-up questions were asked to elucidate the meaning behind faculty's answers. In all, nine faculty (4 at CU and 5 elsewhere; all non-PER faculty) were interviewed for between 50-90 minutes. Individual faculty input was often aligned with each other, likely because these interviews took place following the discussion of learning goals. But, there were conflicting comments at times. For example, most faculty interviewed agreed that the instrument should focus on conceptual aspects of CM 1 while one or two faculty desired students to perform calculations on certain questions (e.g., Taylor series) because they believed that to be the only way to judge student learning on those particular topics. Where there was disagreement between interviewed faculty, we sided with the majority. Hence, the CCMI focuses on more conceptual aspects of CM 1. Faculty input was critical to deciding which questions to prune from the 16-question version of the CCMI. Discussion with faculty lead to ranking questions by "most important for students to understand after completing CM 1." b. Student Validation Questions on the CCMI were further shaped by conducting think-aloud interviews with students while they solved the CCMI. The interviews served two purposes: (1) to ensure that the wording of the questions was clear for students (i.e., that students would interpret questions as asked), and (2) to collect student reasoning for correct and incorrect answers in order to help shape the grading rubric, which had not yet been fully designed. Eight CU students who had recently completed CM 1 earning grades ranging from A to C were interviewed (in two cohorts) for 60-90 minutes as they solved the CCMI. Following a think-aloud protocol, 38 students narrated their thoughts while solving each question. The interviewer took notes identifying how each student read each question, what reasoning was brought to bear on each question, and where there were points of confusion or issues of clarity. If at any time, the student struggled to answer a question, the interviewer suggested they make their best attempt given what they understand. Following a student's completion of the CCMI, the interviewer followed-up question by question with the student about their reading of the questions and their reasoning through their answer. The interviewer also discussed the correct solution to each question with most students as they were often interested in how well they performed. 
These interviews and notes were analyzed for salient themes that addressed issues of clarity and student reasoning after the first cohort of students completed the interviews. The most prevalent issues were addressed by the first round of editing by the development team. For example, no interviewed student in the first cohort knew how to answer the Taylor series question. Discussion with the interviewees indicated a mismatch between our intent (i.e., explaining the importance of the small parameter in the expansion) and their experience (i.e., not ever being asked to think explicitly about the small parameter). Questions were redrafted before conducting interviews with the next cohort of students. In this second set of interviews, the majority of questions elicited the expected responses and underlying reasoning. Those questions that still had some issues were positioned to the students as, "In this question, we are trying to get you to work with this idea (e.g., Taylor series) in this way (e.g., identifying the small parameter in the expansion); how would you know to do that?" Students' responses to questions of this kind provided the final edits to the previously problematic questions.

IV. GRADING THE CCMI

Scoring student responses to an assessment reliably undergirds the value of the assessment to students and faculty. The rubric for the CCMI was informed by the lessons learned from our development of other upper-level assessments 14 as well as experienced and anticipated challenges for faculty users.

A. Rubric Rationale

With the validity of the CCMI established, we turned to scoring student responses to provide an indication of student performance. It is important for independent assessments of student learning, such as the CCMI, that independent graders achieve consistent results. Therefore, the scoring rubric needs to capture the variety of student responses and indicate how each response is scored. There are a number of possible approaches to supporting graders in this work. For example, in the electrostatics context, the Colorado Upper-Division Electrostatics Diagnostic (CUE) took the approach of training graders to attend to both students' final answers and the nuances of student responses. 32 As such, graders were not only providing a consistent score for student work, but also attending to the details of student difficulties. The training was not intended to be overly prohibitive (∼8 hours), but there was not much interest outside the PER community in learning to grade the CUE. Thus, researchers at CU have continued to provide a grading service to the physics community. In order to facilitate grading and promote wider use of the CUE, Wilcox et al. developed a multiple-choice version of the CUE that can be delivered online. 33 This work leveraged the large body of CUE responses collected over the years to develop an updated set of questions and a logical grading model that has proven quite successful, reproducing similar results to the original CUE. While the CCMI has recently had significant interest from faculty at a number of institutions, the initial work to develop the rubric could not leverage a large body of responses. Thus, we decided to separate the two roles of the assessment into two rubrics: (1) a grading rubric that allows for scoring student work from a "mastery" perspective 39,40 and (2) a difficulties rubric that helps to uncover the prevalence of student difficulties in CM 1. 41

FIG. 2. Question 9 (Writing an Integral) - Total points: 3. The rubric used to grade the question shown in Fig. 1; the format focuses the grader's attention on the final response provided by the student. The grading rubric was not designed to elucidate details of student difficulties, but rather to capture the common final responses provided by students and score them accordingly.
The grading rubric is intended to allow faculty with no training to grade their students' responses consistently and to have confidence that their scoring is meaningful. The difficulties rubric that we are developing is intended for researchers (or faculty) who intend to dig deeper into student reasoning, and requires some amount of training. In this paper, we discuss only the grading rubric. The approach to grading the CCMI that we have used focuses on the student's final answer, and points are taken away for errors in that answer. Graders need only attend to one part of a student's answer and can score based on the more salient features of the student's final response. This grading approach is taken by both the CCMI and the Colorado UppeR-division ElectrodyNamics Test (CURrENT). 40 The development of the grading rubric was grounded in patterns in students' responses to CCMI questions, which formed the basis for categories in the grading rubric. 12,41 The grading rubric describes how points are deducted for different errors, providing examples where necessary (it does not list all the possibilities). The illustrative errors are those commonly seen in students' answers. The allocation of points on each question, and the partial credit awarded for some responses, are based on faculty "rankings" of the relative importance of the learning goal each question assesses and the relative importance of the errors. The rubric used to grade the question shown in Fig. 1 appears as Fig. 2. Large-scale (N > 500) use of the rubric on students' responses at CU and other institutions resulted in changes to the wording of the rubric and the addition of new examples. While a different design for scoring student work might be used, in our design we considered asking traditional faculty to grade the assessment and how we might achieve consistent results across untrained graders. Our grading procedure does produce consistent results.

Inter-grader Reliability

Through a series of analyses, we established the reliability of our grading rubric. Our work follows the analysis conducted by Chasteen et al. to establish a reliable grading rubric for the CUE, 32 but also makes use of an untrained grader who was asked to use the completed rubric to score student responses. The two graders (one untrained) scored responses from 100 students to all 11 questions on the CCMI. The scores assigned to individual responses were compared, as well as the overall score for a given student's CCMI. The resulting analysis demonstrated that an untrained grader can score students' responses to the CCMI reliably using the grading rubric. First, the average overall difference in CCMI scores assigned to students between a trained and untrained grader is less than 5% (3.5% ± 2.7%) of the total points.

FIG. 3. [Color online] The absolute difference in CCMI scores assigned by a trained and an untrained grader. The average difference in total CCMI score between the trained and untrained grader is 3.5% ± 2.7%. The graders agreed to within 5% on overall score for 79% of the exams.
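The per-student comparison behind these figures is straightforward to reproduce. The sketch below shows one way to compute the mean absolute difference in total score and the fraction of exams on which two graders agree to within 5%; the score arrays and the total-points value in it are hypothetical placeholders rather than the actual set of 100 double-scored CCMIs.

```python
import numpy as np

TOTAL_POINTS = 44  # illustrative only; the CCMI's actual point total is set by the rubric

def grader_agreement(trained, untrained, tolerance=0.05):
    """Compare two graders' total scores, expressed as fractions of the maximum.

    Returns the mean and SD of the absolute score difference and the fraction
    of students for whom the two graders agree to within `tolerance`.
    """
    trained = np.asarray(trained, dtype=float) / TOTAL_POINTS
    untrained = np.asarray(untrained, dtype=float) / TOTAL_POINTS
    diffs = np.abs(trained - untrained)
    return diffs.mean(), diffs.std(ddof=1), np.mean(diffs <= tolerance)

# Hypothetical scores for a handful of students (points out of TOTAL_POINTS).
trained_scores = [30, 22, 41, 18, 35]
untrained_scores = [31, 22, 39, 19, 35]

mean_d, sd_d, frac_within = grader_agreement(trained_scores, untrained_scores)
print(f"mean |difference| = {mean_d:.1%} +/- {sd_d:.1%}; within 5%: {frac_within:.0%}")
```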
Figure 3 demonstrates that the graders agreed on a total score within 10% for all but two students, and for the majority of students (79%) the graders disagreed by less than 5%. While this difference in total score is an intuitive measure of agreement, a more rigorous test of agreement is one that accounts for graders agreeing by chance. It is typical in assessment work to use Cohen's Kappa to characterize the agreement between two (or more) graders. 42,43 However, there are concerns with using Cohen's Kappa where partial credit is awarded, where the scales between items differ, and where the total number of possible scores is high. Moreover, Cohen's Kappa is a relatively conservative measure of agreement. 44 These issues associated with using Cohen's Kappa to determine reliability on an assessment like the CCMI appear in Chasteen et al. 32 As expected (and previously observed), agreement across all possible point distributions is low (κ = 0.23). It is unlikely for each grader to agree on the exact overall points awarded to each student (Fig. 3), but it is fairly likely for graders to agree within a few points. Like the CUE, Cohen's Kappa calculated for scores binned into two-point intervals (∼5%) provides evidence of moderate agreement (κ = 0.47). When binned into four-point intervals (∼10%), we obtain evidence of substantial agreement (κ = 0.64). Hence, within differences of 5%, we find that our trained and untrained graders agree well. While this overall agreement is reasonable, it may be that specific questions contribute to these differences more than others. That is, it might be that some combination of a specific question and the rubric describing how to score that question is unreliable. By determining Cohen's Kappa for each question on the CCMI (see Table I), we find some evidence that questions 1 (Common differential equations) and 6 (Vector decomposition) might be contributing to these overall discrepancies. This message is further bolstered by the evidence provided in Fig. 4, where we show the percent agreement between a trained and untrained grader on each question. Here, we define "exact" to be the same score for the student's response, while "close", "moderate", and "poor" represent agreement to within ±20%, ±20%-50%, and ±50-100%, respectively.

FIG. 4. For each question on the CCMI, the score differences between a trained and untrained grader are shown. The percentage agreement between graders was binned as exact (same score, blue), close (within ±20%, orange), moderate (within ±20%-50%, purple), and poor (more than ±50%, gray) agreement. For all questions, exact agreement was the most prevalent form of agreement (≥50% of exams for all questions).

These analyses provide evidence of a robust and reliable grading rubric, but we acknowledge that, due to our design, there is some information lost, particularly if the CCMI rubric is compared to the CUE rubric. Due to the focus on final answers, information about student difficulties that would be captured in a more detailed rubric is lost. We are developing a separate difficulties rubric to address this issue. 41 However, what is gained (speed, accuracy, and adoption) by this approach to grading should not be understated.

V. STATISTICAL VALIDATION OF THE CCMI

To establish an assessment as a valid and reliable instrument, further analysis of specific properties of the test must be conducted. Recently, this kind of work has shifted towards using response modeling techniques such as Item Response Theory (IRT). 45,46 While IRT is quite robust and used widely, the body of data needed to use it reliably is more than we have been able to collect. Over the last several years, we have collected data from five CM 1 courses at CU (N = 244) and from eleven similar courses at nine other institutions (N = 218). There are simply not as many users or students taking upper-level assessments of this type.
Hence, we make use of Classical Test Theory, 47 following the analysis conducted by Chasteen et al. 32 and Wilcox et al. 33

a. Internal Consistency An assessment of student learning should be internally consistent. If the assessment aligns with the goals of instruction, students who perform well on a single question should perform well on other questions. Essentially, each question should provide consistent information about a student's performance (on average). It is typical to use Cronbach's alpha to investigate internal consistency, estimating the reliability of scores or the "unidimensionality" of the assessment. We determined Cronbach's alpha treating each part of a question as an item because the total number of test items on the CCMI is small. We find that the CCMI is a highly internally reliable assessment (α = 0.83). The acceptable range for α is above 0.7, with greater than 0.8 being "good." 48

b. Criterion Validity If aligned well with the learning goals for a course, we expect that an independent assessment of student learning (i.e., the CCMI) should correlate with other assessments of student learning (e.g., final exams). Students' exams are the most similar measure to the CCMI. Like exams, the CCMI is completed individually in timed and controlled environments. But, unlike exams, it does not affect students' grades. Each class at CU took three exams: two regular hour exams and a final. The averages of those three exams were normalized [using a z-score, z = (x − x̄)/σ] to allow comparisons across different instructors. CCMI post-test scores were strongly correlated with these z-scored exam averages (r = 0.71); a linear model can thus account for 50% of the variance in exam scores associated with CCMI scores. Similarly high correlations were observed on the CUE. 32

c. Item-test correlation We expect the performance on individual items to correlate well with the overall score on the instrument (minus the item being tested). This correlation is expected from the premise that the whole assessment is a measure of a large construct (knowledge of CM 1 concepts) and that this construct has underlying features (e.g., Taylor series) that will be more or less learned in similar amounts. We use Pearson's r (linear correlation) to determine how well each item connects to the rest of the CCMI (see Table I for values of r for each item). We find that all items are above the established threshold (r ∼ 0.2) for item-test correlation. However, we note that Question 2 (Taylor series) correlates much less with the whole instrument than the rest of the items do.

d. Discrimination An assessment of student learning should be able to separate students who demonstrated low understanding from those who demonstrated high understanding. Ferguson's delta is the typical measure of discrimination used for assessments of this type; it provides a measure of how broadly the scores are distributed across the possible scores. In calculating Ferguson's delta, we used the total number of points on the assessment rather than the number of items, as each question is worth a different number of points.
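For readers who wish to run the same classical-test-theory measures on their own students' responses, the sketch below computes Cronbach's alpha, item-test correlations (with the item in question removed from the total), and Ferguson's delta on a per-point basis. It operates on a generic students × items score matrix with made-up values; it is a minimal illustration of the formulas, not the analysis code used to produce the numbers reported here.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha; scores is an (n_students, n_items) array of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def item_test_correlations(scores):
    """Pearson r of each item with the total score excluding that item."""
    scores = np.asarray(scores, dtype=float)
    total = scores.sum(axis=1)
    return np.array([np.corrcoef(scores[:, i], total - scores[:, i])[0, 1]
                     for i in range(scores.shape[1])])

def ferguson_delta(total_scores, max_points):
    """Ferguson's delta computed over the possible point totals 0..max_points."""
    total_scores = np.asarray(np.rint(total_scores), dtype=int)
    n = len(total_scores)
    freqs = np.bincount(total_scores, minlength=max_points + 1)
    return (n**2 - np.sum(freqs**2)) * (max_points + 1) / (n**2 * max_points)

# Hypothetical data: 6 students, 4 items (replace with real scored responses).
scores = np.array([[3, 2, 1, 4], [1, 0, 0, 2], [2, 2, 1, 3],
                   [3, 3, 2, 4], [0, 1, 0, 1], [2, 1, 1, 3]])
print("alpha =", round(cronbach_alpha(scores), 2))
print("item-test r =", np.round(item_test_correlations(scores), 2))
print("delta =", round(ferguson_delta(scores.sum(axis=1), max_points=12), 2))
```

Dedicated psychometrics packages implement the same quantities; the explicit formulas are kept here only to match the definitions used in the text.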
We find that the CCMI has excellent discrimination on a per-point basis (δ = 0.99). A test with δ > 0.9 is considered to have good discrimination. 49

e. Additional analyses of discrimination While Ferguson's delta is a typical measure, it might not be an intuitive measure of discrimination. In Fig. 5, we provide the histogram of student performance on the CCMI, which shows the mean score to be 49.0% ± 1.0% (N = 462). Indeed, the CCMI is a difficult assessment. First-year graduate students at CU earned an average score of 74.5% ± 3.4% (N = 5). In Fig. 6, we provide a visualization of the difficulty of each item. The mean and median score for each item are plotted along with the mastery score, which is the score that the top 30% of scores lies at or above.

FIG. 5. Histogram of student scores on the CCMI (N = 462). The average score for students taking the CCMI is indicated (orange arrow): 49.0% ± 1.0%, as is the performance of first-year physics graduate students at CU Boulder (green arrow): 74.5% ± 3.4% (N = 5).

FIG. 6. [Color online] The mean (blue) and median (orange) of student performance on each question. The median score for question 2 is zero. In addition, we show the mastery score (gray square): the score that separates the top three deciles from the bottom seven deciles, that is, the score at or above which the top 30% of scores lie.

VI. DISCUSSION AND CONCLUSION

In summary, we have developed an assessment for Classical Mechanics/Mathematical Methods courses for which we have established validity and developed a reliable grading rubric. Scores on the CCMI correlate well with other measures of student understanding (i.e., in-class exams), and internal measures of validity, reliability, and discrimination are well within the acceptable range for such an assessment. While it may appear that the instrument is quite specialized, use by and feedback from faculty at other institutions have shaped the assessment to cover a broad range of courses, from those quite similar to CM 1 (e.g., a two-semester classical mechanics sequence) to those quite compressed compared to CM 1 (e.g., a one-semester course on classical mechanics that surveys all common topics, including Lagrangian and Hamiltonian dynamics and the orbit equation). That feedback from faculty informed both the design and the use of the rubric developed to analyze student work on the CCMI. The design of the rubric for the CCMI separated the two traditional roles of assessment in physics education: (1) gaining a reliable understanding of student performance on specific topics, and (2) identifying persistent student difficulties. 14 The former role was presented in this paper as the grading rubric, which demonstrated reliability even when used by an untrained grader. A rubric to address the latter is in development and will be the subject of future work. The CCMI was designed to serve a variety of purposes. Most simply, it is an independent measure of student understanding after instruction in a classical mechanics course. Student performance on specific topics, as well as performance across the instrument, can serve as a secondary and standardized measure of student understanding after a classical mechanics course. These measures can be used by faculty to improve different aspects of their instruction as they see fit. Most faculty who have used the CCMI have used it for this purpose. Faculty have reviewed their score reports to identify strengths and weaknesses in their instruction based on their interpretation of students' scores, as well as to provide direct feedback to their students.
At a slightly higher scale, the CCMI may serve as a tool for departments looking to assess their physics program. It is becoming increasingly incumbent upon physics departments to demonstrate some form of independent assessment, and the CCMI (along with other standardized instruments) can help serve this purpose. Unlike course final exams, the CCMI is a standardized instrument, which invites comparison over time, between curricula, and across institutions. As such, student performance on the CCMI could be part of a more comprehensive departmental assessment. From a cultural perspective, the CCMI offers opportunities for new (and seasoned) faculty to push on norms for teaching evaluation in their own tenure and promotion cases. Faculty teaching classical mechanics courses can demonstrate their commitment to quality instruction by including student performance on CCMI in their teaching portfolios. These kinds of independent assessments are critical to understanding how student learning is being affected by instruction beyond the typical collection of course syllabi and student responses to end-of-course evaluations. While we have developed a valid and reliable assessment for classical mechanics that can serve a number of purposes, we have accepted certain limiting factors in our design. Given the constraints of administration (i.e., a 50 minute lecture period), the content coverage of the CCMI is limited (Table I). Not every instructor will agree on which topics should appear on an assessment for classical mechanics -making it impossible to satisfy each instructor's needs. To address the issue of topical coverage, we drew from consensus learning goals 28 that were developed by traditional physics faculty. In designing the questions for the CCMI, we worked with these faculty to prioritize the learning goals and, thus, the topics that were evaluated on the CCMI. Furthermore, we collected feedback from instructors across the country to ensure that the CCMI meet most of their needs. It was in this work that two questions on the CCMI were designated optional as these topics were not covered to the degree they were at CU. In a sense, we have developed an assessment that serves as the "common denominator" for many implementations of classical mechanics. A second limitation is our focus on students' final answers for the grading rubric, which under-emphasizes the process by which the student obtained the answer, and, moreover, can make it difficult to judge the prevalence of specific student difficulties. The purpose of this answerfocused grading rubric was to streamline the process by which faculty can obtain information on student performance on the CCMI. For example, a significant challenge for the CUE has been to train new graders to reliably score student responses to the CUE, which informed our decision to simplify the process so that an untrained grader using the rubric could score student responses reliably and have confidence that they had done so (Figs. 3 & 4). Our current grading rubric has achieved this. To deal with this limitation, we are developing a rubric that helps categorize difficulties that manifest on the CCMI. 41 This rubric is being informed by research into student understanding of classical mechanics. 11,12 However, it is worth noting that there is still much that can be learned from scoring the CCMI as we have done: the most prevalent incorrect answers are represented in the grading rubric as partially correct answers (Fig. 2). 
In fact, our research into students' approaches to vector decomposition 12 was informed by results from grading the vector decomposition problem on the CCMI. Hence, some information about the prevalence of certain kinds of student difficulties is captured by the grading rubric. Wilcox et al. solved the problem of reliably scoring an independent assessment differently, by adapting the CUE to a multiple-choice version with a logical scoring system that could be offered online or with scantrons. 33 This work benefited from the large body of student responses to the CUE collected over the years. Now that we have completed the development of the CCMI and collected a similarly large body of student responses, we are exploring the possibility that the CCMI might be adapted into a multiple-choice, machine-gradeable format. Our primary goal for developing the CCMI was to provide a tool for instructors to recognize possible gaps between their instruction and what students learn from that instruction. Our aim is to help instructors adapt their teaching to align more with their own goals. As it exists presently, the CCMI can serve (and has already served) that purpose for a number of classical mechanics instructors.

TABLE I (excerpt of question descriptions): Given the expression a1ẍ + a2ẋ + a3x = 0 and a corresponding graph for the motion of a mass on a spring, identify the units of a1 and a3, and re-sketch the solution if a3 is smaller. Given a description of the position of, and the forces on, a particle confined to move between two objects that attract it, write down a differential equation that describes the position of the particle as a function of time.
2016-06-10T12:26:17.000Z
2016-06-10T00:00:00.000
{ "year": 2016, "sha1": "69bef64a232ec3059aead9cccba10e6c5a3af2f6", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevPhysEducRes.13.010118", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "69bef64a232ec3059aead9cccba10e6c5a3af2f6", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
53645108
pes2o/s2orc
v3-fos-license
Interacting quantum walkers: Two-body bosonic and fermionic bound states We investigate the dynamics of bound states of two interacting particles, either bosons or fermions, performing a continuous-time quantum walk on a one-dimensional lattice. We consider the situation where the distance between both particles has a hard bound, and the richer situation where the particles are bound by a smooth confining potential. The main emphasis is on the velocity characterizing the ballistic spreading of these bound states, and on the structure of the asymptotic distribution profile of their center-of-mass coordinate. The latter profile generically exhibits many internal fronts. Introduction Quantum walks have witnessed an upsurge of interest in parallel with the developments of quantum algorithms and quantum information (see [1,2,3] for reviews). Two different types of quantum-mechanical analogues of classical random walks have been investigated. Discrete-time quantum walkers [4,5,6,7] possess, besides their spatial position, a finite-dimensional internal degree of freedom, referred to as a quantum coin. Both spatial and internal degrees of freedom jointly undergo a unitary dynamics. Continuous-time quantum walkers [8,9,10] have no internal degree of freedom. Their dynamics is governed by some hopping operator on the underlying structure. Despite these differences, continuous-time quantum walks can be viewed as a limit of discrete-time quantum walks [11]. Both types of quantum walks exhibit many similar properties. Their main characteristic feature is a fast ballistic spreading. The typical distance traveled by a quantum walker grows linearly in time, as opposed to the diffusive spreading of a classical random walker. If two or more quantum walkers are simultaneously present, the combined effects of interactions, quantum statistics and entanglement give rise to novel features which have no classical counterpart. The Anderson localization of two interacting quantum particles in a random potential has attracted much attention [12,13,14]. More recently, dynamical features of the quantum walks performed by two or more entangled or interacting particles have been the subject of numerous theoretical studies (see e.g. [15,16,17,18,19,20,21,22,23,24,25]). Several experimental groups have also studied the quantum walk of entangled pairs of magnons [26] and of photons in various integrated photonics devices [27,28,29,30]. Here we investigate the continuous-time quantum walk of bound states of two identical bosonic or fermionic particles. The main emphasis is on the distribution profile in the center-of-mass coordinate of the bound states, including the dependence of the ballistic spreading velocity on model parameters, and the generic presence of many internal ballistic fronts. The setup of this paper is as follows. In section 2 we revisit in a pedagogical way the continuous-time quantum walk of a single particle. We analyze the distribution of the position of the particle, emphasizing its dependence on the initial quantum state. We also show that allowing the particle to hop to second neighbors may yield a novel effect, namely the appearance of internal ballistic fronts in the position distribution, besides the usual extremal fronts. We then address similar questions in the more challenging situation of the quantum walk performed by bound states of two identical particles. 
The same formalism allows one to deal with bosonic and fermionic bound states, as they are respectively described by even and odd functions of the relative coordinate between both particles. We consider bound states generated either by imposing a hard bound on the distance between both particles (section 3) or by a smooth confining potential (section 4). In both situations the emphasis is on the distribution of the center-of-mass coordinate and on the presence of internal ballistic fronts. Section 5 contains a discussion of our findings. Quantum walk of a single particle This section serves as a self-contained pedagogical introduction to the main concepts emphasized in the rest of the paper. We consider the continuous-time quantum walk performed by a single particle on a one-dimensional lattice. Most of the features underlined in this section have been studied, or at least mentioned, in several earlier works in the context of discrete-time quantum walks [20,31,32,33,34,35,36,37]. One of the purposes of this section is to demonstrate that the analysis of the continuoustime quantum walk problem is much simpler, and can therefore be worked out in a more systematic fashion. More specifically, we first revisit the simple quantum walk where the particle hops to nearest neighbors only, focusing our attention onto the asymptotic distribution of the walker, including its dependence on the initial state. We then turn to a generalized quantum walk, where the particle hops to further neighbors as well. We show that allowing hopping to second neighbors gives rise to internal fronts in the distribution profile. Hopping to second neighbors is known to have drastic physical consequences in many other situations. One celebrated example is graphene (see [38,39]), where hopping to second neighbors breaks the chiral symmetry between both sublattices. Simple quantum walk The framework of the simple quantum walk is the usual one of the tight-binding approximation, where the particle hops to nearest neighbors only. Throughout this paper, we use dimensionless units. The wavefunction ψ n (t) = n|ψ(t) of the particle at site n at time t obeys i dψ n (t) dt = ψ n+1 (t) + ψ n−1 (t). (2.1) The dispersion law between wavevector (momentum) q and frequency (energy) ω and the corresponding group velocity v therefore read where the prime denotes a derivative. Initial state localized at the origin Suppose that at time t = 0 the particle is localized on a single site, taken as the origin of the lattice: ψ n (0) = δ n,0 . In Fourier space, ψ(q, 0) = 1, hence ψ(q, t) = e −iω(q)t , and so where the J n are the Bessel functions. Figure 1 shows the probabilities |ψ n (t)| 2 = (J n (2t)) 2 against position n at time t = 50. These probabilities exhibit various regimes of behavior at large n and t. They take appreciable values in the allowed region which spreads ballistically with the maximal velocity (2.4) Furthermore, they exhibit sharp fronts near n = ±2t, and they decay exponentially in the forbidden region beyond these fronts. These asymptotic results are classics of the theory of Bessel functions [40,41], whose physical meaning has been underlined in [9]. These results can be readily obtained the saddle-point method, which will be used later in other situations. • Allowed region (|n| < 2t). In this region, the oscillatory behavior of the wavefunction at long times can be derived as follows. 
Setting n = −2t sin q ⋆ (|q ⋆ | < π/2) , (2.5) the integral in (2.3) is dominated by two equivalent saddle points at q = q ⋆ and q = π − q ⋆ (modulo 2π). We thus recover the well-known asymptotic form of Bessel functions: J n (2t) = i n ψ n (t) ≈ cos(2t cos q ⋆ − n(π/2 + q ⋆ ) + π/4) √ πt cos q ⋆ . (2.6) By averaging out the oscillations in the probabilities |ψ n (t)| 2 , one arrives [9] at a smooth distribution f (v) for the ratio v = n/t in the long-time limit, i.e., The above result can be alternatively obtained by folding the uniform distribution on the Brillouin zone [−π, π] by the dispersion curve of the group velocity: v = −2 sin q. • Transition region (|n| ≈ 2t). The distribution (2.7) becomes singular as the endpoints of the allowed region are approached (n → ±2t). The vicinity of these endpoints corresponds to the transition region in the theory of Bessel functions. Setting |n| = 2t + zt 1/3 , we have J n (2t) = i n ψ n (t) ≈ t −1/3 Ai(z), (2.8) where Ai(z) is the Airy function. The probabilities |ψ n (t)| 2 therefore exhibit sharp ballistic fronts [9], whose height and width respectively scale as t −2/3 and t 1/3 . • Forbidden region (|n| > 2t). In this region, the exponential fall-off of the wavefunction at long times can be derived by evaluating again the integral in (2.3) by the saddle-point method. Setting |n| = 2t cosh θ with θ > 0, we obtain Arbitrary initial state For an arbitrary initial state ψ n (0), we have Whenever the initial state is reasonably localized, in the sense that ψ n (0) decays fast enough with distance |n|, the time-dependent wavefunction exhibits qualitatively the regimes of behavior described above in the allowed region, ballistic fronts, and forbidden region. Let us analyze more carefully the allowed region (|n| < 2t). With the definition (2.5), the integral in (2.10) is now dominated by two inequivalent saddle points at q = q ⋆ and q = π − q ⋆ (modulo 2π). We thus arrive at Averaging out the oscillations in the above expression yields the following formula for the locally coarse-grained probabilities: (2.12) The limit distribution f (v) of the ratio v = n/t is obtained by folding the distribution on the Brillouin zone with density | ψ(q)| 2 /(2π) by the dispersion curve v = −2 sin q. This line of thought has been used in [35,36], and more thoroughly in [37], for discretetime walks. For definiteness, let us consider the case where the initial wavefunction is spread over three consecutive sites, i.e., ψ n (0) = aδ n,1 + bδ n,0 + cδ n,−1 (|a| 2 + |b| 2 + |c| 2 = 1). (2.13) We have ψ(q, 0) = a e −iq + b + c e iq (2.14) and The asymptotic result (2.12) reads explicitly with the definition (2.5), and where ‡ A = Re(ac), B = Im(b(a − c)). (2.17) The limit distribution f (v) of the ratio v = n/t reads therefore (2.18) ‡ The bar denotes complex conjugation. Analogous expressions have been derived in [5,6,35,36,37] for various models of discrete-time quantum walks. The distribution (2.18) generically has inverse-squareroot singularities at the endpoints of the allowed region (v = ±2), corresponding to the ballistic fronts. The initial state only affects the numerator (see (2.7)). This lack of universality, namely the ever-lasting memory of the initial state, is a genuine quantum feature (it is absent for the classical walker with sufficiently localized initial state). 
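The Bessel-function solution and the coarse-grained velocity distribution discussed above are easy to check numerically. The sketch below (an illustrative Python/NumPy/SciPy script, not part of the original paper) builds psi_n(t) = (-i)^n J_n(2t), verifies normalization and the confinement of the probability to |n| <= 2t, and compares the folded Brillouin-zone measure with the arcsine-type density f(v) = 1/(pi*sqrt(4 - v^2)) implied by v = -2 sin q with q uniform; the closed form is spelled out here because the extracted text elides it.

import numpy as np
from scipy.special import jv   # Bessel functions J_n

# --- wavefunction of the simple walk launched at the origin ---------------
t = 50.0
n = np.arange(-200, 201)
psi = (-1j) ** n * jv(n, 2 * t)            # psi_n(t) = (-i)^n J_n(2t)
prob = np.abs(psi) ** 2
print("normalization           :", prob.sum())                      # ~ 1
print("weight beyond the fronts:", prob[np.abs(n) > 2 * t].sum())   # exponentially small

# --- limit distribution of v = n/t -----------------------------------------
# fold the uniform Brillouin-zone measure by the group velocity v(q) = -2 sin q
rng = np.random.default_rng(0)
v = -2.0 * np.sin(rng.uniform(-np.pi, np.pi, 10 ** 6))
hist, edges = np.histogram(v, bins=80, range=(-2, 2), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
f = 1.0 / (np.pi * np.sqrt(4.0 - centers ** 2))     # f(v) = 1/(pi sqrt(4 - v^2))
print("max relative deviation  :", np.max(np.abs(hist[5:-5] / f[5:-5] - 1.0)))
print("<v^2> from samples      :", (v ** 2).mean(), "(exact value 2)")

The second-moment check matches the long-time growth of the exact position moments, ⟨n^2(t)⟩ ≈ 2t^2, quoted just below.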
The first moments of the distribution (2.18), match the asymptotic growth at long times of the exact expressions of the position moments [9], namely The inverse-square-root singularities at the endpoints of the allowed region (v = ±2) are generic but not entirely universal. There are indeed special initial states such that either one or both endpoint singularities are absent from the limit distribution (2.18). Figure 2 shows the probability profiles |ψ n (t)| 2 at time t = 50 in the following two special cases of initial states. This kind of non-generic behavior of the distribution profile has also been observed for discrete-time quantum walks, both with the usual two-dimensional internal state [32,20] and with a more exotic three-dimensional quantum coin which may lead to a localization phenomenon, in the sense that part of the probability stays forever at a finite distance from the particle's starting point [33,37]. Reference [37] also contains analytical predictions for f (v), somehow similar to (2.23) and (2.25), in many special situations. Generalized quantum walk Novel phenomena occur in generalized quantum walks, where the particle may hop to further neighbors. Let us consider the minimal generalization where hops are limited to first (nearest) and second (next-nearest) neighbors, with respective amplitudes 1 and g. The wavefunction of the particle at site n at time t now obeys i dψ n (t) dt = ψ n+1 (t) + ψ n−1 (t) + g (ψ n+2 (t) + ψ n−2 (t)) . (2.26) We have therefore ω(q) = 2(cos q + g cos 2q), ω ′ (q) = v(q) = −2(sin q + 2g sin 2q), ω ′′ (q) = − 2(cos q + 4g cos 2q). (2.27) The above dispersion law ω(q) is invariant under the transformation g → −g, q → q+π, ω → −ω. We may therefore restrict the analysis to the domain g ≥ 0, without any loss of generality. Suppose the particle is launched from the origin: ψ n (0) = δ n,0 . Thus ψ(q, 0) = 1, and so (2.28) Some observables have a smooth dependence on the amplitude g. This is especially the case for the position moments (see (2.20)), which read n 2 (t) = 2(1 + 4g 2 )t 2 , n 4 (t) = 2(1 + 16g 2 )t 2 + 6(1 + 16g 2 + 16g 4 )t 4 , (2.29) and so on (odd moments vanish identically by symmetry). The presence of hopping to second neighbors however introduces a novel qualitative feature. Let us for the time being adopt a general standpoint, and consider the wavefunction (2.28) in the regime of long times. Evaluating the integral by the saddle-point method, as we did in section 2.1, we obtain where the sum runs over the solutions q of the saddle-point equation By averaging out the oscillations in the probabilities |ψ n (t)| 2 , we again predict that v has a smooth distribution in the long-time limit. In full generality, the above distribution is again obtained by folding the uniform distribution on the Brillouin zone by the dispersion map of the group velocity v = ω ′ (q), in the sense that dq/(2π) = f (v)dv holds formally. The distribution (2.32) has generically inverse-square-root singularities at all extremal values of v, in correspondence with wavevectors q so that dv/dq = ω ′′ (q) vanishes. It is always singular at the endpoints of the allowed region (v = ±V ), as the maximal velocity V is necessarily an extremal value. The distribution (2.32) may however also have internal singularities within the allowed region, which were absent in the simple quantum walk considered in section 2.1. Internal ballistic fronts of that kind have been observed in two generalizations of the usual discrete-time quantum walk [31,34]. 
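Before turning to those discrete-time examples, the exact moments (2.29) provide a convenient test of any numerical implementation of the generalized walk. The sketch below (illustrative) builds psi_n(t) on a large ring directly from the dispersion relation (2.27) by a fast Fourier transform and compares the second moment with 2(1 + 4g^2)t^2.

import numpy as np

def walker_on_ring(t, g, L=4096):
    # psi_n(t) for a particle launched at the origin, obtained from
    # omega(q) = 2(cos q + g cos 2q) on a ring of L sites (L >> 2*V*t)
    q = 2 * np.pi * np.fft.fftfreq(L)
    omega = 2 * (np.cos(q) + g * np.cos(2 * q))
    return np.fft.ifft(np.exp(-1j * omega * t))     # inverse FFT gives psi_n

t, g, L = 50.0, 0.5, 4096
psi = walker_on_ring(t, g, L)
n = np.arange(L)
n = np.where(n <= L // 2, n, n - L)                 # signed positions on the ring
n2 = float(np.sum(n ** 2 * np.abs(psi) ** 2))
print("numerical <n^2>        :", n2)
print("predicted 2(1+4g^2)t^2 :", 2 * (1 + 4 * g ** 2) * t ** 2)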
Reference [31] investigates a quantum walk subjected to M independent quantum coins acting cyclically. If M is large, the dynamics exhibits a crossover between classical random walk at short times (t ≪ M ) and quantum walk at long times (t ≫ M ). In the latter regime the distribution of the quantum particle exhibits an array of equally spaced ballistic peaks, whose number grows as M/2, the distance between any two consecutive peaks being ∆v = √ 2/M . The model studied in [34] is closer to ours in its spirit. It consists of a discrete-time quantum walk where hops up to distance j are allowed, whose dynamics is rigidly dictated by (2j + 1)-dimensional Wigner rotation matrices. Here too, the distribution profile exhibits 2j + 1 equally spaced ballistic peaks. In order to pursue the analysis, let us specialize to the continuous-time quantum walk with hops to first and second neighbors only (see (2.26), (2.27)). This minimal example is already too complex to allow one to turn (2.32) to an explicit expression. The location of the ballistic peaks can nevertheless be predicted as follows. The second derivative ω ′′ (q) vanishes for The corresponding values V ± of v are given by The allowed region always spreads ballistically with the maximal velocity V + . The smaller velocity V − may also play a role, depending on the strength of g: • If the amplitude g is small enough (g < g c = 1/4), the situation is qualitatively similar to that of the simple quantum walk, studied in section 2.1. We have indeed cos q − < −1 and 0 < cos q + < 1, so that only q + matters, and so f (v) is only singular at the endpoints ±V + of the allowed region. • If the amplitude g is large enough (g > g c = 1/4), both q + and q − matter. As a consequence, the distribution f (v) has two singularities at the endpoints ±V + and two internal singularities at the smaller values ±V − . The wavefunction exhibits four ballistic fronts: two extremal ones, propagating at the maximal velocity (±V + ), and two internal ones, propagating at a smaller velocity (±V − ). • In the borderline case (g = g c = 1/4), we have cos q − = −1, i.e., q − = π and V − = 0. The dispersion curve exhibits an unusual quartic behavior near q = π: This anomalous dispersion right at g = g c has two consequences. First, the distribution f (v) has a singularity at v = 0, of the form This central singularity with exponent −2/3 is stronger than the generic ones, whose exponent is −1/2. Second, the wavefunction at the origin exhibits an unusually slow fall-off: This t −1/4 decay is slower than the generic t −1/2 decay exhibited e.g. by the Bessel function J 0 (2t) (see (2.3)). Figure 3 shows the dependence of the front velocities against g for g ≥ 0. The larger velocities ±V + of the extremal fronts (blue curves) describe the endpoints of the allowed region for all g. The smaller velocities ±V − of the internal fronts (red curves) exist for g > g c = 1/4 only. At g = g c we have V + = 3 √ 3/2, while V − takes off as (32 √ 6/9)(g − g c ) 3/2 . At large g, both velocities grow linearly in g with the same slope, as V ± ≈ 4g ± √ 2. This non-trivial pattern of front velocities is richer than the periodic arrays of equally spaced peaks observed in [31,34]. Figure 4 shows the probabilities |ψ n (t)| 2 at time t = 50 for a particle launched at the origin and two values of g. For g = 1/2 (left), the probability profile exhibits four fronts. For g = g c = 1/4 (right), the probability profile exhibits three fronts. 
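The front velocities just described can be reproduced directly from the dispersion relation (2.27). In the sketch below the condition omega''(q) = 0 is reduced to a quadratic in c = cos q, namely 8g c^2 + c - 4g = 0 (my own rearrangement using cos 2q = 2c^2 - 1, not an equation quoted from the paper); its discriminant 1 + 128 g^2 is always positive, and the physical roots |c| <= 1 give the front velocities, which can be checked against the values quoted in the text, V+ = 3*sqrt(3)/2 at g = 1/4 and V± ≈ 4g ± sqrt(2) at large g.

import numpy as np

def front_velocities(g):
    # roots of omega''(q) = -2(cos q + 4 g cos 2q) = 0, written as a
    # quadratic in c = cos q; both roots are real since 1 + 128 g^2 > 0
    c = np.real(np.roots([8 * g, 1.0, -4 * g]))
    c = c[np.abs(c) <= 1.0]                          # keep physical solutions only
    q = np.arccos(c)
    return np.sort(np.abs(-2 * (np.sin(q) + 2 * g * np.sin(2 * q))))   # |omega'(q)|

print(front_velocities(0.10))   # one front only (g < 1/4): the extremal velocity
print(front_velocities(0.25))   # [0, 3*sqrt(3)/2]: the central anomalous front appears
print(front_velocities(0.50))   # internal V- and extremal V+ (four fronts in all)
g = 10.0
print(front_velocities(g), "vs", 4 * g - np.sqrt(2), 4 * g + np.sqrt(2))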
The central one at the origin corresponds to the anomalous singularity (2.36). Two-body bound states: hard bound on distance We now turn to our main subject: the quantum walk performed by a bound state of two identical particles propagating coherently along a one-dimensional lattice. The main emphasis will be on the asymptotic distribution of the center-of-mass coordinate of the bound state, on the velocity characterizing its ballistic spreading and on the structure of the distribution profile, which generically exhibits many internal fronts. To the best of our knowledge, these matters have only been addressed so far in two papers [21,25]. In order not to interrupt the lengthy developments of sections 3 and 4, we postpone the discussion of those earlier works to section 5. Here again, considering continuous-time walks will allow for a more thorough and systematic investigation of the problem. Bound states obtained by imposing a hard bound ℓ on the distance between both particles are dealt with in this section, whereas those generated by a smooth confining potential will be considered in section 4. The same formalism will allow one to deal with bosonic and fermionic bound states, as they are respectively described by even and odd functions of the relative coordinate m between both particles. Generalities We denote by n 1 = n + m and n 2 = n the abscissas of two identical particles on the lattice, where m = n 1 − n 2 is the relative coordinate. We impose a hard bound ℓ on the distance |m| between both particles, and so m is restricted to the 2ℓ + 1 values m = −ℓ, . . . , ℓ. Figure 5 shows the positions of the particles in the (n 1 , n 2 ) plane for ℓ = 2. Full symbols denote the allowed configurations of the particles. Links between symbols show the allowed hops of any of the particles to a neighboring site. obeys the equation with boundary conditions ψ n,±(ℓ+1) = 0. Bosonic and fermionic spectra A basis of plane-wave solutions to (3.2) reads where the momenta p and q are respectively conjugate to the relative coordinate m = n 1 − n 2 and to the center-of-mass coordinate The resulting dispersion relation has a product form [25]: (3.5) • Bosonic states are described by even functions under the exchange of n 1 and n 2 ; they are obtained by adding the plane waves (3.3) for p and −p: The relative momentum p is quantized by the condition cos(ℓ + 1)p = 0. It therefore takes the ℓ + 1 values and so The bosonic dispersion curve thus consists of ℓ + 1 branches, with group velocities v (B) (3.9) • Fermionic states are described by odd functions of m, obtained by subtracting the plane waves (3.3) for p and −p: As fermionic particles cannot cross each other in one dimension, the range of the fermionic wavefunctions (3.10) can be restricted to the sector n 1 > n 2 , i.e., m = 1, . . . , ℓ. The relative momentum p is quantized by the condition sin(ℓ + 1)p = 0. It therefore takes the ℓ values and so The fermionic dispersion curve thus consists of ℓ branches, with group velocities v (F) (3.13) Figure 6 shows the bosonic and fermionic spectra for a maximal distance ℓ = 6. The 7 bosonic frequencies ω (B) k (q) (black) and the 6 fermionic frequencies ω (F) k (q) (red) are plotted against q/π. Ballistic spreading Let us now investigate the asymptotic behavior of the wavefunction ψ n,m (t), starting from an arbitrary initial state, where both particles are located in the vicinity of the origin. The detailed analysis of the one-body problem performed in section 2 allows one to draw the following picture. 
The various components ψ n,m (t) of the wavefunction spread ballistically, i.e., their extension in the center-of-mass coordinate n cm ≈ n grows asymptotically linearly in time t and symmetrically with respect to the origin. The group velocity, corresponding to the boundaries of the Brillouin zone. Setting q = ±π in (3.9), (3.13), we get (3.14) In particular, all the components of the wavefunction take appreciable values only in the allowed zone defined by |n| < V t, where the maximal velocities V (B) = 2 cos π 2(ℓ + 1) , respectively correspond to setting k = 0 and k = 1 in (3.14). Both maximal velocities approach the free value V = 2 in the limit of a large confining size (ℓ ≫ 1), with two different correction amplitudes, i.e., For a generic initial state localized in the vicinity of the origin, the various components ψ n,m (t) of the wavefunction will exhibit, besides two extremal ballistic fronts at n = ±V (B) t or n = ±V (F) t, ℓ − 1 internal fronts in the bosonic case (for ℓ ≥ 2) and ℓ − 2 internal fronts in the fermionic case (for ℓ ≥ 3). Continuum limit and corrections Our bound-state problem owes its non-triviality and its richness to the fact that particles live on a lattice. In the continuum limit, the dynamics of the center-of-mass coordinate and of the relative coordinate are exactly decoupled, as a consequence of Galilean invariance. As soon as the line is discretized into a lattice, this Galilean invariance is broken. This phenomenon has already been underlined for discrete-time interacting quantum walkers [21]. Its consequences in the context of diluted Fermi gases and of the BCS-BEC crossover have also been discussed recently [42]. In the present situation, too, the dynamics of the center-of-mass coordinate n cm and of the relative coordinate m are expected to decouple as the continuum limit is approached, where Galilean invariance should be restored. It is worth investigating in a quantitative way how this decoupling takes place, starting from the lattice dispersion curves (3.8) and (3.12). For that purpose, we introduce the lattice spacing a, and define the following dimensionful quantities, denoted by capital letters. The half-width of the potential well is conveniently defined as L = (ℓ + 1)a, while the conjugate momentum to the center-of-mass coordinate is Q = q/a, and finally the continuum energy reads To leading order as a → 0 (i.e., in the vicinity of the center of the Brillouin zone), both dispersion relations (3.8) and (3.12) yield where the ε k are the energy levels of a free particle in a potential well of width 2L in the appropriate sectors, i.e., The expression (3.18) conforms to what we expect from the continuum theory: the total energy E k of the compound system is the sum of the kinetic energy Q 2 /4 of the center-of-mass motion and of the energy ε k of a bound state in the relative coordinate. The corrections to the leading-order result (3.18) can be derived by recasting (3.8) and (3.12) in terms of the dimensionful quantities introduced above, and expanding in powers of a. The first correction thus obtained, 20) already demonstrates that the coupling of both degrees of freedom by the lattice structure affects the energy spectrum in a non-trivial way. A case study: fermionic state with maximal distance ℓ = 4 To close this section, let us investigate in detail the dynamics of a fermionic bound state with maximal distance ℓ = 4. This is the smallest ℓ for which generic dynamical behavior is observed. 
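The quantized relative momenta and the resulting front velocities are simple enough to tabulate directly. The sketch below (illustrative) uses the momenta p_k = (2k+1)π/(2(ℓ+1)) for bosons and p_k = kπ/(ℓ+1) for fermions that follow from the quantization conditions above, evaluates the branch velocities 2 cos p_k obtained by setting q = ±π in the group velocities, and checks that the deviations 2 − V(B) and 2 − V(F) shrink as 1/(ℓ+1)^2 with an amplitude ratio approaching 4, consistent with the hard-bound limit of the universal ratio quoted in section 4. The last two lines also evaluate, for the ℓ = 4 case study that follows, the stationary internal distribution obtained from the open-chain eigenvectors sin(kπm/5) (my own evaluation of the Appendix-A-type double sum, not numbers copied from the paper); the resulting mean internal size is 5/2.

import numpy as np

def hard_bound_velocities(ell):
    # front velocities 2*cos(p_k): bosonic momenta from cos((ell+1)p) = 0,
    # fermionic momenta from sin((ell+1)p) = 0
    p_b = (2 * np.arange(ell + 1) + 1) * np.pi / (2 * (ell + 1))
    p_f = np.arange(1, ell + 1) * np.pi / (ell + 1)
    return 2 * np.cos(p_b), 2 * np.cos(p_f)

for ell in (2, 4, 6, 20):
    vb, vf = hard_bound_velocities(ell)
    VB, VF = vb.max(), vf.max()
    print(f"ell={ell:2d}  V_B={VB:.4f}  V_F={VF:.4f}  "
          f"(2-V_F)/(2-V_B)={(2 - VF) / (2 - VB):.3f}")    # ratio -> 4 for large ell

print("ell=4 fermionic velocities:", np.round(hard_bound_velocities(4)[1], 4))
# stationary internal distribution for a start at relative coordinate m = 1,
# using the normalized eigenvectors <m|k> = sqrt(2/5) sin(k*pi*m/5) of the
# open 4-site chain in the relative coordinate
m = np.arange(1, 5)
U = np.sqrt(2 / 5) * np.sin(np.outer(np.arange(1, 5), m) * np.pi / 5)
P = (U[:, 0] ** 2) @ (U ** 2)          # sum_k |<1|k>|^2 |<m|k>|^2
print("stationary P_m:", np.round(P, 4), " mean internal size:", float(m @ P))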
§ The front velocities (3.14) read where τ is the golden mean and τ its reciprocal. Suppose the two fermions are launched from sites 0 and 1 at time t = 0. Then ψ n,m (0) = δ n,0 δ m,1 and the initial mean value of the center-of-mass coordinate is § For ℓ = 3, there exists a central anomalous front with zero speed. n cm (0) = ψ(0)|n cm |ψ(0) = 1/2. We expect there will be two extremal ballistic fronts near n = ±τ t, and two internal ones near n = ± τ t. and so ψ 1 (q, t) = 1 2 √ 5 τ e iγτ t + τ e −iγτ t + τ e iγ τ t + τ e −iγ τ t . We thus obtain (3.29) Figure 7 shows the four probability profiles |ψ n,m (t)| 2 thus obtained at time t = 50. Abscissas are shifted from n to n + (m − 1)/2 = n cm − n cm (0) , in such a way that the plotted profiles are exactly symmetric. The four profiles exhibit the same global features, although they differ in their detailed structure. As predicted, they exhibit the same ballistic fronts, two external ones at n ≈ ±τ t and two internal ones at n ≈ ± τ t (see (3.21)). The probabilities are larger within the internal fronts, and considerably smaller in the wings of the allowed zone, i.e., between the internal and the external fronts. This phenomenon is quite generic; it could already be observed in the left panel of figure 4. Let us now turn to the internal structure of the fermionic bound state. The distance between both particles takes the value m = 1, . . . , 4 with probability P m (t) = n |ψ n,m (t)| 2 . (3.30) These probabilities can be evaluated from (3.29) using identities which can be derived from the integral representations (3.28). Quadratic identities of this kind have their roots in the connection between special functions and representation theory [43]. We thus obtain These probabilities sum up to unity, as should be. They are even functions of t whose power series have rational coefficients. At short times we have In the long-time regime, these probabilities reach the stationary values These numbers agree with the result (A.14) derived in Appendix A. In the present situation, it is indeed legitimate to study the stationary properties of the internal state per se, without referring to the ballistic dynamics of the center-of-mass coordinate, because the compound system has a basis of factorized eigenstates (3.3). This feature is by no means general. Figure 8 shows a plot of the probabilities P m (t) against time t. The maximal time t = 50 corresponds to the profiles shown in figure 7. The stationary values (3.34) (arrows) are reached after a complex pattern of damped oscillations. The envelope of this transient oscillatory behavior falls off very slowly as t −1/2 , while the oscillations themselves follow a quasiperiodic pattern, characterized by the two incommensurate frequencies 2τ and 2 τ . All the frequencies entering (3.32) are indeed integer linear combinations of the latter frequencies. In a generic situation, there will be as many incommensurate frequencies as there are positive ballistic velocities V (3.14)). The mean internal size of the bound state, can be evaluated from (3.32) to read This quantity is plotted in figure 9. It starts from D(0) = 1 and reaches the stationary value D = 5/2, again after a complex pattern of quasiperiodic oscillations dying off as t −1/2 . Generalities We now turn to the quantum walk performed by a bosonic or fermionic bound state of two identical particles generated by a smooth confining potential W m . 
The latter is assumed to be an even function of the relative coordinate m between both particles, such that W m → +∞ at large distances (|m| → ∞). With the notations of section 3, the time-dependent wavefunction ψ n,m (t) obeys the equation i dψ n,m (t) dt = W m ψ n,m (t) + ψ n,m−1 (t) + ψ n+1,m−1 (t) + ψ n−1,m+1 (t) + ψ n,m+1 (t). where the momentum q is conjugate to the center-of-mass coordinate (3.4). The internal wavefunction φ m obeys with the same dispersive (i.e., q-dependent) hopping amplitude as before (see (3.23)). We shall also use the shorthand notation for the staggered wavefunction, which obeys i.e., (4.3) with the sign of γ reversed. The above formalism applies to an arbitrary confining potential. Bosonic and fermionic bound states are described by wavefunctions φ m which are respectively even and odd functions of m, such that φ m → 0 at large distances. Such bound-state wavefunctions only exist for discrete sequences of dispersive frequencies ω Linear confining potential Our first example is the linear confining potential where the amplitude g is a positive constant. The regime of most physical interest corresponds to small g, so that bound states have a large size and potentially a rich internal structure. Figure 10 shows the bosonic and fermionic spectra for g = 0.4. These spectra exhibit two very distinct regions. Within the band delimited by the blue curves (ω = ±2γ = ±4 cos(q/2)), bosonic and fermionic branches are well separated from each other. They alternate and are strongly dispersive. Above the band, the spectrum consists of an infinite array of approximately equally spaced and non-dispersive branches. These branches appear as red, as bosonic and fermionic branches are superimposed to a very high accuracy. In order to analyze the above observations, we use (4.6) for positive values of the relative coordinate m, i.e., Let us first forget about the constraint m ≥ 0 and extend (4.8) to all values of m. We are thus facing a tight-binding equation for a charged particle in a uniform electric field, the amplitude g giving the strength of the field in reduced units. The corresponding spectrum is a Wannier-Stark ladder [44,45]: it consists of the infinite sequence of equally spaced frequencies, where k runs over the integers. The Wannier-Stark levels are non-dispersive, as (4.9) holds irrespective of the hopping amplitude γ. The normalized eigenfunction corresponding to ω k reads where the J m are the Bessel functions. This eigenfunction is strongly localized around m = k. We have in particular, for large positive k, This estimate shows that the Wannier-Stark eigenfunctions hardly feel the boundary condition on (4.8) at m = 0, and therefore remain essentially unperturbed, as soon as the frequency ω k = gk exceeds a few times the bandwidth γ. In other words, only finitely many lowest Wannier-Stark states, in a number of the order of γ/g, are affected by the boundary condition which distinguishes between fermions and bosons. This explains the main observations made on figure 10. A more quantitative analysis of the problem goes as follows. The general solution to (4.8) falling off as m → +∞ reads The leading correction to the Wannier-Stark spectrum (4.9) at large positive k can be readily derived from (4.14). Skipping details, we obtain a symmetric splitting of the form with where χ 0 is the Wannier-Stark wavefunction at the origin (see (4.11)). 
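The structure of this spectrum is easy to reproduce by truncating the relative-coordinate problem to a finite window and diagonalizing. The sketch below is illustrative: the reduced eigenproblem ω χ_m = g|m| χ_m + γ(χ_{m+1} + χ_{m−1}) with γ(q) = 2 cos(q/2) is my reading of (4.3) and (4.8), with γ fixed by the band edges ±4 cos(q/2) quoted above; the bosonic boundary condition χ_{−1} = χ_{+1} and the fermionic condition χ_0 = 0 are the ones stated in the text. Well above the band, the even and odd levels come out nearly degenerate and spaced by roughly g, while the lowest levels are clearly split.

import numpy as np

def sector_levels(g, gamma, parity, M=400):
    # eigenvalues of omega*chi_m = g*|m|*chi_m + gamma*(chi_{m+1} + chi_{m-1})
    # in the even ('B', bosonic) or odd ('F', fermionic) sector, truncated at m <= M
    if parity == "B":
        m = np.arange(0, M + 1)
        H = np.diag(g * m.astype(float))
        H += np.diag(gamma * np.ones(M), 1) + np.diag(gamma * np.ones(M), -1)
        # chi_{-1} = chi_{+1}: rescaling the m = 0 component by sqrt(2) makes the
        # folded matrix symmetric without changing its spectrum
        H[0, 1] = H[1, 0] = np.sqrt(2) * gamma
    else:   # odd sector: chi_0 = 0, so the chain simply starts at m = 1
        m = np.arange(1, M + 1)
        H = np.diag(g * m.astype(float))
        H += np.diag(gamma * np.ones(M - 1), 1) + np.diag(gamma * np.ones(M - 1), -1)
    return np.sort(np.linalg.eigvalsh(H))

g, gamma = 0.4, 2.0            # gamma = 2*cos(q/2) with q = 0
wB, wF = sector_levels(g, gamma, "B"), sector_levels(g, gamma, "F")
print("lowest levels    B:", np.round(wB[:3], 4), " F:", np.round(wF[:3], 4))
# well above the band edge 2*gamma both sectors collapse onto the
# Wannier-Stark ladder omega_k ~ g*k (spacing ~ g, negligible B/F splitting):
print("levels in (6, 8) B:", np.round(wB[(wB > 6) & (wB < 8)], 4))
print("levels in (6, 8) F:", np.round(wF[(wF > 6) & (wF < 8)], 4))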
Our next goal is to investigate the velocities characterizing the ballistic spreading of a bosonic or fermionic wavefunction in the center-of-mass coordinate n cm , for a generic initial state. These velocities are given by where primes denote derivatives, and maxima are taken over all branches of the spectrum and over all q. It is clear from figure 10 that these maxima are reached for the lowest branch of each spectrum. Furthermore, if the amplitude g is small, so that the bound states have a large internal size, we anticipate that the maximal velocities will be close to the free value 2 and correspond to momenta close to the zone boundary (q → π). To substantiate these expectations we note that whenever g is small, typical wavefunctions vary slowly with m, so one can employ a continuum description. Setting and expanding in (4.8) differences in terms of derivatives, we obtain Then, setting Equation (4.19) becomes the Airy equation whose solution decaying as x → +∞ is χ(x) = Ai(x), the Airy function. • Bosonic states obey χ 1 = χ −1 , hence dχ/dm = 0 for m = 0, and so x 0 = −λδ obeys Ai ′ (x 0 ) = 0. The lowest branch of the bosonic spectrum therefore reads where η 1 = 1.018792 . . . is the opposite of the first zero of Ai ′ (x). Setting q = π − ε, the lowest branch can be expanded as and is reached for • Fermionic states obey χ 0 = 0, and so x 0 = −λδ obeys Ai(x 0 ) = 0. The lowest branch of the fermionic spectrum therefore reads where ξ 1 = 2.338107 . . . is the opposite of the first zero of Ai(x). The maximal group velocity now reads and it is reached for The above scaling results valid in the g ≪ 1 regime can be alternatively derived by analyzing the exact quantization formulas (4.14) in the transition region (see (2.8)). We have in particular Figure 11. Plot of the bosonic and fermionic maximal velocities V (B) (black) and V (F) (red), characterizing ballistic spreading in a linear confining potential, against g 1/2 . Straight lines: scaling results (4.24) and (4.28) at small g. Quadratic confining potential Our second example is the quadratic confining potential where g is again a positive constant. Equation (4.6) reads The Fourier transform χ(p) therefore obeys g d 2 χ dp 2 = −(ω + 2γ cos p) χ. (4.34) The latter differential equation is known as the Mathieu equation [46]. The body of knowledge on the latter equation is however of little use for the present purpose. Hereafter we therefore use general techniques which could be applied to any confining potential. Figure 12 shows the bosonic and fermionic spectra for g = 0.1. These spectra are qualitatively similar to those shown in figure 10. The existence of two very distinct regions, with strongly dispersive modes within the band and weakly dispersive branches above the band, is indeed a common feature of all confining potentials. Let us first consider the weakly dispersive branches above the band. Right at γ = 0 (this corresponds to q = π, i.e., to the right end of figure 12), the eigenstates are strictly localized at specific sites: we have χ m = δ m,k and ω k = W k = gk 2 . If γ is small and/or k is large, the wavefunction χ k±1 at neighboring sites is proportional to the ratio γ/k. We thus obtain the more refined estimate High branches are therefore weakly dispersive, as their bandwidth scales as 1/(gk 2 ). With a linear confining potential, the Wannier-Stark states were not dispersive at all. In the whole region above the band, the splitting between even (bosonic) and odd (fermionic) frequencies can hardly be observed. 
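Before estimating that splitting, it is worth noting that the maximal spreading velocities themselves can be obtained by brute force, without the continuum (Airy or harmonic-oscillator) formulas: one scans the lowest branch ω₀(q) of the same reduced eigenproblem as above and takes V = max_q |dω₀/dq|. The sketch below (illustrative, shown for the discrete harmonic potential W_m = g m²) confirms that both velocities approach the free value 2 as g decreases, with V(B) > V(F).

import numpy as np

def lowest_branch(q, g, parity, alpha=2, M=80):
    # lowest eigenvalue of the relative-coordinate problem with W_m = g*|m|**alpha
    # and dispersive hopping gamma(q) = 2*cos(q/2), in the even or odd sector
    gamma = 2 * np.cos(q / 2)
    if parity == "B":
        m = np.arange(0, M + 1)
        H = np.diag(g * m.astype(float) ** alpha)
        H += np.diag(gamma * np.ones(M), 1) + np.diag(gamma * np.ones(M), -1)
        H[0, 1] = H[1, 0] = np.sqrt(2) * gamma      # symmetrized even-sector edge
    else:
        m = np.arange(1, M + 1)
        H = np.diag(g * m.astype(float) ** alpha)
        H += np.diag(gamma * np.ones(M - 1), 1) + np.diag(gamma * np.ones(M - 1), -1)
    return np.linalg.eigvalsh(H)[0]

def max_velocity(g, parity, alpha=2):
    q = np.linspace(0.0, np.pi, 201)
    w0 = np.array([lowest_branch(qi, g, parity, alpha) for qi in q])
    return np.max(np.abs(np.gradient(w0, q)))       # V = max_q |d omega_0 / dq|

for g in (0.4, 0.1, 0.025):
    print(f"g={g}:  V_B={max_velocity(g, 'B'):.4f}  V_F={max_velocity(g, 'F'):.4f}")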
This splitting can be estimated as follows. First, it is expected on general grounds to scale as χ 2 0 (see (4.16)). Furthermore, χ 0 is exponentially small whenever γ is small and/or k is large. In this regime, (4.33) indeed simplifies to g(k 2 − m 2 )χ m ≈ −γχ m+1 for 0 < m < k. We thus obtain the estimate The velocities V (B) and V (F) characterizing the ballistic spreading of an initially localized wavefunction again correspond to the lowest branches of the spectra. In the regime of most interest where g is small, these velocities can be calculated by means of a continuum description. With the notation (4.18), we obtain For the fermionic spectrum, the lowest branch corresponds to E 1 = 3, and so Confining potential with an arbitrary exponent We now consider the case of a confining potential of the form with an arbitrary exponent α > 0. Let us focus on the maximal velocities V (B) and V (F) characterizing the ballistic spreading of a localized initial state. In the most interesting regime when g is small, these velocities can be calculated by means of a continuum description. With the notation (4.18), we obtain whose eigenvalues E ν (ν = 0, 1, . . .) are not known analytically in general. For the bosonic spectrum, the lowest branch corresponds to E 0 , and so Using again (4.23), we obtain . (4.53) For the fermionic spectrum, the lowest branch corresponds to E 1 , and so . (4.56) The ratio of the amplitudes is . (4.57) The scaling laws (4.52) and (4.55) generalize (4.24), (4.28), (4.41), (4.44). They can also be put in perspective with (3.16), which holds in the situation where a hard bound ℓ is imposed on the distance |m| between both particles. Let us introduce the length ℓ as the distance where the potential (4.47) equals unity, i.e., In terms of this length parameter, (4.52) and (4.55) take the form (4.59) These expressions smoothly match their counterparts (3.16) in the α → ∞ limit, where the potential gets infinitely steep. The exponent slowly converges to the limit value 2, while the amplitudes (4.53), (4.56) also converge to the limits (3.17). The ratio C (F) /C (B) can be viewed as a universal amplitude ratio. This dimensionless quantity depends only on the growth exponent α of the confining potential (see (4.57)), increasing monotonically from 1 in the singular α → 0 limit to 4 in the α → ∞ (i.e., hard-bound) limit. Its values for α = 1 and α = 2 have been given in (4.31) and (4.46). Figure 14 shows a plot of this ratio against α/(α + 1). Finally, the amplitude ratio C (F) /C (B) is always larger than unity. More generally, bosonic bound states always have a larger maximal spreading velocity than fermionic bound states in the same confining potential. We shall come back to this very common property in section 5. Discussion In this work we have investigated the continuous-time quantum walk performed along a one-dimensional lattice by bound states of two interacting particles. The main focus has been on the profile of the wavefunctions describing these bound states in the center-of-mass coordinate, and especially on the velocity characterizing their ballistic spreading and on the structure of the whole profile, which generically exhibits many internal fronts. We have first revisited the problem of a single quantum walker in a self-contained pedagogical fashion. For the simple quantum walk where the particle hops to nearest neighbors only, we have concentrated onto the dependence of the distribution of the particle position on the initial state. 
This distribution profile has generically two ballistic fronts. Either one or even both fronts may be absent for carefully chosen special initial states, as this was already the case for some examples of discrete-time walks [20,32,33,37]. For the generalized quantum walk, where the particle hops to first and second neighbors with respective amplitudes 1 and g, we have emphasized the possible occurrence of two internal fronts in the distribution profile, propagating at velocities ±V − , besides the two usual external ones, propagating at velocities ±V + , which mark the endpoints of the allowed region beyond which the wavefunction falls off exponentially. The non-trivial dependence of the velocities V ± on the amplitude g is in contrast with the equally spaced internal peaks arising for the discrete-time quantum walks considered in earlier works [31,34]. The quantum walk of bound states of two bosonic or fermionic particles has then been investigated in two situations: either by imposing a hard bound ℓ on the distance between both particles, or by generating the bound states by a smooth confining potential growing as a power of the distance. In both situations, we have focused on the structure of the distribution of the center-of-mass coordinate. We have investigated in detail the maximal velocities V (B) and V (F) characterizing the ballistic spreading of bosonic and fermionic bound states, as well as the many internal fronts of their distribution profiles. In the case of a hard distance bound (section 3), the maximal velocities have the simple expressions (3.15). The distribution profile generically exhibits ℓ − 1 (resp. ℓ − 2) internal fronts in the bosonic (resp. fermionic) case, besides the two extremal ballistic fronts. In the situation of a smooth confining potential (section 4), we have investigated in detail the cases of a linear and of a quadratic (i.e., discrete harmonic) potential. For all potentials of the form W m = g|m| α , growing as a power of the distance between both particles, the maximal velocities exhibit scaling laws of the form (4.52), (4.55) in the regime of a weak potential (g ≪ 1). The associated amplitude ratio C (F) /C (B) is universal, in the sense that it depends only on the growth exponent α of the confining potential. This ratio is larger than unity, in accord with the more general property that bosonic bound states have a larger maximal spreading velocity than their fermionic counterparts in the same binding potential. Let us now compare our findings with those of two recent papers [21,25] which are also devoted to the quantum walk of bound states. The latter works deal with full two-body Hamiltonians, along the lines of earlier investigations of the Anderson localization of two interacting particles [12,13,14]. In such models bound states therefore coexist with a full two-particle continuum. The situations studied in the present work (hard distance bound or confining potential) only possess bound states, so that the investigation of the latter is made easier. As internal ballistic fronts are not discussed in [21,25], we shall henceforth concentrate on the maximal spreading velocity V . Reference [21] deals with the antisymmetric (i.e., fermionic) sector of a discrete-time model, with a local interaction described by the action of a special coin operator whenever both particles sit at the same site. The associated coupling constant g is an angle. The analysis of the bound-state spectrum allows one to derive the maximal velocity V . 
This quantity (whose expression is not explicitly given in [21]) decreases continuously from 1/ √ 2 to 1/3 as g is increased from 0 to its maximal value of π. Reference [25] describes the continuous-time dynamics of two identical particles (either bosons or fermions) generated by a Hubbard-like Hamiltonian with nearest-neighbor interactions. The maximal velocities are derived by means of a perturbative approach in the regime of very strongly attractive interactions. Both spreading velocities are found to fall off to zero as the inverse of the interaction strength, and to obey the simple relation V (B) = 3V (F) . Even though very different models and regimes have been considered, all the findings recalled above are consistent with the following universal characteristics of the ballistic spreading of two-body bound states. For a given interaction strength, bosonic bound states always have a larger velocity of spreading than their fermionic counterparts. When the interaction strength is increased, the spreading velocity decreases continuously from its free one-body value down to zero or to a much smaller limiting value. A natural extension of the present work consists in considering the quantum walk performed by bound states of more than two identical particles. A classical analogue consists of multi-pedal molecular devices whose legs perform random walks, known as molecular spiders [47]. Their behavior has been investigated theoretically, both on a one-dimensional [48] and a two-dimensional [49] substrate. The quantum version of the problem yields a special kind of N -fermion bound state, which can be analyzed by techniques from integrable systems. This will be the subject of future work [50]. Acknowledgments It is a pleasure to thank R Balian, G Ithier and V Pasquier for fruitful discussions. Appendix A. Stationary properties of a quantum system In this appendix we investigate the stationary properties of a finite quantum system from a very general standpoint. Consider a quantum system whose Hilbert space has finite dimension N , working in a preferred basis |a (a = 1, . . . , N ). In this basis, the Hamiltonian is given by an N × N Hermitian matrix H. Assume that the energy eigenvalues E n (n = 1, . . . , N ) are non-degenerate. Let |n be normalized eigenvectors, so that H|n = E n |n . Assuming the system is initially in state |a , we have |ψ(t) = n e −iEnt |n n|a . (A.1) The probability of observing the system is state |b at time t reads therefore in full generality The matrix Q is non-trivial in general. This is a manifestation of the well-known fact that a finite isolated quantum system does not equilibrate, in the sense that its stationary properties remember its initial state forever. We now turn to the explicit example of a confined quantum walker, i.e., a tightbinding particle on a finite segment of N sites labelled a = 1, . . . , N . The preferred basis is chosen to be local in space. The Hamiltonian reads a|H|ψ = ψ a+1 + ψ a−1 , (A.7) where a|ψ = ψ a and with Dirichlet boundary conditions ψ 0 = ψ N +1 = 0. We have This sum can be worked out explicitly. We thus obtain With respect to its uniform background value 1/(N + 1), the stationary probability Q ab is thus enhanced by a factor 3/2 both at the starting point (b = a) and at the symmetric position (b = N + 1 − a). In the particular situation where the starting point is the middle of an odd segment (N odd and a = (N + 1)/2), the enhancement factor of the return probability reaches 2. 
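The stationary-probability result for the confined walker is easy to verify numerically. The sketch below (illustrative) diagonalizes the open-chain Hamiltonian ⟨a|H|ψ⟩ = ψ_{a+1} + ψ_{a−1} with Dirichlet boundaries, forms the time-averaged probabilities Q_ab = Σ_n |⟨a|n⟩|² |⟨b|n⟩|² (the standard non-degenerate-spectrum expression behind (A.1)–(A.6)), and checks the uniform background 1/(N+1), the 3/2 enhancement at b = a and at the mirror site, and the factor 2 at the centre of an odd segment.

import numpy as np

def Q_matrix(N):
    # stationary probabilities Q_ab for a tight-binding walker on a segment
    # of N sites (sites labelled 0..N-1 here, Dirichlet boundary conditions)
    H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    _, vecs = np.linalg.eigh(H)          # columns are the eigenvectors |n>
    W = vecs ** 2                        # W[a, n] = |<a|n>|^2
    return W @ W.T                       # Q[a, b] = sum_n |<a|n>|^2 |<b|n>|^2

N, a = 8, 2
Q = Q_matrix(N)
print("background       :", Q[a, 4], " vs 1/(N+1)     =", 1 / (N + 1))
print("return  (b = a)  :", Q[a, a], " vs 3/(2(N+1))  =", 1.5 / (N + 1))
print("mirror  (b = N+1-a):", Q[a, N - 1 - a])
N = 9; Q = Q_matrix(N); mid = (N - 1) // 2
print("odd N, centre    :", Q[mid, mid], " vs 2/(N+1)    =", 2 / (N + 1))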
The stationary mean value of the position X of a walker launched at site a, i.e., X = Σ_b b Q_ab = (N + 1)/2, is dictated by symmetry, and is therefore independent of the initial state. The corresponding variance,
2015-11-05T13:12:40.000Z
2015-07-06T00:00:00.000
{ "year": 2015, "sha1": "b2c3a28a36338f3be23d22242825d1b6c3dcfe17", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1507.01363", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b2c3a28a36338f3be23d22242825d1b6c3dcfe17", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
126085647
pes2o/s2orc
v3-fos-license
On Directed Edge-Disjoint Spanning Trees in Product Networks, An Algorithmic Approach In (Ku et al. 2003), the authors have proposed a construction of edge-disjoint spanning trees (EDSTs) in undirected product networks. Their construction method focuses more on showing the existence of a maximum number (n1+n2-1) of EDSTs in the product network of two graphs, where the factor graphs have respectively n1 and n2 EDSTs. In this paper, we propose a new systematic and algorithmic approach to construct (n1+n2) directed rooted EDSTs in the product networks. The direction of an edge is added to support bidirectional links in interconnection networks. Our EDSTs can be used straightforwardly to develop efficient collective communication algorithms for both the store-and-forward and wormhole models. Introduction There has been increasing interest over the last two decades in product networks (Day and Al-Ayyoub 1997; Ku et al. 2003; X and Yang 2007; Imrich et al. 2008; Klavar and Špacapan 2008; Jänicke et al. 2010; Hammack et al. 2011; Chen et al. 2011; Ma et al. 2011; Cheng et al. 2013; Erveš and Žerovnik 2013; Govorčin and Škrekovski 2014). The Cartesian product is a well-known graph operation. When applied to interconnection networks, the Cartesian product operation combines factor networks into a product network. Graph product is an important method to construct bigger graphs, and plays a key role in the design and analysis of networks. A number of spanning trees of a graph are edge-disjoint if no two trees contain the same edge. Edge-disjoint spanning trees (EDSTs) have many practical applications, including enhancing interconnection network fault-tolerance and developing efficient collective communication algorithms in distributed memory parallel computers (Fragopoulo and Akl 1996; Johnsson and Ho 1989; Touzene 2003). In (Ku et al. 2003), the authors have studied the construction of a maximum number (n1+n2-1) of edge-disjoint spanning trees in the undirected product network of two graphs, where the factor graphs have respectively n1 and n2 EDSTs. The presented construction is more about showing the existence of a maximum number of spanning trees. They did not provide a straightforward algorithmic way for their construction. In this paper, we propose a new systematic and algorithmic approach to construct (n1+n2) directed rooted edge-disjoint spanning trees in product networks. We assume that the factor graphs are connected graphs and have respectively n1 and n2 EDSTs. Directed rooted edge-disjoint spanning trees have been discussed for different graphs such as the n-dimensional hypercube (Johnsson and Ho 1989), the k-ary n-cube (Touzene 2003), star graphs (Fragopoulo and Akl 1996), etc. We assume directed edges: if a and b are two nodes in the graph, the edge (a, b) is different from the edge (b, a). Directed edges support bidirectional links in interconnection networks. The advantage of our method is the direct use of our trees to develop collective communication procedures in product interconnection networks. The remainder of this paper is organized as follows: In Section 2, notations and preliminaries are presented. In Section 3, the construction of edge-disjoint spanning trees in product networks is proposed. In Section 4, we conclude this paper.
Notations and Preliminaries The Cartesian product G = G1 × G2 of two undirected graphs is the undirected graph G = (V, E), where V and E are given by: V = {<x1, x2> | x1 ∈ V1 and x2 ∈ V2}, and for any u = <x1, x2> and v = <y1, y2> in V, (u, v) is an edge in E if, and only if, either x1 = y1 and (x2, y2) is an edge of G2, or x2 = y2 and (x1, y1) is an edge of G1. In all that follows we consider directed edges, in the sense that the edge (u, v) is different from the edge (v, u). Construction of EDSTs in a Product Network Consider two graphs having the following properties: the graph G1 contains n1 EDSTs all rooted at x, denoted X1(x), X2(x), …, Xn1(x). Each Xi(x) tree is assumed to be formed of an edge (x, xi), where xi is the i-th neighbor of x, and a sub-tree denoted Xi(x)/x rooted at xi that spans all the G1 nodes other than x (Fig. 1.a). The graph G2 contains n2 EDSTs all rooted at y, denoted Y1(y), Y2(y), …, Yn2(y). Each Yj(y) tree is assumed to be formed of an edge (y, yj), where yj is the j-th neighbor of y, and a sub-tree denoted Yj(y)/y rooted at yj that spans all the G2 nodes other than y (Fig. 1.b). In Fig. 1 (a, b) straight lines correspond to G1-edges and dashed lines correspond to G2-edges. In what follows, we fix a specific node <x0, y0> in G as the desired root for the EDSTs to be constructed. We denote by <xi, y0>, i = 1, …, n1, the n1 neighbors of <x0, y0> in G reached from <x0, y0> via G1-edges, and by <x0, yj>, j = 1, …, n2, the n2 neighbors of <x0, y0> reached from <x0, y0> via G2-edges. For a given node x in G1 and a given tree Y in G2, we denote by <x, Y> the tree in G1 × G2 obtained by fixing the G1-component to x and following the edges of tree Y in G2. Similarly, <X, y> denotes the tree in G1 × G2 obtained by following the edges of a tree X in G1 while the G2-component is fixed to node y. The Special T1 and T2 EDSTs for G We present a construction algorithm for the directed EDSTs in the product graph G denoted T1 and T2. To illustrate our construction algorithm, we give a complete example of the product of two interconnection networks: the 3-cube (3 directed rooted EDSTs (Johnsson and Ho 1989)) and a ring with three nodes (a, b and c) (2 directed rooted EDSTs). Dark circles represent the root node of the trees, the numbers on the edges represent the dimension number relative to the 3-cube (see Figs. 4 and 5), and the trees are directed from the root nodes to the leaf nodes. Conclusions In this paper, we presented a new systematic and algorithmic approach to construct n1+n2 (without using non-tree edges) directed rooted edge-disjoint spanning trees for product networks. The previous work on undirected EDSTs of product networks (Ku et al. 2003) focuses more on the existence of n1+n2-1 EDSTs but did not provide an explicit algorithmic way for their construction. Our n1+n2 EDSTs can be used straightforwardly to develop efficient collective communication algorithms for both the store-and-forward and wormhole models using bidirectional links. Figure captions: Figure 1.a: the i-th EDST Xi(x) rooted at x in G1 and its Xi(x)/x sub-tree. Figure 1.b: the j-th EDST Yj(y) rooted at y in G2 and its Yj(y)/y sub-tree. Figure 3: construction of the spanning trees T1 and T2. Figure 4: three EDSTs of the 3-cube and two EDSTs of the ring (3 nodes).
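As a small illustration of the product construction used throughout, the sketch below (an illustrative Python script, not the authors' tree-construction algorithm) builds G1 × G2 for two toy factor graphs with directed edges, following the Cartesian-product edge rule stated above, and checks the expected counts |V| = n1·n2 and |E| = n1·|E2| + n2·|E1|.

def cartesian_product(V1, E1, V2, E2):
    # Cartesian product G1 x G2: vertices are pairs <x1, x2>; (<x1,x2>, <y1,y2>)
    # is an edge iff x1 == y1 and (x2, y2) in E2, or x2 == y2 and (x1, y1) in E1
    V = [(x1, x2) for x1 in V1 for x2 in V2]
    E = set()
    for (x1, x2) in V:
        for (y1, y2) in V:
            if x1 == y1 and (x2, y2) in E2:
                E.add(((x1, x2), (y1, y2)))    # directed edge of G2-type
            if x2 == y2 and (x1, y1) in E1:
                E.add(((x1, x2), (y1, y2)))    # directed edge of G1-type
    return V, E

# toy factors: a 2-node path and a 3-node ring, each with both edge orientations
path, E_path = [0, 1], {(0, 1), (1, 0)}
ring = ["a", "b", "c"]
E_ring = {("a", "b"), ("b", "c"), ("c", "a"), ("b", "a"), ("c", "b"), ("a", "c")}
V, E = cartesian_product(path, E_path, ring, E_ring)
print(len(V), len(E))   # 6 vertices and 2*6 + 3*2 = 18 directed edges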
2018-12-12T15:45:22.213Z
2014-12-01T00:00:00.000
{ "year": 2014, "sha1": "5f34d68146c6b02ccb37fd1049ba999eb6bdacab", "oa_license": "CCBY", "oa_url": "https://journals.squ.edu.om/index.php/tjer/article/download/149/161", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5f34d68146c6b02ccb37fd1049ba999eb6bdacab", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
52255329
pes2o/s2orc
v3-fos-license
A Test for Progressive Myopia and the Role of Latent Accommodation in its Development Introduction The multivergence hologram is a phase hologram that resembles a transparent glass plate in appearance and contains a holographic record of real and virtual images of various test characters located at a range of vergences. Subjects The spectacle correction in the later years for 25 myopes who participated in our earlier studies on the measurement of the limiting blur with the multivergence hologram was obtained from clinical records. Ethics approval was obtained from the Human Research Ethics Committee, UNSW Australia. The mean sphere of the spectacle correction (MS) for the myopes was in the range of −0.375 D to −5.5 D. The astigmatism of the subjects included in the study was ≤ 0.5 D. The spectacle correction recorded for the subjects was determined by subjective refraction in the clinic using a phoroptor. The maximum plus lens for best visual acuity was the criterion for the subjective end point. The best corrected visual acuity was 6/7.5 or greater and the subjects had no significant pathology. Using the data obtained from the records, the rate of progression of myopia was calculated for these subjects, and the subjects whose progression rate was greater than or equal to −0.20 D per year were classified as progressive myopes and the others as non-progressive myopes. The test for progressive myopia The initial mean sphere (MS), age, time elapsed before the next refraction, mean sphere after the time elapsed, the limiting blur that was obtained with the hologram in their first visit, and the progression rate of myopia are given in Table 1 for non-progressive myopes and in Table 2 for progressive myopes. The data obtained on the emmetropic subjects are given in Table 3. The pupil size, when it was recorded, is also included. The measurements were made in a dimly lit room and the pupil size was measured on the fellow eye using the digital pupillometer from NeurOptics (Model 59001). The limiting blur is plotted against the mean sphere for the non-progressive myopes in Figure 3a and for the progressive myopes in Figure 3b. The mean limiting blur for non-progressive myopes was 0.55 D with a standard deviation of 0.33 D. The mean limiting blur for progressive myopes was 1.32 D with a standard deviation of 0.75 D. Thus the mean limiting blur for the progressive myopes was 0.77 D greater than that for the non-progressive myopes, and this difference was statistically significant in a one-tailed t-test, with a p value of 0.0018 obtained for unequal variances. The view obtained through the hologram is simulated for non-progressive myopes in Figure 4a and for progressive myopes in Figure 4b respectively. The upper limit for the limiting blur of non-progressive myopes at the 95% confidence level is 1.21 D.
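The reported group comparison can be reproduced from the summary statistics alone. The sketch below (illustrative) uses Welch's unequal-variance t-test; the group sizes of 12 non-progressive and 13 progressive myopes are inferred from the 25 myopes and the 13 progressive cases mentioned in the text, and are therefore an assumption.

from scipy import stats

# limiting blur (D): mean, SD, n  -- the n values are inferred, see note above
nonprog = dict(mean1=0.55, std1=0.33, nobs1=12)
prog    = dict(mean2=1.32, std2=0.75, nobs2=13)
t, p_two_sided = stats.ttest_ind_from_stats(**nonprog, **prog, equal_var=False)
print(f"t = {t:.2f},  one-tailed p = {p_two_sided / 2:.4f}")   # ~ 0.002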
To test for progressive myopia, if we use the criterion that any subject with a limiting blur greater than 1.21 D is a progressive myope, we see from Table 2 that 7 out of the 13 progressive myopes pass the test satisfying this criterion and would be counted as true positives giving 54% sensitivity for the test.From Table 1 we see that none of the non-progressive myopes satisfy this criterion and would fail the test as true negatives giving 100% specificity for the test. Further, investigation of the results obtained with the multivergence hologram also suggested that progressive myopes have some latent accommodation like hyperopes.In this paper, we discuss the role of latent accommodation in the measurement and correction of ametropia using a phoroptor in the clinic and its possible consequence on the development of progressive myopia.We also present results which show that the hologram could be used to confirm whether a low ametrope measured using a phoroptor in the clinic is a myope or a hyperope.We also found that the hologram is able to classify all subjects into two distinct categories: one with latent accommodation and the other without any latent accommodation.We present the details of these investigations and our findings. To begin with we briefly outline how the multivergence hologram is recorded and used to test the subject.More details can be obtained from our earlier publications [3,4]. The multivergence hologram The 3-D target that was used to record the multivergence hologram is shown in Figure 1.High contrast upside down mirror images of printed test letters are glued to the end faces of an array of wooden rods.The rods are placed at designed distances from a +20 D lens such that the vergences of the rays leaving the lens from these various test letters are in the range of -6.5 D to +1.0 D in steps of 0.5 D. The character V of the target is designed to be located at the focal plane of the lens.It is therefore imaged at infinity and the vergence of the image rays leaving the lens for this letter is 0 D. The height of each of the letters is designed to subtend an angle of 50' at the lens.The target is illuminated with a diverging beam of laser light derived from a He-Ne laser.The image forming wave fronts emerging from the lens are recorded in a hologram by interference with a plane reference wave derived from the same laser. The measurement of limiting blur with the hologram To measure the limiting blur of the subject with the hologram monocularly, the subject is positioned behind the hologram in such a way that his eye is at the same location as the +20 D lens was in the recording geometry.The hologram is illuminated from behind by a plane reference wave that is travelling in the reverse direction to the reference wave that was used while recording the hologram as shown in Figure 2. When the hologram is thus illuminated, the phase conjugate of the recorded waves are recreated in which the image forming rays reversed in direction of propagation emerge from the hologram.The vergence of the rays reaching the eye from the images of the test letters recorded in the hologram would therefore be in the range of +6.5 D to -1.0 D. 
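Returning briefly to the cut-off analysis at the start of this passage, the quoted sensitivity and specificity follow directly from the counts; the group sizes are again the inferred 13 progressive and 12 non-progressive myopes.

# counts implied by the text for the criterion "limiting blur > 1.21 D"
TP, FN = 7, 13 - 7      # progressive myopes above / below the cut-off
TN, FP = 12, 0          # non-progressive myopes (none exceed the cut-off)
sensitivity = TP / (TP + FN)    # 7/13 ~ 0.54
specificity = TN / (TN + FP)    # 12/12 = 1.0
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")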
The plane wave shown in the sketch represents the light coming from the image of the character V formed at infinity.The diverging wave corresponds to the light coming from the real image of a test letter in front of the subject's eye and the converging wave corresponds to the light that is travelling towards the virtual image of a test letter.The test letters corresponding to the converging waves at the eye would appear blurred to the subject.The subject is asked to call out the letters that he can recognize.The vergence corresponding to the letter with the most positive blur that is recognized by the subject, gives a measure of the limiting blur for the subject. Discussions Our earlier studies with the multivergence hologram had indicated that for hyperopes viewing a multivergence target or a test chart at infinity in a hologram the latent accommodation is not in play [2,4].In the current study as we have found the progressive myopes to respond like hyperopes in viewing through the hologram we wonder if progressive myopes are indeed hyperopes who have been initially misdiagnosed as myopes due to their latent accommodation. Accommodation during refraction and the phoropter In the past, Reese and Fry [5] had found that in the refraction process using the phoroptor in the clinic it is ostensibly assumed that subjects relax their accommodation through the fogging lenses.They found that positive lenses used for fogging need not necessarily relax a subject's accommodation.It follows then that this could result in an emmetropic or a low hyperopic subject being measured as a low myope, or, a more myopic error being measured for a low myope.Further, the coefficient of repeatability for subjective refraction performed by two different examiners on 86 subjects has been reported to be about 0.76 D in the literature [6].So we believe that the refractive error measured using a phoroptor could be in error especially in the diagnosis of low ametropes. Hologram for the classification of low ametropes In our earlier study with the multivergence hologram [1], when we found that the mean limiting blur of hyperopes was greater than that of myopes by about 0.9 D, the p value for the mean difference was found to be 0.0000015 in a one-tailed t-test.Data from this earlier study is reproduced in Tables 4-6.Subjects with refractive error of ± 0.25 D obtained using the phoropter were classified as emmetropes in that study.However, if these low ametropes are classified into myopes and hyperopes (i.e. the −0.25 D emmetropes included with the myopes and the +0.25 D emmetropes included with the hyperopes) the p value for the mean difference falls to 0.00007 and the mean difference falls to 0.67 D. If instead, the limiting blur obtained with the hologram is used to classify the emmetropic subjects into myopes and hyperopes (i.e., the emmetropes whose limiting blur is less than 1.21 D as myopes and the emmetropes whose limiting blur is greater than 1.21 D as hyperopes), the p value improves to 0.000000023 and the mean difference remains close to 0.9 D and is equal to 0.93 D. This suggests that the hologram may offer an improved way of identifying a low ametrope as a myope or a hyperope.In the former case, a less positive lens is given to a hyperopic subject.When this subject lets go of his +1 D of accommodation he will be left with +1 D of hyperopia.The image of a distant object will then be formed behind the retina, but the error will be only half as much as when he went for correction.This is not disastrous. 
Latent accommodation and measured refractive error In the latter case, a negative lens is given to a hyperopic subject.When this subject lets go of his +1 D of accommodation he will be left with +1 D of hyperopia due to the -0.5 D lens correction that is given to him.This leaves him with twice the error he had initially, giving a feedback signal that is twice as strong for eye growth.Being hyperopic, their eyes are inclined to grow longer.May be the latter hyperopic subject becomes a progressive myope? The unknowns when refractive error is measured with a phoroptor are the latent accommodation, and the accommodation response of individual subjects to fogging lenses.A low hyperope stands a greater chance of receiving a negative correction due to his/her latent accommodation.Giving a negative correction to a hyperopic subject would enhance the hyperopic defocus, and hyperopic defocus is known to encourage eye growth [7].Providing a negative lens would upset the feedback loop in a hyperope and could eventually lead to a loss of control over the mechanism [8,9] that triggers eye growth.It thus seems possible that progressive myopia could result from hyperopes being driven to myopia!Negative and positive lenses over the eyes have been shown to affect eye growth [10].The probability of the above error taking place is quite high considering that most children under 10 years of age are hyperopic and that the eye can grow up to the age of thirteen [11].The fear of the child becoming a progressive myope and the parents' concern for the child in this regard could promote more children to go for correction when it may not be needed [12,13].Although emmetropization occurs in early development, changes in eye growth could occur in young adults as well [14].like myopes.The authors believe that environmental factors would become a minor issue in myopia progression if low ametropia is identified correctly.The brain has remarkable ability to cope with a wide range of lighting for example.On the other hand the brain could be easily confused if incorrect prescription however small is prescribed especially to growing children [9,10].This view is supported by the fact that the literature is divided when it comes to environmental influences [13], but there is strong evidence on the progression of myopia and eye growth with incorrect lenses given to the eyes in the animal models [15]. 
Discussion on the results obtained with emmetropes The refractive error data obtained on the emmetropic subjects pursued in later years is shown in Table 3.The first 8 subjects responded with low limiting blur, and their refractive error is stable confirming the high specificity of the test.The last three subjects responded with high limiting blur.One is a young subject (11 year old) who was measured with 0 D refractive error in his first visit and who is developing into a progressive myope.The 50 year old subject is a +0.25 D hyperope with a positive progression rate, indicating the emergence of latent hyperopia.The 30 year old emmetrope could be a latent hyperope based on his response to the hologram and could possibly need reading glasses earlier than normal as it happened for one of the authors.It is possible that the 11 year old subject was a latent hyperope who was diagnosed as a 0 D emmetrope in his first visit when he was also tested with the hologram.This subject was measured as having −1.875 D of myopia four years later.We don't know when this subject was first prescribed a negative lens.Rendering a small negative lens correction to this subject between the two visits may possibly have induced progressive myopia. Latent accommodation and progressive myopia If a hyperope is diagnosed as a myope incorrectly and prescribed a negative lens then he will need to accommodate more in doing near tasks.He would also become more weary while doing near tasks with the result that the sharp image would frequently be formed behind the retina signalling the brain for eye growth, leading to progressive myopia.Our earlier study indicated that the latent accommodation is responsible for the large limiting blur of hyperopes.As some progressive myopes show large limiting blur like that of hyperopes, it appears that progressive myopes do have latent accommodation similar to hyperopes.One could then expect some correlation of age, pupil size, and refractive error with the limiting blur for the progressive myopes as accommodation is correlated to these factors.A significant medium correlation was obtained for progressive myopes which was not observed for non-progressive myopes (Table 7). The fact that atropine which arrests the accommodative ability of a subject temporarily serves as a deterrent in the development of progressive myopia [16] lends support to the idea that progressive myopes have some latent accommodation similar to hyperopes. Classification of subjects based on limiting blur It appears that the hologram is able to differentiate between subjects who have some latent accommodation and subjects who have no latent accommodation based on their limiting blur, irrespective of their refractive status.If we consider the data on the limiting blur that we obtained for various subjects in our earlier study (reproduced in Tables 4-6), and classify all the subjects based on the limiting blur into two groups, one having a high limiting blur (>1.21 D), and the other having a low limiting blur (<1.21 D), 33 subjects are found to have a high limiting blur, and 23 subjects are found to have a low limiting blur.The mean value of the high limiting blur is 1.98 D with a standard deviation of 0.3 D. The mean value of the low limiting blur is 0.75 D with a standard deviation of 0.2 D. 
The mean difference between the high limiting blur and the low limiting blur is 1.23 D with a p value of zero (3.7×10 -25 ).This difference perhaps gives a measure of the mean level of the latent accommodation when it is present, for subjects who show hyperope like vision when tested with the hologram.These results also suggest that any given subject has a vision characterised by low limiting blur (no latent accommodation) or characterised by high limiting blur (indicative of latent accommodation). Progressive myopia and overcorrected myopes We have used progression rate to define progressive and nonprogressive myopia in this study.As some of the myopes classified as progressive myopes by this definition did not show hyperopic level of limiting blur in the test with the hologram, it may be that these myopes were overcorrected myopes who were rendered artificially hyperopic.These myopes would then experience hyperopic defocus which could eventually trigger progressive myopia.It is also possible that these myopes are in the process of developing some latent accommodation due to the constant accommodation resulting from overcorrection.This might show up as high limiting blur in the test with the hologram when their refractive error goes beyond −1.5 D, a value close in magnitude to the suspected mean level of latent accommodation of 1.23 D that was obtained in the previous section.Looking more closely at Table 2 or Figure 3b we see that all the progressive myopes whose myopia was greater than −1.5 D responded with a high level of limiting blur.Therefore it is possible that −1.5 D is close to the turning point for the progression of myopia for the overcorrected myopes.Alternatively, even though these subjects show high progression rate, their refractive error in the later years may stabilise and they may turn out to be non-progressive myopes.A retest with the hologram when the second refraction was carried out would have helped resolve this further, but is currently beyond the scope of this study.These observations and findings can be confirmed with further research. Conclusions Currently, there is no test which can predict which subject diagnosed as a myope would become a progressive myope.The multivergence hologram can be used to test for progressive myopia.Initial results indicate a sensitivity of 54% and a specificity of 100% for the test. 
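The sensitivity and specificity quoted here follow directly from the counts reported earlier: 7 of the 13 progressive myopes exceeded the 1.21 D cut-off, and none of the non-progressive myopes did. A minimal sketch of that arithmetic is given below; the non-progressive group size of 12 is an assumption (25 myopes minus 13 progressive myopes).

```python
# Counts taken or inferred from the study: 13 progressive myopes (7 above the
# 1.21 D cut-off) and, by assumption, 12 non-progressive myopes (none above it).
tp, fn = 7, 6      # progressive myopes flagged / missed by the 1.21 D criterion
tn, fp = 12, 0     # non-progressive myopes correctly passed / wrongly flagged

sensitivity = tp / (tp + fn)   # fraction of progressive myopes detected
specificity = tn / (tn + fp)   # fraction of non-progressive myopes correctly cleared

print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
# -> sensitivity = 54%, specificity = 100%
```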
Our results suggest that progressive myopes have some latent accommodation, like hyperopes, and that progressive myopia could result from an incorrect diagnosis of hyperopia as myopia brought about by the play of latent accommodation. Progressive myopia could also result from overcorrection of low myopes. Hence progressive myopia may be preventable by correct diagnosis of low hyperopia/myopia. Our studies also show that the hologram can help diagnose low ametropia correctly. Based on our findings, we suggest that if a subject diagnosed in the clinic with the phoroptor as a low myope (−0.25 D to −1.00 D) responds as a true positive in the test with the hologram, then no corrective lenses should be prescribed to that subject. Alternative preventive measures may include cycloplegic refraction and closer follow-ups. For higher myopes, we suggest undercorrection when they respond as true positives. Undercorrection has been shown to slow down the progression of myopia [17,18], although the literature is divided on its role [19]. It is possible that the role of undercorrection in slowing down progressive myopia may prove to be significant if it is tried only on those classified as progressive myopes by the test with the hologram.

It is interesting that the hologram is able to divide all the subjects into two distinct groups, irrespective of their refractive error: one having high limiting blur (indicative of subjects with latent accommodation) and the other having low limiting blur (indicative of subjects without latent accommodation). Further research with the multivergence hologram would prove very useful in gaining an understanding of the nature of latent accommodation.

Let us consider the following examples:
• A +2 D hyperopic subject accommodating by +1 D when measured with the phoroptor. This would result in a +1 D lens being prescribed for the +2 D hyperope, which implies +1 D of undercorrection.
• A +0.5 D hyperopic subject accommodating by +1 D when measured with the phoroptor. This would result in a −0.5 D lens being prescribed for the +0.5 D hyperope, which implies +1 D of undercorrection.

Figure 1: A sketch of the 3-D target that was used to record the hologram.
Figure 3: a) Plot of the limiting blur vs the mean sphere of the spectacle correction for non-progressive myopes; the dashed line indicates the mean limiting blur. b) Plot of the limiting blur vs the mean sphere of the spectacle correction for progressive myopes; the dashed line indicates the mean limiting blur.
Figure 4: Simulation of the view through the hologram for a) non-progressive myopes and b) progressive myopes.
Table 1: Data on the mean sphere measured initially and with a time lapse for non-progressive myopes who participated in the study with the hologram.
Table 2: Data on the mean sphere measured initially and with a time lapse for progressive myopes who participated in the study with the hologram.
Table 3: Data on the mean sphere measured initially and with a time lapse for emmetropes who participated in the study with the hologram.
Table 4: Data obtained with the hologram for myopes.
Table 5: Data obtained with the hologram for hyperopes.
Table 6: Data obtained with the hologram for emmetropes.
Table 7: Correlation of age, pupil size, and refractive error with the limiting blur for progressive and non-progressive myopes.
2018-09-13T06:56:05.396Z
2015-04-30T00:00:00.000
{ "year": 2015, "sha1": "694db52af53cb13c72569f83732bdc96536dc3a4", "oa_license": "CCBY", "oa_url": "https://doi.org/10.23937/2378-346x/1410021", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "694db52af53cb13c72569f83732bdc96536dc3a4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Psychology" ] }
119117102
pes2o/s2orc
v3-fos-license
New lower bounds for the energy of matrices and graphs Let $R$ be a Hermitian matrix. The energy of $R$, $\mathcal{E}(R)$, corresponds to the sum of the absolute values of its eigenvalues. In this work it is obtained two lower bounds for $\mathcal{E}(R).$ The first one generalizes a lower bound obtained by Mc Clellands for the energy of graphs in $1971$ to the case of Hermitian matrices and graphs with a given nullity. The second one generalizes a lower bound obtained by K. Das, S. A. Mojallal and I. Gutman in 2013 to symmetric non-negative matrices and graphs with a given nullity. The equality cases are discussed. These lower bounds are obtained for graphs with $m$ edges and some examples are provided showing that, some obtained bounds are incomparable with the known lower bound for the energy $2\sqrt{m}$. Another family of lower bounds are obtained from an increasing sequence of lower bounds for the spectral radius of a graph. The bounds are stated for singular and non-singular graphs. Notation and Preliminaries In this work we deal with an (n, m)-graph G which is an undirected simple graph with vertex set V (G) of cardinality n and edge set E (G) of cardinality m. As usual we denote the adjacency matrix of G by A = A(G). The eigenvalues of G are the eigenvalues of A (see e.g. [7,8]). Its eigenvalues will be denoted (and ordered) by λ 1 ≥ · · · ≥ λ n . We denote the spectrum of a graph G by σ(G) = σ(A(G)). If e ∈ E(G) has end vertices i and j then it is denoted by ij. If i ∈ V (G), N G (i) denotes the set of neighbors of the vertex i in G. For the i-th vertex of G, the cardinality of N G (i) is the degree of i and it is denoted by either d(i) or d i . The number of walks of length k of G starting at i is referred as the k-degree of the vertex i and is denoted by d k (i) (see [10]). For convenience, we set d 0 (i) = 1, d 1 (i) = d(i), and If G is a connected graph, then A(G) is a non-negative irreducible matrix [7]. The complement of a graph G is usually denoted by G. A graph G with n vertices is called a regular graph (or r-regular ) if d i = r, 1 ≤ i ≤ n. A star and the complete graph with n vertices is denoted by S n and K n , respectively. We recall now some concepts from Matrix Theory used throughout the text. In this paper R stands for a Hermitian complex matrix of order n and M represents any square complex matrix. It is well known that for a Hermitian matrix its singular values and the absolute values of its eigenvalues coincide. The energy of R, denoted by E (R) , is the sum of the absolute values of the eigenvalues of R. Note that, if R is a non-negative matrix, then R is symmetric and its spectral radius, ρ = ρ(R), and its largest eigenvalue coincide. For an arbitrary square matrix M of order k with eigenvalues µ 1 , . . . , µ k , its nullity, denoted by η(M), corresponds to the multiplicity of its null eigenvalue. Thus, if M is non-singular then η(M) = 0. Note that, for a graph G, the nullity of A(G) is called the nullity of G and it is denoted by η(G). Consequently, a graph G is called non-singular if η(G) = 0 otherwise, G is called singular. In the text we denote by e the all ones vector. The k-th elementary symmetric sum of the eigenvalues µ 1 , µ 2 , . . . , µ n of a square matrix M of order n is defined as Note that Υ n (M) = det(M) and Υ 1 (M) = tr(M), with tr(.) denoting the trace of a square matrix. For a square matrix M of order n, let M[i 1 , i 2 , . . . 
, i k ] be the principal submatrix of M whose j-th row and column are labeled by is a principal minor of order k of M and it is denoted by ∆ M (i 1 , i 2 , . . . , i k ). In [21] it is shown that The Frobenius matrix norm of a square complex matrix M, denoted by |M| , is defined as the square root of the sum of the squares of its singular values. In consequence, if R is a symmetric matrix of order n with eigenvalues α 1 , α 2 , . . . , α n , The paper is organized as follows. At Section 2 some motivation in connection with Chemistry and known lower bounds for E(G) and the main results without proof are introduced. At Section 3 three cases where the lower bound 2 √ m introduced by Caporossi et al. in [5] is improved by the lower bound at Theorem 2, are presented. At Section 4 the main theorems and corollaries presented at Section 2 are proved. Namely, in this section one lower bound for E (R) is given and generalizes the lower bound for the energy in [20] to the case of Hermitian matrices with a given nullity. In [6] an increasing non-negative sequence that converges to the spectral radius of a non-negative symmetric matrix was constructed and a decreasing sequence of upper bounds for the energy of R was obtained. Therefore, using the same sequence, an increasing sequence of lower bounds for E (R) , where R has given nullity, is obtained at Section 5. Moreover, some results are applied to the adjacency matrix of a graph to obtain lower bounds for the energy of graphs. Equality cases are studied. Motivation and the main results The concept of energy of graphs appeared in Mathematical Chemistry and we review in this section its importance. In Chemistry the structure of molecules are represented by molecular graphs where its vertices stand for atoms and edges for bonds. Molecular graphs can be split into two basic types: one type representing saturated hydrocarbons and another type representing conjugated π -electron systems. In the second class, the molecular graph should have perfect matchings (called "Kekulé structure"). In the 1930s, Erich Hückel put forward a method for finding approximate solutions of the Schrödinger equation of a class of organic molecules, the so-called conjugated hydrocarbons (conjugated π-electron systems) which have a system of connected π-orbitals with delocalized π-electrons (electrons in a molecule that are not associated with a single atom or a covalent bond). Thus, the HMO (Hückel molecular orbital model) enables to describe approximately the behavior of the so-called π-electrons in a conjugated molecule, especially in conjugated hydrocarbons. For more details see [17] and the references therein. Following to HMO theory, the total π-electron energy, E π , for conjugated hydrocarbons in their ground electronic states, E π is calculated from the eigenvalues of the adjacency matrix of the molecular graph: where n is the number of carbon atoms, α and β are the HMO carbonatom coulomb and carbon-carbon resonance integrals, respectively. For the majority conjugated π-electron systems where λ 1 , . . . , λ n are the eigenvalues of the underlying molecular graph. For molecular structure researches, E is a very interesting quantity. In fact, it is traditional to consider E as the total π-electron energy expressed in β-units. The spectral invariant defined by (3) is called the energy of the graph G, and it will be denoted here by E(G) (see [11]). 
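Since E(G) is simply the sum of the absolute values of the adjacency eigenvalues, it is easy to evaluate numerically. The sketch below does this for the 4-vertex path P4 (the HMO graph of butadiene) and compares the result with two of the classical lower bounds discussed in the following paragraphs: the bound 2*sqrt(m) of Caporossi et al. and McClelland's bound. Because the displayed equations did not survive extraction here, McClelland's bound is written in its commonly cited form sqrt(2m + n(n−1)|det A|^(2/n)); treat that expression as an assumption rather than a quotation of equation (4).

```python
import numpy as np

# Adjacency matrix of the 4-vertex path P4 (the HMO graph of butadiene).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

eigvals = np.linalg.eigvalsh(A)          # real eigenvalues of a symmetric matrix
energy = np.abs(eigvals).sum()           # E(G) = sum of |lambda_i|

n = A.shape[0]
m = int(A.sum() / 2)                     # number of edges
det_A = np.linalg.det(A)

caporossi = 2 * np.sqrt(m)                                         # E(G) >= 2*sqrt(m)
mcclelland = np.sqrt(2 * m + n * (n - 1) * abs(det_A) ** (2 / n))   # assumed classical form

print(f"E(G)       = {energy:.4f}")
print(f"2*sqrt(m)  = {caporossi:.4f}")
print(f"McClelland = {mcclelland:.4f}")
```

For P4 this gives E(G) of roughly 4.47, against roughly 3.46 for 2*sqrt(m) and roughly 4.24 for the McClelland expression, illustrating that a determinant-based bound can be sharper than 2*sqrt(m).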
It is worth to be mentioned that in the contemporary literature this graph invariant is widely studied, namely the search for its upper bounds. On the other hand, lower bounds for energy are much fewer in number, probably because these are much more difficult to deduce. Some of these, recently determined, the reader should be referred, for instance, to [2,3,16,19,24]. For an arbitrary graph G, in [20] McClellands obtained the following lower bound for E(G): where det(A) denotes the determinant of the matrix A = A(G). The following simple lower bound for a graph G with m edges was introduced by Caporossi et al. in [5] and the equality case was discussed. In fact, with equality if and only if G consists of a complete bipartite graph K a,b such that ab = m and arbitrarily many isolated vertices. A lower bound for the energy of symmetric matrices and graphs was introduced in [1]. Necessary conditions for the equality were studied. Some computational experiments were presented shown that, in some cases, the obtained lower bound is incomparable with the lower bound 2 √ m. In [9], Das et al. obtained the following lower bound for a connected nonsingular (n, m)-graph: where det(A) denotes the determinant of the adjacency matrix A = A(G). The equality holds in (6) if and only if G is the complete graph K n . The last lower bound was obtained firstly considering that, for a connected graph, the following relationship holds: In [9] it was shown that the graph that attains equality in (7) is the same graph that attains equality in (6). The present work generalizes the lower bound in (4) for Hermitian matrices R, such that η(R) = κ and the lower bound in (7) for non-negative symmetric matrices R, such that η(R) = κ. The equality cases are discussed. We present now the main results of this work to be proven at Section 4. Additionally, using the increasing sequence of lower bounds for λ 1 given in [6] an increasing sequence of lower bounds for the energy of graphs with nullity κ, is obtained at Section 5. The equality holds in (8) where κ = n − 2 and S is a rank one matrix. Note that if in the above result the symmetric matrix R is replaced by the adjacency matrix of a graph G the following result is obtained. where The equality holds in (10) if and only if the nonzero eigenvalues of G have the same absolute value. Moreover, if G is connected the equality holds if and The equality holds in (11) if and only if the nonzero eigenvalues of R have all modulus equal to 1, except maybe for its largest eigenvalue. Moreover, if R has largest eigenvalue greater than 1 and tr(R) = 0 then κ, the number c of eigenvalues equal to −1 and, the number f of eigenvalues equal to 1 satisfy: Moreover, the inequality (11) is strict if R has a submatrix of order 3, say R 1 , where either The result in (7), can be generalized for all graphs, including singular graphs. The result is stated in Theorem 4. Theorem 4. Let G be a graph with n vertices with largest eigenvalue λ 1 and η(G) = κ. Then The equality holds in (12) if and only if the nonzero eigenvalues of G, except maybe for its largest eigenvalue, have all modulus equal to 1. If the largest eigenvalue of G is 1 then As a consequence of Theorem 3 the following result can be obtained. Corollary 5. 
Let R be a non-negative symmetric matrix of order n with largest eigenvalue ρ such that η(R) = κ and there exists a non-negative vector x such that Then The equality holds in (13) if and only if x is an eigenvector of R associated to ρ and all the nonzero eigenvalues of R have absolute values equal to 1, except maybe for its largest eigenvalue. Remark 6. If R (with R reducible) is partitioned into irreducible blocks with one principal main block, say W, whose spectral radius is the spectral radius of R, say ρ such that W y = ρy, then R has an associated eigenvector x = (y, 0, . . . , 0) T , and if all the nonzero eigenvalues of R have absolute values equal to 1, except maybe for its largest eigenvalue, the equality in (13) is also obtained. Three cases where the lower bound 2 √ m is improved by the lower bound at Theorem 2 In this section we present some cases where the lower bound for E(G), given in (4), 2 √ m, is improved by the lower bound in (10) presented at Theorem 2. On the other hand, if one of the parameters is fixed, say r 1 , from the inequality (18), the lower bound in (10) improves the lower bound 2 √ m whenever r 1 (0.16) ≤ r 2 ≤ r 1 (6.23) . Let G be a graph with n vertices and consider the generalized composition of the family of graphs Recall that, each vertex of V (G) is assigned to the graph H j ∈ F (see [4,23]). Then, from [4, Theorem 5] Therefore, Thus, if 0 ∈ σ(G) has multiplicity κ then 0 ∈ σ(H) has multiplicity κ + n(t − 1). The following equalities are easy to compute: Suppose that G is an (n, m)-graph with nullity κ such that inequality in (17) holds. Then, from previous equalities, H = G[H 1 , . . . , H n ], is an (n, m)-graph with nullity κ such that Proof of the main results. In this section we prove Theorems 1, 2, 3 and 4 and the Corollaries 5 and 7 described at Section 2. Proof. of Theorem 1 Let α j 1 ≥ α j 2 ≥ · · · ≥ α j n−κ be the non-zero eigenvalues of R. It is clear that Recall that ∆ R [i 1 , i 2 , . . . , i n−κ ] denotes the k × k principal minor of R. Since the geometric mean of a set of positive numbers is not greater than the arithmetic mean, and the equality holds if and only if all of them are equal, we have: By the equality in (2), the term |Υ n−κ (R)| 2 n−κ changes from a spectral invariant to a matrix invariant. Finally, the equality holds if and only if From (19), attending to the definition of imprimitivity h in [22, Section III], we have h = n − κ. Additionally, as R is symmetric, its imprimitivity index must be h = 2. Therefore κ = n − 2. Moreover R is cogredient (that is, permutationally similar), to a matrix of the form in (9) and as κ = n − 2, the block S is a rank one matrix. By [22,Theorem 4.2] it is clear that, in this case, ρ(R), (the spectral radius of R), and −ρ(R) are the only nonzero eigenvalues of R. Proof. of Theorem 2 The proof of the inequality is obtained following the same steps of the proof of Theorem 1 replacing the Hermitian matrix R by the adjacency matrix of G. For the equality case, if G is connected then A(G) is irreducible and from the equality case in Theorem 1, necessarily G = K a,b . If G is not connected then, by Theorem 1, each connected component verifies the condition (19). Therefore, it is a complete bipartite graph and the described conditions for G in the statement hold. Proof. of Theorem 3 Let α j 1 ≥ α j 2 ≥ · · · ≥ α j n−κ , with α j 1 = ρ, be the non-zero eigenvalues of R. In [9] it was proved that the real function Note that the equality holds if and only x = 1. 
Using the above result, we get where the equality holds if and only if 1 = |α j 2 | = |α j 3 | = · · · = α j n−κ . Now, suppose that R has largest eigenvalue greater than 1 and tr(R) = 0. Recalling that |R| 2 its the sum ob the squares of the absolute modulus of the eigenvalues of R, then the first equalities 1., 2. and 3. are obtained by searching solutions κ, c and f as function of n, ρ and |R| in the following system: Now we discuss the case when the inequality (11) is strict. For the sufficient conditions 1. and 2. the interlacing of eigenvalues is used considering the smallest eigenvalues of R and R 1 , respectively (see, for instance [12,Corollary 2.2]). As the smallest eigenvalue of R 1 in 1. is − √ a 2 + b 2 and imposing that this eigenvalue is smaller than −1 (note that, in this case its modulus is greater than 1 and therefore R doesn't fulfill the equality condition) then √ a 2 + b 2 > 1. For the condition in 2. the Rayleigh quotient is used and the fact that the smallest eigenvalue of a symmetric matrix is at most a Rayleigh quotient of the matrix ( [12,22]). Now, by noticing that either in 1. or in 2. we impose that R has the smallest eigenvalue not equal to −1, (using the same argument as before) the result follows. Proof of Theorem 4 The proof follows straightforward from the arguments used in the proof of Theorem 3 replacing the non-negative symmetric matrix R by the adjacency matrix of the graph G. For the equality case, and when ρ = 1, by Theorem 2 (attending that all the eigenvalues are of equal modulus) any connected component of G has nonzero eigenvalues 1 and −1 implying that they are isolated edges and therefore G is the union of isolated vertices and isolated edges, that is G = ⌊ n−κ 2 ⌋K 2 ∪ κK 1 . On the other hand, if ρ > 1 then G must have a connected component with at least three vertices and one see that any induced subgraph with three vertices of this component must be a cycle otherwise it would be a path and by 1. and from Theorem 3, A(G) would have a submatrix of the form R 1 as in 1. (and using interlacing) the smallest eigenvalue of G would not be −1. Therefore, if there exists a connected component of G with at least three vertices, it must be a complete graph, and then G = K n−ℓ ∪ κK 1 ∪ ⌊ ℓ−κ 2 ⌋K 2 with κ ≤ ℓ ≤ n − 3. Proof. of Corollary 5 Recall that from the Rayleigh quotient ρ = α 1 ≥ x T Rx x T x with equality if and only if (ρ, x) is an eigenpair of R (see e.g. [22]). Recalling that, as in the proof of Theorem 3, the real function f (x) = x − 1 − ln x, x > 0 is strictly increasing for x ≥ 1 and decreasing in 0 < x ≤ 1 ( [9]) then f (x) ≥ f (1) = 0, which implies x ≥ 1 + ln x, x > 0. Moreover, the real function g(x) = x + n − κ + ln |Υ n−κ (R)| is a strictly increasing function, then the function h = g • f is strictly increasing for x ≥ 1. From the condition x T Rx . Therefore, as E(R) ≥ h(ρ) as proved in Theorem 3, the inequality follows. If equality holds then for all nonzero eigenvalue of R, α, and different from the largest one the equality |α| = 1+ln |α| occurs only when |α| = 1 implying that α = ±1 and ρ = x T Rx x T x , as h is strictly increasing. Proof. of Corollary 7 Let G 1 be an induced (n 1 , m 1 )-subgraph of G with n 1 ≥ 1. The proof follows directly from the proof of Corollary 5 changing the non-negative symmetric matrix R by the adjacency matrix of the graph G. At this point recall that, if x is as in the statement of Remark 6 then with equality if and only if G 1 is a regular graph (see [7], for example). 
Moreover, the real function g(x) = x + n − κ + ln|Υ_{n−κ}(G)| is strictly increasing, so the function h = g ∘ f is strictly increasing for x ≥ 1. From the condition 2m1/n1 ≥ 1 we have h(λ1) ≥ h(2m1/n1). Therefore, since E(R) ≥ h(ρ) as proved in Theorem 3, the inequality in (14) follows. If equality holds in (14), then for every nonzero eigenvalue λ of G other than the largest one, the equality |λ| = 1 + ln|λ| occurs only when |λ| = 1, implying that λ = ±1 and λ1 = 2m1/n1. Therefore, G1 is a regular connected component, and the graphs in the statement follow.

An increasing sequence of lower bounds for the graph energy
In this section we obtain an increasing sequence of lower bounds for the energy of graphs. In [14], the authors built an increasing sequence {γ1^(k)}_{k≥0} of lower bounds for λ1; its explicit construction is given in [14]. The following results were then obtained.

Theorem 9. [14] Let G be a connected graph with largest eigenvalue λ1 and k ≥ 0. Then λ1 ≥ γ1^(k), with equality if and only if A^{k+2}(G)e = λ1^2 A^k(G)e.

Proof. Observe that γ1^(k) ≥ 1 for all k ≥ 0; this is an immediate consequence of Theorem 10. Since {γ1^(k)}_{k=0}^{∞} is an increasing sequence that converges to ρ, the first statement follows from the continuity of h. If equality holds in (22), then for every nonzero eigenvalue λ of G that is not equal to the largest eigenvalue, the equality |λ| = 1 + ln|λ| occurs only when |λ| = 1, implying that λ = ±1. Additionally, if equality occurs, then E(G) = h(γ1^(k)) and we are in the conditions of Theorem 4; therefore G is as in the statement. The inequality in (23) follows from the fact that if G1 is an r1-regular graph, then γ1^(k) = r1 for all k ≥ 0. Recalling the result in (7) obtained in [9], the result given in [15] is re-obtained here by taking κ = 0.
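To make the construction in this section concrete, the sketch below evaluates a walk-count sequence of the kind described above, together with the nullity-aware bound it feeds into. Two caveats: the explicit definition of γ1^(k) is not reproduced in this text, so the form coded here, γ1^(k) = sqrt(w_{2k+2}/w_{2k}) with w_j = e^T A^j e, is an assumption chosen only because it is consistent with the stated equality condition A^{k+2}e = λ1^2 A^k e (it is the Rayleigh quotient of A^2 at A^k e); and the bound function h(x) = x + (n − κ − 1) + ln(|Υ_{n−κ}(G)|/x) is inferred from the proof steps of Theorem 3 (each nonzero eigenvalue satisfies |α| ≥ 1 + ln|α|), not quoted from the missing displayed equations.

```python
import numpy as np

def walk_counts(A, jmax):
    """w_j = e^T A^j e, the total number of walks of length j, for j = 0..jmax."""
    e = np.ones(A.shape[0])
    w, v = [], e.copy()
    for _ in range(jmax + 1):
        w.append(float(e @ v))
        v = A @ v
    return w

def gamma_sequence(A, kmax):
    """Assumed form gamma1^(k) = sqrt(w_{2k+2}/w_{2k}); each term is a Rayleigh-quotient
    lower bound on lambda_1, matching the equality condition A^{k+2} e = lambda_1^2 A^k e."""
    w = walk_counts(A, 2 * kmax + 2)
    return [np.sqrt(w[2 * k + 2] / w[2 * k]) for k in range(kmax + 1)]

def energy_bound(A, x):
    """Assumed Das-type bound h(x) = x + (n - kappa - 1) + ln(|Y_{n-kappa}|/x),
    inferred from the inequality |a| >= 1 + ln|a| used in the proof of Theorem 3."""
    lam = np.linalg.eigvalsh(A)
    nonzero = lam[np.abs(lam) > 1e-9]
    kappa = A.shape[0] - len(nonzero)
    Y = np.abs(np.prod(nonzero))          # |Y_{n-kappa}|, product of the nonzero eigenvalues
    return x + (A.shape[0] - kappa - 1) + np.log(Y / x)

# Star K_{1,3}: nullity 2, and A^2 e is a multiple of e, so gamma1^(k) = lambda_1 already at k = 0.
star = np.array([[0, 1, 1, 1],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0]], dtype=float)

# Path P4: nullity 0; here the gamma1^(k) increase strictly towards lambda_1.
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

for name, A in [("star K_{1,3}", star), ("path P4", path)]:
    lam1 = np.linalg.eigvalsh(A)[-1]
    energy = np.abs(np.linalg.eigvalsh(A)).sum()
    gammas = gamma_sequence(A, 3)
    print(name, "lambda_1 =", round(lam1, 4), "gammas =", [round(g, 4) for g in gammas])
    print("  E(G) =", round(energy, 4), ">= h(gamma^(3)) =", round(energy_bound(A, gammas[-1]), 4))
```

Under these assumptions, the star attains the equality condition at k = 0, the path shows a strictly increasing sequence approaching λ1, and in both cases E(G) remains above h(γ1^(k)), as the bounds require.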
2019-03-04T16:14:19.000Z
2019-03-04T00:00:00.000
{ "year": 2019, "sha1": "3ad415ae6bd5efd24c9f3a01c510f8cb7c0c62ec", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3ad415ae6bd5efd24c9f3a01c510f8cb7c0c62ec", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
53645775
pes2o/s2orc
v3-fos-license
Are building users prepared for energy flexible buildings—A large-scale survey in the Netherlands (cid:1) Awareness of smart grids is the highest among respondents aged 20–29 years old. (cid:1) Willingness to use smart technologies and change energy behaviour are interdependent. (cid:1) Potential flexible building users were found to be 11% of the respondents. Introduction Smart grids can facilitate flexible electricity consumption, which is crucial for a future where energy demand will have to be in phase with energy generation, due to inevitable fluctuations in the availability of renewable energy [1,2].Buildings account for one-third of total energy consumption in most developed countries, which amounts to considerable potential for activating flexible electricity consumption.Building energy flexibility is related not only to physical building characteristics, but also to building users.Unlocking building energy flexibility requires building users to adapt energy use behaviours to the needs of the smart grid [2,3].Willingness to accept these changes determines how much potential flexibility buildings can provide, and thus has a considerable impact on the development of smart grid technology [4]. User interaction and user perspective were part of one of the earliest pilot project of demand side management, the Olympic Peninsula Project [14,15].With 112 households participated.At the end of the project, most of the participants (95%) would have liked to continue using the program due to the positive impact the program had on their electricity usage.This is promising for the development of smart grids, but the findings may not be representative for other districts or regions. Several other studies on the interaction between users and smart products and services have shown significant impact of users on energy consumption and load management.In general, building automation system can have large impact on building energy performance, as showed by Ippolito et al. [17].The user impact and user interaction with building control system was further presented by Graditi et al. [18].Their study showed that the real time electric and thermal control systems can reduce energy consumptions.In the study conducted by Ayodele et al. [20], providing feedback to building residents on the building peak load reduction was shown to be effective in reducing the peak load of buildings. In these existing studies, user interaction with smart technologies was one of the key testing elements to identify motivating factors for users to adopt smart grid products.However, the relatively small scale of such pilot projects restricts the validity of the results.As the development of smart grid and smart grid technologies is a global issue, a user study on a larger scale is thus essential.Therefore, we developed a questionnaire and conducted a large-scale survey in the Netherlands to gain insights into this issue on a national scale. In the questionnaire, a list of smart grid related products and services are included because they are expected to influence the daily decisions and routines of users, leading to more flexible electricity demand patterns.According to Geelen et al. [21] and Obinna et al. 
[22], in a smart grid context, the products and services available to households can be categorized as: micro-generators (e.g.photovoltaic systems), energy storage e.g.domestic hot water tanks and batteries), smart appliances e.g.heat pumps, air conditioners, dishwashers, washing machines, and clothes dryers), smart meters, building energy management systems, and dynamic energy pricing and contracts.The above technologies are included in this study and these technologies are named smart technologies, as used in [16]. Motivating factors are also part of the questionnaire.In most pilot projects in Europe, to get users involved, three motivational factors are commonly used, namely, environmental concerns, reduction of or control over electricity bills, and better comfort [23][24][25].Toft et al. [26] reported that the acceptance of smart grid technology depends on perception of the technology as helpful and effortless to use.These factors were therefore also included in our questionnaire. Kobus et al. [6] stated that the most important factor in motivating and activating users is to provide them with a dynamic price signal.Users then have a monetary incentive to move their electricity consumption to off-peak hours.Dynamic pricing is the key to utilising demand flexibility, and this mechanism has been used in various pilot projects, such as PowerMatcher [27] and Your Energy Moment [16] in the Netherlands, the EcoGrid EU demonstration in Denmark [19] and the Olympic Peninsula Project in the U.S. [28].A study on the performance of all types of smart white good appliances under dynamic pricing demand response scheme was conducted in a Belgium pilot project, Linear [29] with 58 households participated.A significant shift of flexible electricity consumption to lower price periods was observed.A high variation was also found in the energy consumption and energy flexibility among the project participants.We therefore included dynamic pricing in the present survey to investigate users' perception of it on a large scale. Unlike previous studies which have focused on community or district, this study aims to give a broader perspective by surveying a large representative sample of all households in the Netherlands.We aimed to understand the influence of individual/household characteristics, dwelling characteristics, household energy consumption, and knowledge and acceptance of smart grid technologies on the willingness of occupants to use smart technologies and change their energy use behaviour.We assessed how well building users are prepared to contribute to the energy flexibility of their buildings.We also investigated building user perceptions of smart grids and their readiness to adopt smart technologies. The paper is structured as follows: in Section 2 the methods for designing the questionnaire, conducting the survey and analysing data are presented.Section 3 includes two major parts.In Section 3.1, the survey data is presented in figures and tables to give the reader an overview of the results.Section 3.2 presents statistical analyses to identify the characteristics of potential flexible building users.Section 4 concludes the work. Questionnaire design The questionnaire consisted of questions about (1) user perceptions of smart grids, smart technologies, their willingness to use smart technologies and change energy use behaviours, and (2) sociodemographic characteristics and current energy use behaviours.These questions are listed in the Appendix. 
First, a short description of smart grids was provided, as we assumed that the concept of smart grids would be unfamiliar to the majority of building users.This description was about the concept and working principle of smart grids, and some of their possible influences on the daily lives of users.The text was as follows: An introduction to smart grids and how they work There is a mismatch between moments when the largest amount of renewable energy is generated and moments when the maximum energy is consumed.The renewable energy generation is high when the sun shines brightly and the wind blows fast.The energy consumption in our homes occurs after we wake up in the morning and when we are at home in the evening.Our energy consumption should be adjusted to match the renewable energy production to make full use of the available renewable energy.This can be done with smart grids.In smart grids, your energy consumption can be adjusted according to the renewable energy generation.When the energy generation exceeds the energy demand, the energy price will fall.At this time, you can use your home appliances (e.g., washing machine) and charge batteries cheaply.In contrast, when energy generation is insufficient, the energy price will increase and you will pay more for using your appliances.This energy price will be communicated by your energy suppliers if you have a smart meter installed, which also gives you insights into your energy consumption.Based on this, you can switch your appliances on or off.However, if you have appliances that can communicate with smart grids, these actions can be done automatically.Such appliances are called smart appliances. Next, the survey participants were asked to answer questions about their perception, willingness, and motivation to use smart grid products and services.In these questions, a 5-point Likert scale was used.Finally, the participants were asked for respondent and household characteristics, dwelling characteristics, and current energy usage, including energy bill information and heating habits. Survey and response The questionnaire was translated into Dutch and completed by ten Dutch locals with different educational backgrounds.The feedback from them was implemented so that the questionnaire would be easily understood by all test participants.The final version of the questionnaire was used for a large scale survey in July and August 2016 through a professional online survey company.The survey was restricted to subjects who were fully or partly responsible for paying their household energy bills, so the population segment younger than 20 years old was excluded from this survey.The questionnaire link was sent to contacts in the company's database selected by the following interlocked stratification, which was intended to be representative of the Dutch population: gender (female: 50% and male: 50%), age (20-29 years old: 19%, 30-44 years old: 25%, 45-59 years old: 27%, and 60 years old and above: 29%), and education level (low: 23%, middle: 48%, and high: 29%).The online survey was closed when 835 questionnaires had been completed. Data analysis The time it took to answer the questionnaire was used as a filter to select effective respondents.This filter is the same as that used in a comparable survey by Toft et al. 
[26].In our study, respondents who completed the survey in less than 5 min were excluded from the analysis as they were assumed to have answered arbitrarily.As a result, 785 respondents were classed as reliable and used in the data analysis.For these 785 effective respondents, the average time spent answering the questionnaire was 16 min. Descriptive analysis was performed to uncover user perceptions of smart grids and their impact on the daily lives of users, which is presented in Section 3.1.Statistical analysis was conducted using the statistical analysis software SPSS to analyse user readiness for energy flexible buildings, which is presented in Section 3.2.Our aim was to understand the influence of individual/household characteristics, dwelling characteristics, household energy consumption, and knowledge and acceptance of smart grid technologies on the willingness of occupants to use smart technologies and change their energy use behaviour.We therefore performed several regression analyses on the dependent variables as measures of willingness to use smart technologies, postpone home appliance start times, turn off the heating or air-conditioning for a short time, and reduce the heating temperature setting.The willingness to use smart technologies and the willingness to postpone the start times of home appliances were analysed using linear regressions.Their willingness to turn off heating or airconditioning for a short time and their willingness to reduce the heating temperature settings were examined using ordinal regressions.The regression analysis type was chosen according to the type of the dependent variables, which will be discussed in Section 3.2.In these analyses, we also used the dependent variable for each estimation as an independent variable in other analyses.In this way, information overlap between the analyses was avoided and any relations between each measure of respondents' acceptance of smart grids could be determined. Results and discussion The group characteristics of the survey respondents are shown in Table 1.The sample was compared with data from the Central Bureau of Statistics in the Netherlands [30] to verify that it is representative of the general population.For the CBS data, the percentage of each age segment was calculated using only the population above 20 years old.The education level of the Dutch population was compiled based on data from the total population aged 15-64 in 2013, which was the latest available.It can be seen that the survey sample was representative of the Dutch population. Familiarity with smart grids and smart technologies After providing an introduction to smart grids and their possible influence on the daily lives of users (as explained in Section 2.1), the respondents were asked about their awareness of smart grids prior to the survey.Five options were presented, from ''never heard of it" to ''know a lot about the concept."As shown in Fig. 1, more than 60% of the respondents were not previously aware of the concept.The rest of the respondents were aware of the concept, but only a small number of them (less than 5%) stated that they understood the concept and its consequences. When we look at familiarity across age groups (as shown in Fig. 2), we found that 48% of young people (20-29 years old) were already aware of smart grids.The highest degree of awareness was found within the age category 20-29, followed by 30-44, 45-59, and lastly 60 and above.This may indicate that younger people are more aware of smart grids than older people. 
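As an illustration of the filtering and descriptive steps described in the data-analysis paragraph above, a minimal pandas sketch is shown below. The file name and column names are hypothetical placeholders, since the raw survey export is not part of this paper, and the authors performed the actual analysis in SPSS.

```python
import pandas as pd

# Hypothetical file and column names; the raw survey export is not published.
df = pd.read_csv("survey_responses.csv")

# Keep only "effective" respondents: completion time of 5 minutes or more.
effective = df[df["completion_minutes"] >= 5].copy()
print(f"{len(effective)} of {len(df)} respondents retained")

# Share of each age group that had heard of smart grids before the survey
# (awareness coded on the 5-point scale, 1 = "never heard of it").
effective["aware"] = effective["smart_grid_awareness"] > 1
print(effective.groupby("age_group")["aware"].mean().round(2))
```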
Awareness about each smart technology is shown in Fig. 3.The options for answering this question ranged from ''never heard of it" to ''I own one."Although the awareness about each individual product or service was different, on average, more than half of the respondents knew about smart technologies, which was higher Characteristics Survey sample Dutch population data source [30] Gender 50.4% male, 49.6% female 49.5% male, 50.than their awareness about smart grids.The reason for the discrepancy might be that products or services are closer to the daily lives of respondents than power grids.Solar panels (PV), smart meters, and electric vehicles were the top three products that respondents knew about prior to the survey. Household energy saving actions and smart technology ownership Actions that were reported as already being taken to reduce household energy consumption or to use renewable energy are shown in Fig. 4. It can be seen that some action was taken by about half of the respondent households to save energy or reduce their energy bill.It also shows that energy labelling of appliances is a meaningful way to promote energy efficiency.It could therefore be advantageous to use a similar labelling system for smart technologies to promote renewable energy use.Use of night tariffs to reduce energy bills had been adopted by one-third of the respondents, even though the night tariff was only 10% 0.02 €/kWh) lower than the normal tariff.This result is in line with the statement made by Kobus et al. [6] that by providing a dynamic price signal, users have a monetary incentive to move their electricity consumption to off-peak hours.Although renewable energy generation accounted for only 5.8% of the Dutch energy generation in 2015 [30], 14% of the respondents declared that their electricity was generated partly) from renewable energy resources.This indicates that renewable energy is used as a positive aspect in the marketing of electricity.This agrees with a Danish study which also found that people are willing to pay for green energy [31].All in all, these actions together show a promising future for user acceptance and adoption of smart grids, at least in affluent countries. Current smart technology ownership in respondent households is shown in Fig. 5. Except for smart meters and solar panels, smart grid related energy systems and services are owned by fewer than 2% of the respondents.This indicates that smart technologies and smart grid services are still at an early stage in their adoption.During this early adoption stage, user engagement should be encouraged in order to support successful implementation of smart grid technologies [22]. Willingness to use smart technologies Although the ownership of smart technologies was low, respondents stated that they were positive about using smart grid products and services in the future, as shown in Fig. 
6.Among the smart technologies, smart dishwashers and smart refrigerator/ freezers were the most popular, being favoured by 60% or more of the respondents.However, a willingness to use an electric vehicle with a smart charging and discharging system was the lowest of all.This is a negative finding that was unexpected from the smart grid development scheme published by the International Energy Agency [32], in which electric vehicles were expected to play an important role in the major economies, including the Netherlands.The reasons that respondents were less willing to use an electric vehicle with smart charging and discharging is outside the scope of this study, but it should be investigated further.It seems possible that respondents were reporting their unwillingness to own an electric vehicle, with or without smart charging and discharging, rather than an unwillingness to use. Willingness to change energy use behaviour Dynamic pricing was included in the survey.The price information was embodied in questions about willingness to change energy use behaviours and to choose a control option. ( 1) Postpone the start times of smart appliances The willingness to postpone the start times of smart appliance is shown in Table 2.The majority of the respondents were willing to postpone the start of a dishwasher, washing machine, or tumble dryer.For these three appliances, the maximum delay was flexible with a substantial amount of respondents choosing any time between 20 min and 24 h.These results are comparable with real time measurements from the LINEAR pilot [5], in which flexible hours were distributed throughout 24 h, with an average time around eight hours. The willingness to postpone the use of irons, vacuum cleaners and heating systems, and the charging of electric vehicles, was also relatively high, with more than half of the respondents giving positive answers.Although the flexibility potential of irons and vacuum cleaners has not been a focus in most studies of demand flexibility, they have good potential for providing energy flexibility in Dutch residential buildings.In this study we found that irons and vacuum cleaners are mostly used in the daytime, so they could play an important role in shifting electricity demand during daytime hours.More than half of the respondents were unwilling to postpone the use of an oven, which is mostly used in the evening when there is peak energy demand in residential buildings.This indicates that the energy flexibility potential of ovens is relatively low, as expected. 
(2) Turning off heating and air-conditioning systems, and reducing heating temperature settings The willingness to change energy use behaviour, including turning off heating or air-conditioning for a short time and the willingness to decrease heating temperature set points, is shown in Table 3.The respondents were willing or slightly willing to turn off their heating or air-conditioning for a short time when electricity price peaks.On the other hand, respondents had a slightly lower willingness to reduce the room temperature during high electricity price periods.This might be because temperature is a word that is more directly related to thermal sensations.Some respondents commented, ''My wife and I always prefer to have a warm living room during the long winter," indicating that heating systems are an important aspect of comfortable living.It could be hard for some of the respondents to imagine that a slightly lower temperature setting might not affect their comfort.A study in Denmark discovered that two degrees of temperature variation was accepted by occupants [31].Further research in the Netherlands should focus on field tests to evaluate the effect of short duration temperature changes on the thermal comfort of Dutch residents. Real time indoor temperature displays should be provided for residents in their home energy displays or by other means. Preferred control options Dynamic energy pricing is assumed to be the key factor that stimulates active control over appliances.Possible control options with their availability and potential economic benefit were described as follows: You can reduce your energy bill using the following three control options.(1) You can check the hourly energy price on your home energy display and manually control your appliances based on your own decision (called manual control).However, this will only yield a small energy bill reduction.(2) Your home energy management system can automatically turn your appliances on or off based on your preferences and the hourly energy prices received from your energy suppliers (called home automatic control).This involves you being partly in control and still reduces your energy bill by a medium amount.(3) You can set the final finishing time for your appliances and let your electricity supplier remotely control the start time of your appliances (called grid remote control).This control can generate relatively high savings on your energy bill. In this study, grid remote control (also called direct load control) by the utility and home automatic control via home energy management systems were considered as possible solutions.Fig. 
7 shows respondent preferences for control options.For each appliance, the preferred control was either grid remote control or home automatic control.On average for all technologies, half of the respondents stated that they preferred one of these two options.This result is in line with a finding in a Danish study of heat pumps that heat pump owners are willing to let their heat pump be controlled when incentives are applied [33].The willingness to accept non-manual control of their technologies was much higher in comparison to previous studies carried out in Portugal [34] and Great Britain [35].In the Portuguese study [34], only the option of direct load control from the utility was given.With three options (remote control, automatic control, and switch from manual control to remote or automatic control) provided in this study, the willingness to accept non-manual control increased dramatically, with on average only a minority (33%) of respondents preferring manual control or no control.This result indicates that user preferences for control options differ.Multiple control options should be considered and investigated in the development of smart technologies.Table 3 Behaviour change potential (1 = ''strongly willing", 2 = ''willing", 3 = ''slightly willing", 4 = ''unwilling", 5 = ''I do not care"). Behaviour change Mean r Willingness to turn off the heating or air-conditioning system for a short time when energy price peaks 2.59 1.04 Willingness to reduce the room temperature setting for the heating system when energy is expensive The issues of privacy and control might be judged important by users, since information about energy usage patterns and energy consumption for their buildings would be collected via smart meters and sent to the electricity company.In a previous study, privacy issues were cited as one of the primary barriers to choosing remote or automatic control and accepting smart grid technologies [36].However, this factor does not seem to have much impact for Dutch residents.In fact, privacy was only stated by 28% of the respondents to be an important factor in considering a smart appliance. Motivating factors for adopting smart technologies Previous research has discussed several factors that motivate users to adopt smart technologies in smart grids or smart homes [26,34,[36][37][38][39].Based on these studies, we developed eleven factors for our questionnaire and used a Likert scale with 1 = ''strongly motivating," 4 = ''not motivating," and 5 = ''I do not care" as choices.The results are shown in Table 4. Across all of the factors, the mean value was 2.54, or motivating to slightly motivating.The two most motivating factors were reduced energy bills and financial rewards from the energy supplier.For these two only 9 and 11%, respectively, of respondents stated ''not motivating" or ''I do not care."One clearly not motivating factor was sharing the results on social media, with 51% of respondents choosing ''not motivating" and 12% choosing ''I do not care."Besides the financial benefits, other motivating factors included seeing the effects of their energy use actions, reducing CO 2 emissions, and being acknowledged.This indicates that people expect to see the effects of their efforts and to be recognized for their contribution.These factors should be considered during product design, for instance by incorporating them into a home energy display, and into smart grid business model development. 
Identification of potential flexible building users based on statistical analysis
In this section, the identification of potential flexible building users is presented based on the data shown in Section 3.1. Respondent willingness to use smart technologies and to change their energy use behaviours were used as dependent variables in the statistical analysis. The independent variables consisted of individual and household characteristics, dwelling characteristics, household energy usage, familiarity with smart grid technologies, and respondent energy attitudes. The correlations between these independent variables were checked prior to the regression analyses, and the results show that none of the variables are correlated with each other; that is, the variables found to be significant in the regression analyses were independent of each other. Table 5 shows the results of these analyses. In the table, the values indicate the coefficients, and each significant coefficient is marked with an asterisk (*).
Willingness to use smart technologies
In the questionnaire, respondents were asked whether or not they would be willing to use each smart technology, and the results can be seen in Fig. 6. For the statistical analysis, we summed the number of smart products that each respondent would be willing to use. This summation gave an overall score for each respondent, indicating the willingness of that respondent to use smart technologies in general: the higher the number of smart technologies that a respondent is willing to use, the more willing the respondent is to use smart technologies in general. Since this overall score is a continuous value, we used a linear regression. The results can be seen in Table 5 (column: Use ST). The adjusted R-square was found to be 0.164. Young respondents, aged between 20 and 29 years old, were found to be significantly more willing to use smart technologies, which is also in line with the findings on their familiarity with smart grids. If the respondents were living in a dwelling that was less than 50 m² in floor area, then they were less willing to use smart technologies. This might be due to low energy consumption or low income. In addition, if respondents did not know the impact of their energy bill on their household budget, they were more willing to use smart technologies. Compared to those with low energy bills, respondents paying energy bills between 100 and 150 Euro per month were found to be more willing to use smart technologies. When we look at their attitudes, respondents who were more willing to temporarily reduce the set-point temperature of heating, postpone the start time of appliances, and use one of the control options were significantly more willing to use smart technologies.
Willingness to postpone the start time of appliances
In the questionnaire, respondents were asked for how long they would be willing to postpone the start time of each appliance, given that they would obtain some financial benefit. We assigned a score of one if they were willing to postpone an appliance at all, regardless of for how long, and summed these scores over all appliances. The total score was used in the statistical analysis; it indicates the number of appliances that each respondent would be willing to postpone using. Since this score is a continuous variable, we used a linear regression to estimate the model. The adjusted R-square was found to be 0.309.
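A minimal sketch of the scoring and linear-regression step described above is given below; the file name, column names and predictors are illustrative assumptions, not the questionnaire's actual variables.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical data file, one row per respondent

# Overall willingness score: number of smart technologies a respondent would use.
# Columns named willing_st_* are assumed to be 0/1 indicators, one per listed technology.
st_cols = [c for c in df.columns if c.startswith("willing_st_")]
df["use_st_score"] = df[st_cols].sum(axis=1)

# Linear regression of the continuous score on respondent characteristics.
model = smf.ols(
    "use_st_score ~ C(age_group) + C(dwelling_area) + C(monthly_energy_bill)",
    data=df,
).fit()
print(model.summary())       # coefficients; significant ones would be starred as in Table 5
print(model.rsquared_adj)    # adjusted R-square (0.164 is reported for the Use ST model)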
The results indicate that middle-income respondents were willing to postpone more appliances compared to low-income respondents. Moreover, respondents with energy bills between 100 and 200 Euro per month were willing to postpone more appliances compared to respondents paying less than 100 Euro per month. Using a heating system with a constant temperature setting was found to have a strong positive influence on the number of appliances that a respondent was willing to postpone using. In addition, using the heating only when someone is present was also found to have a positive effect on the number of appliances that respondents were willing to postpone using. It was found that increasing familiarity with the smart grid increased the number of appliances that a respondent was willing to postpone using. When we looked at their attitudes, we saw that respondents who were willing to turn off heating or cooling for a short time were willing to postpone using a larger number of appliances. Lastly, people who were more willing to use one of the control options were more willing to postpone the start time of their appliances.
Table notes: 3 "Turn-off" stands for the willingness to turn off heating or air-conditioning for a short time. 4 "Reduce temp" stands for the willingness to reduce the heating temperature setting. 5 "Other" includes students and unemployed. 6 "Other" includes a shared house or apartment and a student dormitory. 7 "Constant" means that the heating system is always turned on at a constant temperature set-point. 8 "Different" means that the heating system is always turned on, but has different temperature set-points for different times of the day. 9 "Someone constant" means that the heating system is turned on only when someone is at home, at a constant temperature set-point. 10 "Someone different" means that the heating system is turned on only when someone is at home, and has different temperature set-points for different times of the day. 11 "Not in use" means that the heating system is not often used.
Willingness to turn off heating or air-conditioning for a short time
The respondents were asked how willing they would be to turn off their heating or air-conditioning system for a short time when the energy price peaked. They were given a 5-level Likert scale, from "strongly willing" to "I do not care", as described in Section 3.1.4. Since the dependent variable was ordinal, we conducted an ordinal regression analysis. The McFadden R-square was found to be 0.112. In the social sciences, an R-square around 0.20 is considered to indicate a good fit. Falk and Miller [40] recommended that R-square values should be equal to or greater than 0.10 in order to explain the variance adequately. Therefore, the results of this analysis are valid for obtaining insights into the willingness of respondents. According to the results, no sociodemographic influences, dwelling characteristics, or energy-bill-related variables were found to be significant. Only familiarity and attitudinal effects were observed. If respondents were familiar with smart technologies, then they were more willing to turn off their heating or air-conditioning for a short time when the energy price peaked. This finding is in line with studies of knowledge and attitudes regarding energy savings in other countries, such as [41] and [42], in which correlations between knowledge and attitudes were identified.
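A minimal sketch of the ordinal regression reported in this subsection follows; the variable names are assumptions, and the McFadden R-square is computed against an intercept-only (marginal-frequency) baseline.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey_responses.csv")   # hypothetical data file

# Ordinal outcome: 5-level Likert code for willingness to turn off heating/AC.
y = df["turn_off_willingness"]
# Illustrative predictors (the questionnaire's exact variables are not reproduced here).
X = df[["familiar_with_smart_grid", "n_smart_techs_willing", "reduce_temp_willingness"]]

fit = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(fit.summary())

# McFadden pseudo R-square: 1 - ll_model / ll_null, with the null log-likelihood
# taken from the marginal category frequencies.
counts = y.value_counts()
ll_null = float((counts * np.log(counts / counts.sum())).sum())
print("McFadden R-square:", round(1 - fit.llf / ll_null, 3))   # 0.112 is reported in the text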
Moreover, as the number of technologies that they were willing to use increased, their willingness to turn off their heating or airconditioning also increased.In addition, with an increase in willingness to reduce the heating temperature setting, the willingness to turn off heating or air-conditioning systems also increased.Finally, as the willingness to control smart technologies increased, willingness to turn off the heating or air-conditioning system also increased. Willingness to reduce heating temperature settings Respondents were asked how willing they would be to reduce the room temperature setting for their heating system when energy price peaked.This was measured by a 5-level Likert scale, with ''strongly willing," ''willing," ''slightly willing," ''unwilling," and ''I don't care."Since the dependent variable was ordinal, we applied an ordinal logit regression analysis.The McFadden R-square was found to be 0.088.Although this value is low, these results can be used for understanding the behaviour of respondents. According to the results, male respondents were found to be less willing to reduce the temperature setting of the heating system compared to female respondents.Moreover, households with more members were less willing to reduce the heating temperature setting.Respondents living in detached houses were more willing to reduce the heating temperature setting compared to respondents living in other types of dwellings. In the attitudinal effects, it can be seen that with an increase in the willingness to use smart technologies, the willingness to reduce the heating temperature setting when energy price peaks also increased.Finally, we found that, with an increase in the willingness of respondents to turn off heating or air-conditioning, their willingness to reduce the temperature setting of the heating system when energy was expensive also increased. Potential flexible building users The statistical analysis shows that some individual, household, and dwelling characteristics, such as age, gender, house type, house size, household size, and income, influence willingness to adopt some smart grid technologies.However, these variables were not found to have a significant impact on overall willingness.These results therefore do not allow us to make generalized conclusions regarding the identification of population groups in terms of their readiness for energy flexible buildings.The reason could be that smart grids and their related technologies are in general unfa-miliar to the population.This is different from other studies of user perception of energy conservation in dwellings, such as Hara et al. [43].Their study based on a large-scale survey in Japan found that family size, age, household income and number of air conditioners are determinant factors of the respondents' perception of household energy conservation.We believe this can be explained by building energy conservation being a familiar topic to residents while smart grids and energy flexible buildings are not. Furthermore, we find that household energy attributes, such as the average energy bill, the impact of the energy bill on the family budget, and their habitual usage of heating systems all influenced their willingness to adopt smart grid technologies.These influences were more marked for willingness to postpone the start of home appliances and to use smart technologies.In addition, increasing familiarity with smart grid technology also increased willingness to change energy use behaviour, as expected. 
When we look at the attitudinal variables, we can see that there was an interdependency between the variables that define willingness to adopt smart grid technology (as illustrated in Fig. 8): (1) people who are willing to postpone the start of home appliances are also willing to use smart technologies and vice versa; (2) people who are willing to use smart technologies are also willing to turn off heating or air-conditioning for a short time and to reduce the heating temperature setting; (3) people who are willing to turn off heating or air-conditioning for a short time are also willing to reduce the heating temperature setting and vice versa. According to these results, we define potential flexible building users as those who are willing to use smart technologies and change their energy use behaviours, including postponing the start of appliances, turning off heating or cooling for a short time, and reducing the heating temperature setting.To estimate the number of potential flexible building users, we assume that flexible building users are: willing to postpone the start time of half or more of their appliances, willing to use half or more of the smart technologies listed in the questionnaire, willing to turn off their heating or air-conditioning, and willing to reduce the heating temperature setting. Based on this assumption, we found that 11% of the respondents were potential flexible building users.Although this value is somewhat arbitrary and dependent on the above criteria, it gives a rough understanding of the readiness of inhabitants for energy flexible buildings. Conclusions This paper presented the investigation of the readiness of building users on energy flexible buildings, an area that has not been greatly explored in existing literature.A large-scale survey with usable results from 785 respondents was conducted in the Netherlands to investigate residential building occupants' perceptions of smart grid technologies and their readiness to use energy flexible buildings.The survey respondents were representative of the Dutch population based on comparison with data from the Central Bureau of Statistics in the Netherlands.According to a descriptive analysis, more than 60% of the respondents were unaware of smart grids.However, young respondents were more aware of smart grids than older respondents.The two smart technologies that respondents were most willing to use were smart dishwashers (65%) and smart refrigerator/freezers (60%).A majority of the respondents were willing to change their energy use behaviour, including turning off their heating or air-conditioning for a short time, reducing the room temperature setting for the heating system, or postponing the start time of home appliances.For the control of smart technologies, a majority accepted one of the four control options: grid remote control, home automatic control, manual control, and try manual control first later switch to grid remote control or home automatic control.The level of acceptance was much higher than has been found in other studies.This result indicates that multiple control options should be included in the development of smart technologies to achieve high user acceptance and therefore realize the energy flexibility of home appliances.The top three motivating factors for users adopting smart technologies were found to be: reduced energy bills (strongly motivating), financial rewards from the energy supplier (motivating), and seeing the effects of energy use actions (motivating). 
The regression analysis indicated that young people (20-29 years old) were more willing to use smart technologies.We also found that household energy consumption, in terms of the average monthly energy bill (100-200 Euro) and heating system usage (keeping heating constantly on) influences the willingness of users to adopt smart grid technology.This might be due to the monetary considerations of people regarding their electricity and heating expenditures.Moreover, increasing familiarity with smart grid technology had a positive influence on the willingness to change energy use behaviours.These analyses also reveal interdependency between variables that determine willingness to adopt smart technologies and changing energy use behaviours.Accordingly, we defined potential flexible building users as those who are willing to use smart technologies and change their energy use behaviours, including turning off heating or air-conditioning for a short time, reducing the heating temperature setting, and postponing the start times for their home appliances.With certain assumptions, 11% of the respondents were found to be potential flexible building users. These results obtained from the study provide important insights for energy policy and energy companies to make policies and strategies towards the development of future power grid and encouraging building users to be more flexible.For example, in order to encourage people to adopt smart grid technology, awareness of smart grids must be increased.Awareness should not be limited to young people, but should be disseminated to the entire population.The adoption of smart grids can also be increased through financial incentives by focusing on residents with midlevel energy bills.It appears that people who are willing to use smart technologies are also willing to change their energy use behaviour, and can thus be defined as flexible.In order to unlock building energy flexibility, the adoption of smart technologies should be encouraged by providing incentives such as financial rewards.For the following appliances, which control do you prefer to use? 
- smart washing machine, smart tumble dryer, smart dishwasher, smart refrigerator/freezer (ensuring no loss of food quality), smart heat pump heating/cooling system (ensuring comfort temperature), hot water storage tank with smart charging and discharging, battery for electricity storage with smart charging and discharging, electric vehicle with smart charging and discharging
- (grid remote control with big savings, home automatic control with medium savings, manual control with small savings, no control and no savings, try the manual control first and later on switch to automatic control or remote control)
How willing are you to turn off your heating or air-conditioning system for a short time when the energy price is at a peak? How willing are you to lower the room temperature setting for your heating system when energy is expensive? (strongly willing, willing, slightly willing, unwilling, I do not care)
Motivating factors: Are the following measures motivating you to accept smart grids and use smart appliances?
- being flexible at using energy, contributing to the reliability of the electricity grid, seeing the effects of your energy use actions, reducing CO2 emissions, reducing the energy bill, being acknowledged for your efforts, receiving a financial reward from your energy supplier, giving your house a more sustainable character (e.g., PV installed on the roof), making your house high-tech, comparing with other households, sharing your results on social media
- (strongly motivating, motivating, slightly motivating, not motivating, I don't care)
Fig. 8. Interdependencies found among four measures of willingness to adopt smart technologies.
Table 2. Willingness to postpone smart appliance start time.
Table 5. Aggregation analysis of user willingness.
Table A1 (continued) Aspects and questions: How long are you willing to postpone the start of the following appliances in order to use cheap energy?
- washing machine, tumble dryer, dishwasher, refrigerator/freezer, clothes iron, vacuum cleaner, oven, air-conditioning, heating system, charging electric vehicle battery
- (0 hours, 20 min, 40 min, 1 hour, 2 hours, 4 hours, 6 hours, 8 hours, any time within 24 hours)
2018-11-10T23:15:33.286Z
2017-10-01T00:00:00.000
{ "year": 2017, "sha1": "60ee26aa1d8c24380111aa49b9c9c582d3cf85b3", "oa_license": "CCBY", "oa_url": "https://backend.orbit.dtu.dk/ws/files/134815622/Untitled.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "0326e170c8342047c993afdb4c77bdee770584b3", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
239659396
pes2o/s2orc
v3-fos-license
Proactive changes in clinical practice as a result of the COVID‐19 pandemic: Survey on use of telepractice by Quebec speech‐language pathologists Abstract Background The coronavirus disease2019 (COVID‐19) pandemic has led to important challenges in health and education service delivery. Aims The present study aimed to document: (i) changes in the use of telepractice by speech‐language pathology (SLP) professionals in Quebec since the start of the COVID‐19 outbreak; (ii) perceptions of the feasibility of telepractice by SLPs; (iii) barriers to the use of telepractice; and (iv) the perceptions of SLP professionals regarding the main issues of telepractice. Methods & Procedures An online survey with closed and open, Likert scale and demographic questions was completed by 83 SLPs in Quebec in June and July 2020. Outcomes & Results The survey responses showed that within the cohort responding, telepractice use has increased significantly as a response to the COVID‐19 pandemic. Most respondents planned to continue using telepractice after the pandemic ends. In addition, the respondents considered telepractice to be adequate for many clinical practices but less so for others (e.g., swallowing disorders, hearing impairment). Most of the reported barriers to the use of telepractice concerned technological problems and a lack of clinical materials for online use. Confidentiality and privacy issues were also raised. Conclusions & Implications SLP professionals rapidly took advantage of existing technologies in their clinical settings to cope with the pandemic's effects on service delivery. The discrepancy between their perceptions and the evidence in the literature for some practices and populations strengthens the need for more information and education on telepractice. What this paper adds What is already known on the subject The proportion of speech‐language pathologists (SLPs) in Canada who use telepractice for clinical activities is unknown. Knowing this information became crucial in the context of the coronavirus disease 2019 (COVID‐19) pandemic because non‐essential activities were interrupted to halt the spread of the disease. What this paper adds to existing knowledge The findings from this survey study confirmed that the use of telepractice in SLP in Quebec increased significantly during the COVID‐19 pandemic. Moreover, the majority of the respondents began using telepractice because of the pandemic, and most planned to continue doing so after it ends. This demonstrates how SLP professionals rapidly took advantage of existing technologies in their clinical settings to cope with the pandemic's effects on service delivery. What are the potential or actual clinical implications of this work? Although the SLPs expressed an overall positive perception of telepractice, they also highlighted barriers to its optimal use. The findings of this study should help employers and regulatory bodies in Quebec to bring down those barriers and make telepractice in SLP a durable, effective and efficient service delivery model. became crucial in the context of the coronavirus disease 2019 (COVID-19) pan- INTRODUCTION Service delivery in speech-language pathology (SLP) is mostly provided via one-on-one assessment or treatment sessions. However, travelling to an outpatient clinic, rehabilitation centre or private clinic is sometimes a major issue for people with mobility problems. 
A growing body of literature has shown that telepractice is an efficient and effective SLP service delivery method (Molini-Avejonas et al., 2015). Telepractice has been used for the assessment of language in neurodegenerative diseases (Adams et al., 2020), intelligibility in dysarthria (Ziegler & Zierdt, 2008), dysphagia from various aetiologies (Ward et al., 2012) and childhood speech disorders (Waite et al., 2006). The efficacy of treatment interventions delivered remotely through telepractice has been studied in many clinical populations, such as individuals with stuttering (McGill et al., 2019), post-stroke aphasia (Macoir et al., 2017), dysphonia (Rangarathnam et al., 2015), developmental reading and spelling difficulties (Kohnen et al., 2020), and children with language disorders (Wales et al., 2017) . However, the adoption of this service delivery mode represents a significant change in SLP practice, which is based on direct communication, interaction, continuous adjustment with the interlocutor, corrective feedback and encouragement. Hence, studies have highlighted the need for information and education on the implementation and use of telepractice in SLP (Keck & Doarn, 2014;Overby, 2018). Moreover, there are barriers to the use of telepractice in SLP, including limited access to technology and a lack of information technology support (Henry et al., 2017;Kim et al., 2020;Nittari et al., 2020). The coronavirus disease 2019 (COVID-19) pandemic has resulted in serious public health concerns. The confinement and social/physical distancing practices implemented in many countries have led to important challenges in health and education service delivery. To reduce the impact of this pandemic, professionals of various domains, including SLP, have adopted or increased the use of telepractice (Finkelstein et al., 2020;Hincapié et al., 2020;Tenforde et al., 2020). Professional regulatory bodies, such as the American Speech-Language-Hearing Association (ASHA) and Speech-Language & Audiology Canada (SAC) quickly adapted and intervened, providing clinicians with telepractice guidelines through webinars (Speech-Language and Audiology Canada, 2020), courses (Speech Pathology Australia, 2020) and various information resources. (American Speech-Language & Hearing Association, 2020a, 2020b Canada is an extremely large country, and telepractice has typically been used in SLP to provide services to populations in remote and rural areas (Picot, 1998). However, the proportion of SLP professionals in Canada who use telepractice for clinical activities is unknown. This information is even more crucial in the context of the COVID-19 pandemic because non-essential activities have been interrupted to halt the spread of the disease. In this context, the main goal of the present study was to estimate the number of SLP professionals who have adopted telepractice in order to avoid issues, such as service breaks and long waiting lists. More specifically, the objectives were to document: (i) changes in the use of telepractice by SLP professionals in Quebec since the COVID-19 lockdown on 11 March 2020; (ii) the perceptions of the feasibility of telepractice by SLPs for clinical activities, practices and populations; (iii) the perceived barriers to the use of telepractice; and (iv) the perceptions of SLP professionals regarding the main issues with telepractice. Study population and recruitment procedure A non-probability sampling method was used to recruit French-speaking SLP professionals in Quebec. 
Participants were sought through the June 2020 monthly newsletter of the Ordre des orthophonistes et audiologistes du Québec (OOAQ; 'College of Speech-Language Pathologists and Audiologists of Quebec') which has an estimated membership of 2900 SLPs. In addition, an invitation to participate in the study was emailed to all lecturers and practicum supervisors at Laval University (n = 350). Participants were excluded if they had a student status, never used telepractice or did not complete the survey. Since the study aimed to collect non-nominative information about respondents' professional practices, it was exempted from institutional ethical approval. Survey questionnaire The survey was developed by the authors of this article, who are all SLP professors at Laval University. In total, 55 closed, open, Likert scale and demographic questions were created based on similar surveys and guidelines established by professional regulatory bodies (American Speech-Language & Hearing Association, 2020a, 2020b). The closed questions focused on clinical aspects of telepractice, and some were followed by open questions designed to obtain clarifications and detailed answers. Other open questions focused on preferred communication platforms and software, tests used in telepractice assessments, and pros and cons of telepractice use. A 5point Likert scale, from strongly disagree to strongly agree, was used for questions about attitudes toward telepractice. Finally, multiple-choice demographic questions included type of clinical practice, type of establishment (public or private), clinical populations served, Quebec administrative region of practice and years of practice. The survey was available online via Limesurvey (limesurvey.org) for 6 weeks, and a reminder was sent to potential respondents after 4 weeks. Data analysis Descriptive statistics were used to analyse the data. Responses to open questions were analysed through thematic grouping following the method proposed by Braun & Clarke (2006). An undergraduate student, familiar with qualitative methodology, first conducted all the steps of the analysis, namely the identification of the pattern of meaning and issues of potential interest in the data, the identification of the main and less prevalent themes, and the codification of the responses. All along with the analysis, the first author reviewed the thematic identification, the coding process and the respondents' responses, and codes were adjusted as required. Finally, the three co-authors proceeded to the final revision, and a consensus was obtained through discussion with the entire team. RESULTS The results sketch out the portrait of the use of telepractice by SLPs in Quebec, approximately 4 months following the COVID-19 first wave of lockdown, declared on 13 March 2020. The main survey results are presented in five sections: (i) demographics of respondents; (ii) changes in the use of telepractice in SLP in the context of the COVID-19 pandemic; (iii) perceived feasibility of telepractice in SLP; (iv) perceived barriers to the use of telepractice in SLP; and (v) perceptions of the respondents on the main issues surrounding the use of telepractice. Demographics of respondents The survey completion time was 30 min on average. A total of 85 SLP professionals completed the survey. The average length of SLP practice for the 85 SLP professionals was 11.3 years (SD = 8.7 years). Of these SLP professionals, 83 (98%) used telepractice; therefore, only their responses were included in the analysis. 
Although the majority of the respondents came from Capitale-Nationale (n = 16), Montreal (n = 16) or their metropolitan regions, 14 of the 17 regions of Quebec (i.e., the province of Quebec is officially divided into 17 administrative regions) were represented in the survey results. As shown in Table 1, the 83 respondents were a fairly accurate representation of the entire SLP profession in Quebec in terms of practice setting and case type. The majority worked in public sector and private clinics in proportions similar to those reported by OOAQ for all SLPs in Quebec (OOAQ, 2020) (public: present study = 77.6%; OOAQ = 65.8%; private: present study = 21.2%; OOAQ = 19.1%). Changes in the use of telepractice in SLP in the context of the COVID-19 pandemic As shown in Figure 1, the use of telepractice in SLP in Quebec has increased considerably since the start of the pandemic (World Health Organization, 2020) compared to the immediate pre-pandemic situation. Interestingly, 70 respondents (84%) began to use telepractice during the pandemic. Among the 13 responders who had used telepractice before the COVID-19 pandemic, six indicated 6-month experience, while the remaining seven indicated more than 1-year experience. Therefore, it is important to stress that most of the survey responses were provided by users with limited experience of telepractice. This pandemic-linked increase was observed for all clinical activities (mean number of respondents using telepractice: before pandemic = 10.14, SD = 4.6; since pandemic = 51.4, SD = 22), with a greater proportion for treatments, assessments and examination result reporting. As shown in Table 2, respondents also used telepractice for other reasons, such as providing services to patients with reduced mobility or patients in geographically remote areas. Perceived feasibility of telepractice in SLP Respondents were invited to give their opinion on the clinical activities, practices and populations for which telepractice is deemed reasonably appropriate and practical or infeasible and impractical. Telepractice was judged to be appropriate for most clinical activities, including therapeutic interventions, assessments, result reporting, discussions with parents and relatives, and counselling. It was also considered feasible for clinical practices dealing with articulation processing, oral and written language, reading and spelling disorders, grammatical skills and developmental language disorders. However, telepractice was considered less feasible for clinical practices requiring touch and proximity (e.g., oral peripheral mechanism examination) and practices related to swallowing, dysphagia and feeding disorders, orofacial myofunctional disorders, childhood apraxia of speech, phonological skills and hearing loss. Finally, respondents believed that telepractice can be used for all clinical populations except hospitalised patients, people with hearing or visual impairment, people with little or no access to technology, children whose parents do not speak or speak little French, children with autism spectrum disorders who have little or no expressive speech skills, and populations unfamiliar with technology. Perceived barriers to the use of telepractice in SLP Overall, the respondents had a very positive perception of telepractice, and most of them wanted to continue using it after the pandemic. In total, 82% (n = 68) reported having experienced a positive change in their opinion of the use of telepractice in SLP since the start of the pandemic. 
Most of them, however, mentioned barriers to the use of telepractice in SLP. A total of 137 barriers were identified by the respondents. Seven barriers were mentioned by at least 5% of participants, hereafter presented in descending order: (i) limited technical equipment (tablet, computer, microphone, headset) available (n = 33; 24.1%); (ii) absence of a reliable internet connection (n = 23; 16.8%); (iii) problems related to sound and bandwidth quality (n = 21; 15.3%); (iv) limited clinical materials available for online use (n = 19; 13.9%); (v) difficulties in accessing confidential, calm and closed premises (n = 15; 10.9%); (vi) an increase in the time needed to prepare for the sessions (n = 12; 8.8%); and (vii) insufficient number of licenses for telepractice platforms (n = 7; 5.1%). Less frequent responses included patient wariness, employer resistance and lack of training on the optimal use of telepractice. Perceptions of the respondents on the main issues surrounding the use of telepractice The respondents were invited to give their general impression of the use of telepractice in SLP and share their perceptions of confidentiality, technological, clinical practice, organisational, environmental and workspace issues. General impression of the use of telepractice in SLP In general, the respondents were satisfied (82.2%: agree = 62.2%; strongly agree = 20%) with their telepractice experience. Many mentioned being surprised by the ease of telepractice (84%: agree = 67%; strongly agree = 17%) and its benefits, such as seeing patients in their living environment and better involvement of caregivers. However, despite its effectiveness, 51.3% (agree = 37.2%; strongly agree = 14.1%) believed that assessments performed remotely via telepractice will never be as valid as those performed in person. Confidentiality issues Most respondents (83%) used secure platforms recommended by their employer or organisation, such as Zoom or Microsoft Teams. As shown in Table 3, they believed that telepractice platforms were ethical and safe and ensured the confidentiality of interventions. Nevertheless, 68% of the respondents would like to have examples of informed consent forms for telepractice at their disposal. Regarding security, 23% of the respondents relayed that they performed connection security verifications. Moreover, 35% mentioned recording meetings on secure platforms or with a password to facilitate rating, assessment and verbatim transcription; and most of them deleted the recordings following their analysis. Technological issues As shown in Table 4, the respondents were largely familiar with technology and computers (64%), but some of them thought that the clinical population that they served is not. Many of them (73%) felt comfortable in front of the camera. Most (65%) considered computer equipment flexible enough to address all clinical needs. Some feared that technical problems related to sound quality and, to a lesser extent, image quality disrupt or interfere with remote interventions. In the event of a technical interruption, approximately 60% communicated with their patients via phone to try to solve the problem. Finally, 98% evaluated the general hearing and visual abilities of their patients before and during sessions, and approximately 45% evaluated the session quality at the end of each remote meeting. Clinical practice issues As presented in Table 5, only a minority of respondents noted the increase in the ecological validity of remote assessments. 
Half of the respondents believed that telepractice limited qualitative observations. In addition, creating and maintaining a therapeutic alliance with patients was an issue for approximately half of the respondents. Most respondents believed there was a lack of assessment tests for the French-speaking population in Quebec for telepractice and standard use. Finally, most (73.5%) would like to have access to training sessions on telepractice in SLP. Organisational, environmental and workspace issues A relatively low proportion of respondents believed that telepractice saved work time (Table 6). However, approximately 60% believed that this service delivery option made it easier for schedule management to accommodate patients. The advantages of using telepractice to provide SLP services to people with reduced mobility and people in geographically remote areas was widely acknowledged. The majority of the respondents stated that they had adequate working space for telepractice. However, approximately half stated that the patients' environment was not optimal for telepractice. DISCUSSION Most governments throughout the world have rapidly adopted drastic measures (e.g., social and physical distancing, curfew, lockdown, self-quarantine) to face the COVID-19 pandemic. These measures have led to major side-effects on health and rehabilitation service delivery. In SLP, for example, service delivery in the UK during the acute COVID-19 period, changed dramatically with approximately 62% reduction in clinical caseload, 50% reduction in referrals and 50% reduction of clinical activities delivered face-to-face (Chadd et al., 2021). Telepractice was shown to be useful for coping with public health emergencies (Lurie & Carr, 2018) and has been used in many disciplines to minimise the impact of the pandemic on health and educational service delivery (Daniel, 2020;Hare et al., 2020;Mann et al., 2020). This is the first study to investigate the use of telepractice among French-speaking SLP professionals in Quebec and the first to address the change in their telepractice use during the COVID-19 pandemic. In total, 83 respondents, who represented the entire SLP profession in Quebec fairly well, participated in this study, which was conducted during the pandemic's first wave. The study findings confirmed that the use of telepractice in SLP in Quebec increased significantly during the COVID-19 pandemic. Moreover, the majority of the respondents began using telepractice because of the pandemic, and most planned to continue doing so after it ends. This demonstrates how SLP professionals rapidly took advantage of existing technologies in their clinical settings to cope with the pandemic's effects on service delivery. Similar findings have been reported in studies of SLP professionals in other countries (Aggarwal et al., 2020;Chadd et al., 2021;Fong et al., 2021;Kraljević et al., 2020). The survey included no question about the clinicians' knowledge of the literature on telepractice in SLP. Their responses must therefore be considered as a basic understanding of this field of practice, thereby reflecting common perceptions in SLP. According to the respondents in the present study, telepractice lends itself well to many SLP-related clinical activities and practices but less so to others. For instance, the assessment and treatment of swallowing disorders were considered less feasible in telepractice. 
These activities must be performed safely, as dysphagia is known to contribute significantly to mortality, especially in elderly people (Nawaz & Tulunay-Ugur, 2018). The reluctance of SLP professionals to use telepractice for dysphagia is therefore legitimate. However, a growing body of literature has shown that the clinical assessment and management of swallowing disorders through synchronous or asynchronous telepractice is safe, reliable and effective, provided that conditions for safety are met (Borders et al., 2021;Burns et al., 2019;Malandraki et al., 2013Malandraki et al., , 2021. COVID-19 is a highly contagious respiratory syndrome, that can be transmitted by aerosol during coughing, sneezing or loud speaking. In this context, the treatment of dysphagia is particularly critical for professionals across disciplines, including SLP. However, according to a recent guidance document, the assessment and treatment of dysphagia in SLP can be provided efficiently, without in-person consults and proximity procedures through telepractice (Miles et al., 2020). Although instrumental assessments of swallowing (e.g., videofluoroscopy) can be remotely supported by clinicians via telepractice, patients still need to attend the clinical service to be present for the assessment (Burns et al., 2016). The remote assessment of phonological skills in children was also considered less feasible by the respondents. This could be explained by the necessity to conduct an oral peripheral exam and closely observe the articulators' positions while producing speech sounds during assessments, namely actions that are difficult to carry out via telepractice (McLeod et al., 2013). Although previous studies have demonstrated the potential of telepractice for this clinical population (Constantinescu, 2012;McCarthy et al., 2010), the respondents in the present study judged telepractice as less adapted to providing treatment sessions to patients with hearing impairment. Compromised sound signals as well as difficulty maintaining the focus on the face required for lip reading are key factors in the reluctance to use telepractice for impairments associated with hearing loss. Finally, even though there is evidence of the efficacy of telepractice for motor speech disorders in adults and in children (Hill et al., 2006(Hill et al., , 2009Molini-Avejonas et al., 2015), respondents expressed reluctance to use telepractice for this population. Acoustic integrity issues as well as concerns about the clinical validity of practices requiring touch and proximity are potential reasons for the reluctance of the respondents. The discrepancy between the perceptions of SLP professionals and the evidence in the literature for some practices and populations strengthens the need for more information and education on telepractice, which was expressed in previous studies (Keck & Doarn, 2014;Overby, 2018) as well as by many respondents of the present study. The results of the present study also point out the importance to ensure clinicians have access to the most up-to-date evidence to support telepractice. Additionally, they prompt researchers to adopt and test methods adapted to the actual clinical practice. Many of the barriers to the use of telepractice identified by the respondents are similar to those reported in previous studies (Chadd et al., 2021;Mashima & Doarn, 2008;Molini-Avejonas et al., 2015) and include problems with technology and the internet as well as a lack of clinical materials for online use. 
Organisational barriers included access to appropriate premises and a longer time required to prepare sessions. Some respondents reported a lack of support from employers, which was manifested in an insufficient number of individual licenses available for telepractice or a resistance to change. Similar organisational barriers were identified in a recent systematic review on telemedicine around the world (Kruse et al., 2018). Barriers related to optimal service delivery conditions included management of the patients' environment, patients' loss of interest or wariness, and patients' compliance with treatment. These factors are still poorly understood in the literature on telehealth and some are linked to user acceptance of telepractice (Huis in 't Veld et al., 2010), resistance to technology (Kamal et al., 2020), and equipment training and support (Wade et al., 2012). Finally, respondents addressed confidentiality and privacy. Their belief that confidentiality and privacy are preserved in telepractice platforms might explain the low proportion of respondents who performed connection security verifications before and during sessions. However, it is the responsibility of organisations and clinicians to protect confidentiality and privacy in the context of telepractice, as is the case in traditional face-to-face practice. This not only involves using secured networks but also providing patients with information on the type of data that is collected and for what purpose (Chaet et al., 2017). The large proportion of respondents who would like to have examples of informed consent forms for telepractice at their disposal certainly represents a step in the right direction. CONCLUSION Like professionals in other domains, SLP professionals have proved responsive to the changes and turbulence brought on by the COVID-19 pandemic. They expressed an overall positive perception of telepractice, although they also highlighted barriers to its optimal use. This study had some limitations, especially the sample size. Although we consider this sample to be representative of SLP professionals in Quebec who have used telepractice for service delivery since the start of the COVID-19 pandemic, our results must be interpreted with caution. Moreover, most of the survey responders began to use telepractice during the pandemic and thus it is possible that their responses may not reflect perceptions of clinicians with experience of telepractice. Further studies are therefore needed to supplement and confirm our results. Moreover, further studies should also be conducted once the COVID-19 pandemic is over to document the continued use of telepractice in SLP and to find lasting solutions for the barriers to its use, using for example a qualitative approach with focus groups including SLPs, managers and patients. The training of SLP students to the optimal use of telepractice should also be enhanced in university programs. Another limitation lies in the absence of data collected from the clinicians on their familiarity and experience with telepractice at the time of the survey. This would have allowed us to better appreciate the novelty of the change in the delivery of SLP services. Our study must therefore be considered as a global picture of the challenges and barriers to the use of telepractice, perceived by Quebec SLPs at a particular point in time during the COVID-19 pandemic. 
Nevertheless, it is hoped that employers and regulatory bodies in Quebec will use the findings of this study to help SLP professionals bring down barriers and make telepractice a durable, effective and efficient service delivery model. The management of the work environment (e.g., access to confidential, calm and closed premises), the provision of technical equipment, software licenses and reliable internet connections are avenues that employers should explore for minimising or reducing the barriers to telepractice. The development of assessment and treatment tools adapted to telepractice in SLP should be encouraged and stimulated by regulatory bodies, in collaboration with researchers and SLP academic programs. C O N F L I C T S O F I N T E R E S T The authors report no conflict of interest. The authors alone are responsible for the content and writing of the paper. D ATA AVA I L A B I L I T Y S TAT E M E N T The data that support the findings of this study are available from the corresponding author upon reasonable request.
2021-08-31T06:24:00.969Z
2021-08-29T00:00:00.000
{ "year": 2021, "sha1": "1eff7b7185d3d558c89c37074d30d171bb420b0d", "oa_license": null, "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1460-6984.12669", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "08bd497f81958a8ea59ffff2fecf2a8cc5e00fe7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
23891418
pes2o/s2orc
v3-fos-license
Chirped-pulse oscillators: a unified standpoint
A completely analytical and unified approach to the theory of chirped-pulse oscillators is presented. The approach developed is based on the approximate integration of the generalized nonlinear complex Ginzburg-Landau equation and demonstrates that a chirped-pulse oscillator is controlled by only two parameters. It makes it easy to trace the spread of the real-world characteristics of both solid-state and fiber oscillators operating in the positive dispersion regime.
INTRODUCTION High-energy laser oscillators nowadays allow high-intensity experiments such as direct gas ionization [1], where the level of intensity must be of the order of 10^14 W/cm^2. One can soon expect pump-probe diffraction experiments with electrons, direct high-harmonic generation in gases, and the production of nm-scale structures at the surface of transparent materials. Each case calls for an intensity of the order of that demonstrated above or higher, which means generating tens up to hundreds of µJ pulses at the fundamental MHz repetition rate of an oscillator [2]. From the examples above it can be seen that high repetition rates are preferable to kHz rates (now commercially available) because the signal rates in, for example, electron experiments are usually low, and an improvement factor of 10^3-10^4 due to the higher repetition rate of the pulses significantly enhances the signal-to-noise ratio. In addition to this physical factor, existing kHz systems are more expensive, complex and less stable. There are a few ways of increasing the oscillator pulse energy E, which is a product of the average power and the repetition period: by increasing the cavity length and/or increasing the power [3,4,5,6,7,8]. The catch is that a long-cavity oscillator suffers from instabilities owing to nonlinear effects caused by the high pulse peak power P0. The remedy is to stretch the pulse and thereby decrease its peak power below the instability threshold. Recent progress demonstrating the feasibility of this approach has been achieved for Ti:sapphire oscillators operating in both the negative- (NDR) [9,10] and positive-dispersion regimes (PDR) [11,12], for near-infrared Yb-doped solid-state oscillators operating in both the NDR [5,8,13] and the PDR [13,14], and for fiber oscillators operating in the all-normal-dispersion (ANDi) (i.e., positive dispersion) regime [15,16]. The fundamental difference between the NDR and PDR is that, in the former, the Schrödinger soliton develops [17]. The soliton width T and energy E can be expressed as [18] T = √(|β|/γP0), E = 2|β|/γT, (1) where γ is the self-phase modulation (SPM) coefficient of a nonlinear medium (active crystal, fiber, air, etc.), and β is the net group-delay-dispersion (GDD) coefficient of an oscillator cavity. Since the peak power P0 has to be kept lower than the threshold value Pth in order to avoid soliton destabilization, one can estimate the maximum attainable energy as E = 2PthT. Energy scaling thus requires pulse stretching. However, the latter results from a substantial growth of the GDD, which by Eq. (1) is quadratic in energy. Hence, the pulse width increases linearly with energy (correspondingly, the spectrum narrows with E). As a result, i) energy scaling requires a huge negative GDD, ii) the soliton obtained has a large width, and iii) it is not compressible linearly. In contrast to the soliton regime, the pulse is stretched in the PDR [19] and its peak power is reduced due to the chirp ψ [20,21].
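Before turning to the PDR, a rough numerical illustration of the NDR scaling expressed by Eq. (1) may be useful; all parameter values below are illustrative assumptions, not data from the cited oscillators.

import numpy as np

# Illustrative (assumed) parameters for an NDR (soliton) oscillator:
gamma = 1e-6     # net SPM coefficient, 1/W
beta  = -1e-25   # net GDD per round trip, s^2 (negative dispersion); |beta| = 0.1 ps^2
P_th  = 1e6      # soliton destabilization threshold of the peak power, W

# Eq. (1) with the peak power at the threshold:
T = np.sqrt(abs(beta) / (gamma * P_th))   # soliton width
E = 2 * abs(beta) / (gamma * T)           # soliton energy, equal to 2 * P_th * T here

# GDD needed to keep P0 = P_th at a larger target energy (follows from Eq. (1)
# and illustrates the quadratic GDD growth with energy mentioned in the text):
E_target = 5 * E
beta_needed = gamma * E_target**2 / (4 * P_th)

print(f"T = {T*1e15:.0f} fs, E = {E*1e9:.0f} nJ, |beta| for 5x energy = {beta_needed*1e24:.2f} ps^2")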
The chirp compensates the narrowing of the spectrum with energy [20]: where ∆ is the spectrum half-width. The advantage is that such a pulse [chirped solitary pulse (CSP)] is compressible linearly down to T ≈ 2/∆ (the compression factor is ≈ ψ). As was found, the CSP can be described as a solitary pulse solution of the cubic nonlinear complex Ginzburg-Landau equation (CGLE) [21] or, more generally, the cubic-quintic nonlinear CGLE [20,22,23,24]. This equation is the generalized form of the master mode-locking equation [17,21,25,26], which provides an adequate description of mode-locked oscillators (both fiber and solidstate ones). Furthermore, the nonlinear CGLE is used in quantum optics, modeling of Bose-Einstein condensation, condensed-matter physics, study of non-equilibrium phenomena, and nonlinear dynamics, quantum mechanics of self-organizing dissipative systems, and quantum field theory [27]. Therefore, analysis of the CSP solutions of nonlinear CGLE is of interest not only from the practical but also from the theoretical point of view. Since the underlying problem is multiparameter and not integrable in the general form, there is no a uniform standpoint on CSPs developing in, for example, the ANDi fiber oscillator [28] and the chirped-pulse oscillator (CPO) [12]. In particular, the physical parameters of oscillators vary greatly and it is not clear a priori whether the mechanisms governing the PDR are unified. In this work, we propose an approximate method of integrating the generalized nonlinear CGLE and show that the CSP is its two-parametrical solitary pulse solution. As a result, the CSP characteristics are easy to trace on a two-dimensional diagram ("master diagram"). Comparison of the PDR parameters demonstrates that the CSPs formed in the ANDi fiber oscillator and in the CPO i) lie within distinct sectors of the unified master diagram, ii) belong to mainly distinct branches of the solution, and iii) vary with the parameters in different ways. The variation of the main CPO characteristics (spectrum shape and width as well as pulse stability) with the PDR parameters is analyzed along with the numerical and experimental results. A comparison of the models based on the different versions of the master equation is made. CSP SOLUTION OF THE GENERALIZED NONLINEAR CGLE The evolution of the visible, near-and mid-infrared electromagnetic fields in an oscillator can be described on the basis of the slowly-varying field approximation, when the spectral width is much smaller than the carrier frequency of the field. The field envelope A is affected mainly by i) GDD, ii) SPM (non-dissipative factors), as well as iii) saturable gain and linear loss, iv) spectral filtering, and v) self-amplitude modulation (SAM) (dissipative factors) [17,25]. When the effects of higher-order dispersion [29] and the field variation along a single oscillator round-trip [20] are negligible, the oscillator dynamics obeys the nonlinear CGLE: where z is the propagation distance normalized to the oscillator length (for a ring oscillator model; or to double its length for a linear oscillator), t is the local time [the reference frame is co-moving with a solitary pulse solution of Eq. 
(4)], σ = g − l − µ is the dimensionless saturated net gain (g, l and µ are the saturated gain, non-saturable loss and saturable loss coefficients, respectively), α is the squared inverse transmission bandwidth of an oscillator, β is the GDD coefficient, ς is the inverse saturation power of the self-amplitude modulator, and γ is the net SPM coefficient. The normalization of the slowly varying field amplitude A(z, t) is chosen such that P(z, t) ≡ |A|² is the instantaneous power. The SAM corresponds to a perfectly saturable absorber. Such a character of the SAM corresponds to a semiconductor saturable absorber mirror (SESAM). SESAMs are extensively used in the sub- and over-µJ oscillators operating in both the PDR [12] and the NDR [8]. Such a technique provides stability and self-starting ability of mode-locking and, to date, has no alternative for high-energy femtosecond lasers. The main condition that allows describing the SESAM response in the form of Eq. (4) is that the pulse width (a few picoseconds in the PDR) exceeds the SESAM relaxation time (≈100 fs) [30]. Positivity of Υ requires that b < 2 − 2a − 4√(−a), with −1 < a < 0. The condition a < 0 provides stability against continuum growth, and the inequality a > −1 means that σ cannot exceed the SAM depth. The positive ("+" sign in the first equation in Eq. (8)) and negative ("−" sign) branches of the solution (8) coincide along the curve. Eq. (8) demonstrates the existence of two control parameters defining the CSP: a and b. The physical meaning of the former is the contribution of the saturated net gain σ (i.e., saturated gain minus unsaturable loss) relative to the saturable loss (µ is the maximum saturable loss coefficient, i.e., the SAM depth). The µ-parameter is a few percent or a fraction of a percent for the solid-state CPO, but can be substantially larger for the ANDi oscillator (see Section 4). The a-parameter is not free, in fact, because it depends on the pulse energy and, thereby, on the b-parameter (see below). The physical meaning of the b-parameter is closely related to the mechanism of CSP formation [19,21,29]: i) the phase variation of the propagating chirped pulse can be balanced in the presence of positive GDD, and ii) the spreading of the chirped pulse can be balanced by spectral filtering. In the cubic nonlinear version of Eq. (4), the CSP chirp (ψ ≫ 1) takes a simple form. On the one hand, the peak power P0 relates to the SAM saturation power, that is, P0 ∝ ς⁻¹. On the other hand, since the CSP spreading is balanced by spectral filtering, one can roughly assume ∆² ∝ 1/α. As a result of these rough estimations and Eq. (9), one can see that the ratio b ≡ γα/βςµ ≈ 1 represents a balance of the factors providing the CSP existence. Returning to Eq. (8), its integration results in an implicit expression for the CSP profile [Eq. (10)], where Ω = ±√(∆² − γP(t)/β) from Eq. (6). Equation (10) allows the spectral chirp Ψ to be expressed as a function of frequency. The frequency dependence of Ψ defines the CSP compressibility [23,31]: parts of the spectrum where the chirp is strongly frequency-dependent belong to the pulse satellites after pulse compression. The chirp becomes flatter in the vicinity of ω = 0 as the stability border σ = 0 is approached. In contrast to the case of Ref. [20], the chirp is always minimum at the central frequency ω = 0. The next step is to assume that φ(t) is a rapidly varying function for the CSP [20,23]. The stationary-phase method of [23,32] then allows one to express the spectral power from the first of Eqs. (6) [Eq. (12)], where Θ(x) is the Heaviside function and Π = min{∆, Ξ} is the lesser of ∆ and Ξ.
Equation (12) demonstrates that the CSP has the spectrum truncated at ±Π (i.e. |ω| < Π). Integration of Eq. (12) allows the pulse energy to be expressed as Eq. (13). It is clear from Eq. (13) that the truncation parameter Π is equal to ∆, i.e. ∆ < Ξ. In contrast to the cubic-quintic CGLE, whose solution is the truncated Lorentz function in the spectral domain [20], the spectrum in our case is given by Eq. (14). In an oscillator, the σ- (a-) parameter is energy-dependent owing to saturation of the gain g. The simplest law of saturation is g = g0/(1 + E/Es) [26], which is valid if the active medium is short in comparison with the confocal length of the laser beam (g0 is the gain for a small signal and Es is the saturation energy). Such a law is typical for the fiber and solid-state thin-disk oscillators. The reverse relation between the confocal and active-medium lengths results in g = g0/√(1 + E/Es) [31]. These expressions can be expanded in the vicinity of σ = 0. The E*-parameter is the energy defined as the averaged power of a free-running oscillator multiplied by the cavity period Tcav; E* = (g0 − l − µ)Es/(l + µ) for the first law of gain saturation (see above and Ref. [26]), and ρ is the parameter accounting for the gain saturation for the considered saturation law. The normalizations presented in Table I reduce the three-parametrical space of Eq. (4) to a two-parametrical one (a, b) for the CSP. The resulting dimensionless equations are shown in Table II. Thus, the CSP is easily traceable on a two-dimensional plane ("master diagram") as in the case of the nonlinear cubic-quintic CGLE [20,23,33]. The master diagram on the plane (b, E*) is shown in Fig. 1. The black curve is the stability threshold σ = 0 (that is, E = E*). Above this curve, the CSP does not exist (hatched region). Along this curve, the dimensionless pulse parameters take the values given by Eq. (15) (see Tables I, II). The dashed curve in Fig. 1 shows the border between the positive (+) and negative (−) branches of Eq. (8). The CSPs providing a constant value of the saturated net-gain parameter σ correspond to the gray curves in Fig. 1 (so-called isogain curves). The isogain σ = 0 is the stability threshold (solid black curve), and only three nonzero isogain curves are shown (the corresponding values of a are superscribed in Fig. 1). Since σ is a function of E/E*, the isogain curves explain the meaning of the + and − branches (see also [20,33]). The + branch corresponds to the energy-scalable CSP. This means that E grows ∝ E* along the isogain faster than the b-parameter (b ∝ 1/β) decreases with the dispersion β. That is, staying on such an isogain curve during energy scaling (note that E*′ ∝ E*/β²) needs a comparatively slow GDD increase. As a result, the SPM increases faster than the spectrum degrades with the GDD; that is, the spectrum broadens (see Fig. 2, where a b-decrease along a + isogain corresponds to the E*′-growth in Fig. 1). The spectra are broadest on the stability border. For the + branch, the spectrum narrows with α (Fig. 2; b ∝ α). This can be explained as a result of the E*-decrease, which is necessary for staying on the isogain curve (E*′ ∝ α^{3/2} E*) (Fig. 1). That is, the SPM contribution decreases and the spectrum narrows. The − branch corresponds to the energy-unscalable CSP. This means that b scaling (b ∝ 1/β) weakly affects E* (E*′ ∝ E*/β²) (Fig. 1). Thus, the energy remains almost constant along this isogain when the GDD scales. Certainly, energy scaling is possible as well.
However, that is not an isogain process for this branch of the CSP. For the − branch the spectrum narrows with the b-decrease (b ∝ 1/β) due to the growth of the GDD contribution, which stretches the pulse when the energy remains almost constant (Fig. 2). When E* changes weakly along the isogain corresponding to the − branch, the spectrum broadens with α (b ∝ α) (Fig. 2). The explanation is that the growth of spectral filtering enhances the cutoff of the red- (blue-) shifted spectral components located on the pulse front (tail). The spectral shift at the pulse edges is a consequence of the chirp [19,21]. The growth of the cutoff shortens the CSP and, for a fixed energy, P0 increases. Since P0 ∝ ∆², the spectrum broadens. One can additionally clarify the division into the + and − branches by considering the nonlinear cubic limit of Eq. (4). Such a limit describes a low-energy CPO [21]. In this case, the exact CSP solution is ∝ sech(t/T)^{1+iψ}, and its spectral profile can be expressed through beta functions. By the method described in the previous section one readily arrives at Eq. (16), where ξ ≡ µς. Equation (16) demonstrates that i) the CSP exists only for σ < 0, and ii) its spectral width increases with b and |σ|. Comparison with Fig. 2 suggests that such behavior corresponds to the − branch. ANDi fiber oscillator In Section 2, the CSP solution of the generalized nonlinear CGLE was obtained in the limits α ≪ β and β ≪ T². The master parameter b ≡ αγ/βςµ controlling the CSP is defined by the relative but not absolute contributions from the dissipative and non-dissipative factors of the CGLE. This allows a unified standpoint on CPOs with parameters which vary within a broad range. Figure 3 shows the sector of the master diagram covering the ANDi fiber oscillator parameters (see Table III; the estimations for the parameters are based on Refs. [28,36]). Let us start at point A (Fig. 3 and Table III) corresponding to a typical set of ANDi-oscillator parameters but with a comparatively small SAM depth µ. Although the GDD value is large in comparison with that in a CPO, the spectral filter bandwidth (25 nm) is small. As a result, the excess of the ratio β/α over that for a Ti:sapphire CPO is only tenfold (see below and Ref. [20]). Simultaneously, the excess of the ratio γ/µς over that for a Ti:sapphire CPO is tenfold as well. As a result, the b-parameter is ≃1 and, dynamically, there is no substantial distinction in kind between the ANDi fiber and the solid-state CPOs. One difference is that a large GDD and a comparatively small E* shift the operational point into the − branch region (Fig. 3). Panel A of Fig. 4 shows the spectrum of the numerical solution of Eq. (4) (gray circles) and the analytical profile (12) (solid curve) corresponding to point A in Fig. 3. The analytical profile reproduces the averaged numerical one. As can be seen, the numerical solution is strongly perturbed. This effect was first reported in Ref. [37] and attributed to excitation of the solitonic internal modes. Such modes grow with the GDD and are excited when the value β/α is large. A way of suppressing such a perturbation is to increase the SAM depth µ (transition 1 from A to B in Fig. 3). The µ-growth is not an isogain process, as the |a|-parameter increases. As a result, the spectrum broadens (Fig. 4, B) according to the analytical model (transition from the lower branch of curve 2 to that of 4 in Fig. 2).
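Spectra such as those in Fig. 4 are obtained by direct numerical integration of Eq. (4) over many round trips. The sketch below illustrates one standard way of doing this, a split-step (Fourier) iteration with one step per round trip, consistent with the distributed-model assumption that the field varies little within a single round trip. Since Eq. (4) is not reproduced explicitly in this text, the right-hand side written in the comments is an assumption assembled from the terms listed in Section 2 (saturated net gain σ with the simplest saturation law, spectral filtering α, GDD β, SPM γ, and a perfectly saturable SAM of depth µ and inverse saturation power ς); the sign conventions and all numerical values are illustrative placeholders rather than the entries of Table III, and would generally need tuning before a stable chirped pulse is reached.

```python
import numpy as np

# One-step-per-round-trip split-step integration of an assumed generalized CGLE,
#   dA/dz = [ sigma + (alpha + 1j*beta)*d^2/dt^2 - 1j*gamma*|A|^2
#             + mu*zeta*|A|^2/(1 + zeta*|A|^2) ] * A,
# assembled from the terms described in Section 2 (signs and normalizations assumed).

N, T_win = 2**12, 40e-12                 # grid points, time window (s)
dt = T_win / N
t = (np.arange(N) - N/2) * dt            # local time grid (s)
w = 2*np.pi*np.fft.fftfreq(N, dt)        # angular frequency grid (rad/s)

# Illustrative placeholder parameters (NOT the entries of Table III)
alpha = 300e-30        # squared inverse gainband width (s^2), ~300 fs^2
beta  = 5000e-30       # net positive GDD per round trip (s^2), ~5000 fs^2
gamma = 1e-6           # net SPM coefficient (1/W)
zeta  = 1e-5           # inverse SAM saturation power (1/W)
mu    = 0.05           # SAM depth (maximum saturable loss)
g0, loss, E_s = 0.3, 0.1, 30e-9          # small-signal gain, linear loss, gain saturation energy (J)

A = 300.0*np.exp(-(t/2e-12)**2)          # weak chirp-free seed pulse, amplitude in sqrt(W)

for _ in range(3000):                    # round trips (z normalized to the oscillator length)
    E = np.sum(np.abs(A)**2)*dt                       # intracavity energy (J)
    sigma = g0/(1.0 + E/E_s) - loss - mu              # saturated net gain, simplest saturation law
    # non-dissipative/dissipative linear part (gain, filtering, GDD), applied in the frequency domain
    A = np.fft.ifft(np.fft.fft(A) * np.exp(sigma - (alpha + 1j*beta)*w**2))
    # nonlinear part: SPM plus perfectly saturable SAM, applied in the time domain
    P = np.abs(A)**2
    A = A * np.exp(-1j*gamma*P + mu*zeta*P/(1.0 + zeta*P))

spectrum = np.abs(np.fft.fftshift(np.fft.fft(A)))**2  # compare with Fig. 4-type profiles
```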
An important factor governing the CPO is spectral filtering, since the pulse lengthening due to GDD has to be compensated by its shortening owing to frequency filtering of the chirped pulse [21]. In the framework of the model under consideration, the filter band narrowing can be illustrated by transition 2 to point C in Fig. 3. Such a transition results in almost isogain growth of the b-parameter. Hence, for the "−" branch of the CSP (Fig. 2) the spectrum broadens (Fig. 4, C) [28]. Further filter band narrowing transforms the pulse into the "+" branch of the CSP (Fig. 4, F). Further growth of b ∝ α due to filter band narrowing destabilizes the pulse (σ > 0 at point G). The next important factor is the inverse power of the loss saturation (the SAM parameter ς). Its growth (transition 3 from C to D in Fig. 3) corresponds to a b-decrease. First, this is not an isogain process, i.e. the |a|-parameter increases, which broadens the spectrum (Fig. 2). Second, the spectrum narrows with the b-decrease for the − branch of the CSP (Fig. 2). In our example, point D is located in the vicinity of the border between the − and + branches of the CSP. In accordance with Fig. 2, this means that the spectrum broadens owing to the |a|-parameter growth (Fig. 4, D). Panel D of Fig. 4 demonstrates that, unlike the analytical profile, the numerical one is distinctly concave. This issue will be discussed in Sec. 4. A characteristic of the ANDi fiber oscillator is that it is possible to vary the positive GDD within a wide range [28]. The GDD growth decreases the b-parameter and, as a result, narrows the spectrum of the CSP belonging to the − branch (Figs. 2 and 4, E) [28]. Such a conclusion is valid for both isogain and non-isogain variation. In the latter case, the |a|-parameter decreases with the GDD growth for a fixed E*, which enhances the narrowing of the spectrum for the "−" branch (Fig. 2). Solitonic internal modes again occur [37] and the spectrum becomes perturbed (Fig. 4, E). Broadband solid-state CPO As was pointed out, there are two distinctive differences between the ANDi fiber oscillator and the solid-state CPO: the former has substantially larger GDD and SPM. Nevertheless, both oscillators can be described from a unified standpoint because their properties are defined by only two dimensionless parameters, b and E*. From this point of view, the main difference between them is that the ANDi oscillator belongs mainly to the − branch of the CSP, whereas the CPO belongs to the + branch. It should be noted that this statement should not be taken too categorically, because the growth of SAM and spectral filtering shifts the operational point of an ANDi oscillator into the + branch region (see points D and F in Fig. 3). Let us consider a broadband (Ti:sapphire) CPO with the parameters presented in Table IV. The mode-locking is provided by a SESAM with the inverse saturation power ς, which corresponds to a saturation energy fluence of 100 µJ/cm², a relaxation time of 0.5 ps and a mode radius of 100 µm. Point A in Fig. 5 corresponds to the broadest (for a given set of parameters) spectrum (≈70 nm, see Fig. 6, A). An attempt to increase the energy would destabilize the CSP. Therefore, at first it is useful to increase the modulation depth (transition 1 from A to B in Fig. 5). This is not an isogain process, and therefore the spectrum (Fig. 6, B) does not change substantially in spite of the b-decrease (the b-decrease broadens the + branch spectrum, but the |a|-growth narrows it, see Fig. 2). An increase of the inverse saturation power ς (e.g. by means of mode reduction) would be useful for subsequent energy growth. However, as a rule, the SESAM operates in the vicinity of the damage threshold, and so the ς-growth can be problematic. The stability reserve obtained (point B in Fig. 5) allows energy scaling (transition 2 from B to C in Fig. 5). Nevertheless, the twofold energy growth destabilizes the CSP in our case, and multipulsing occurs. The way out is to increase the GDD (transition 3 from C to D in Fig. 5), which shifts the operational point inside the stability range. Since the A → D transition is an isogain process following the b-decrease (curve with a = 0 in Fig. 2), the spectrum broadens (Fig. 6, D). Again, the spectrum becomes concave. It should be noted that, in an experiment, the main control parameter used for oscillator stabilization at some fixed E*-level is the GDD value. The GDD growth (corresponding to a b-decrease) is not an isogain process for the + branch of the CSP and is almost an isogain one for the − branch. Figure 2 shows that the spectrum narrows in the latter case. For the + branch, the |a|-parameter increases with GDD, which enhances the CSP stability against the continuum growth. Numerical simulations demonstrate that the effect of the GDD growth on the CSP spectrum in the case under consideration is a narrowing of the spectrum and a stretching of the CSP. Such a conclusion agrees with the experiment [12]. Figure 7 shows the spectra from the Ti:sapphire CPO corresponding to the positive net GDD growing step by step. The spectra narrow with the GDD growth and reshape from concave via flat-top to parabolic. Equation (12) reproduces the growth of convexity with narrowing of the spectrum. The concave spectra appear in the numerical simulations when the spectrum becomes sufficiently broad (Figs. 4, D and 6, D). In contrast to Ref. [33], the present analytical theory fails to reproduce this phenomenon (see Section 4). Narrowband solid-state CPO As was found in Ref. [20], there is a limit to CPO energy growth with resonator lengthening, because the modulation depth µ has to increase ∝ Tcav/Tr (here Tr is the gain relaxation time). Since broadband solid-state active media such as Ti:sapphire, Cr-doped zinc-chalcogenides, and Cr:YAG have comparatively short gain relaxation times (a few microseconds), it is preferable to use media with a long relaxation time, such as Yb-doped crystals. Moreover, the cavity lengths realized so far already correspond to repetition rates in the MHz range, meaning that the only remaining scalable parameter is the power. Power scaling is realizable in Yb-doped thin-disk oscillators. For example, the Yb:YAG oscillator operating in the NDR has exceeded the 13-µJ energy frontier [8]. Such a regime requires a fair amount of negative GDD (≈ −0.2 ps² in the case of Ref. [8]) and the pulse obtained is linearly incompressible. It is interesting to consider the prospects of such a regime within the PDR. The issue is that a Yb-doped medium has a comparatively narrow gainband (α ≈ 1000 fs²) and it is not clear a priori that the CPO can operate at GDD levels close to α. Nevertheless, Yb-doped CPOs have been demonstrated experimentally [13,14]. The CSPs obtained were compressible down to ≈450 fs and the positive net GDD varied within the range ≈250–2250 fs².
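The two coordinates on which this whole discussion rests, b = αγ/(βςµ) and the dimensionless energy E*′ = E*γα^{3/2}/(√µ β²) (the scaling rules quoted below in connection with point C), are straightforward to evaluate for any concrete oscillator. The short sketch below does this in SI units; the numerical values are hypothetical, chosen only to have plausible orders of magnitude, and are not the entries of Tables III–V.

```python
# Dimensionless master-diagram coordinates from physical oscillator parameters,
# using the scaling rules quoted in the text: b = alpha*gamma/(beta*zeta*mu) and
# E*' = E* * gamma * alpha**1.5 / (sqrt(mu) * beta**2).  All inputs in SI units.

def master_coordinates(alpha, beta, gamma, zeta, mu, E_star):
    """alpha, beta in s^2; gamma, zeta in 1/W; mu dimensionless; E_star in J."""
    b = alpha * gamma / (beta * zeta * mu)
    E_star_dimless = E_star * gamma * alpha**1.5 / (mu**0.5 * beta**2)
    return b, E_star_dimless

fs2 = 1e-30   # 1 fs^2 expressed in s^2

# Hypothetical broadband-CPO-like numbers (illustrative only, not Table IV):
b, E_dimless = master_coordinates(alpha=30*fs2, beta=100*fs2,
                                  gamma=1e-6, zeta=5e-6, mu=0.01,
                                  E_star=100e-9)
print(f"b = {b:.2f}, dimensionless E* = {E_dimless:.2f}")
```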
Let us consider a Yb:YAG thin-disk CPO mode-locked by a SESAM and aiming at an intracavity pulse energy level of ≈80 µJ. At such an energy level, an important factor is the SPM due to the air filling the resonator [7]. Such a factor has to be taken into account. Let the cavity length be 15 m, and the averaged mode diameter be equal to 2.4 mm. The Yb:YAG-disk thickness is 0.4 mm. The SESAM saturation energy fluence is 100 µJ/cm², its relaxation time is 0.6 ps, and the mode diameter on the SESAM is 1.2 mm. Other parameters are presented in Table V. Point A in Fig. 8 corresponds to the narrowband CPO operating in the vicinity of the stability border. The CPO resonator is filled with air. The CSP belongs to the + branch; the corresponding spectrum is shown in Fig. 9, A. Its width is ≈3.5 nm, which allows linear compression down to ≈300 fs. Although the ratio β/α is only 2.8, the analytical profile provides quite precise fitting of the numerical spectrum. Violation of the assumption β ≫ α underlying the analytical model results in smoothed spectrum edges. Such smoothing increases (Fig. 9, B) when the resonator becomes airless and only the active-medium nonlinearity contributes to the oscillator dynamics. Point B in Fig. 8 corresponds to the neighborhood of the analytical stability border, i.e. the almost minimum possible GDD value providing the broadest spectrum. Since β < α, the analytical model is not valid, although the pulse is chirped and remains almost fourfold compressible (down to ≈800 fs). The net-gainband narrowing (growth of α in comparison with β) can also result from spectral filtering produced by a comparatively narrowband SESAM. In this case, the α-parameter is defined by the SESAM bandwidth, and the approach of the α-parameter to the β-parameter results in smoothing of the spectrum edges in a Ti:sapphire oscillator as well (Fig. 10). Spectrum control of the narrowband CPO operating in the vicinity of the stability border can be provided by inserting a plate (e.g., a sapphire plate) introducing additional SPM (point C in Fig. 8; 0.1-cm sapphire plate). Figure 9, C demonstrates that the b-growth (b ∝ γ) provides excellent agreement between the numerical and analytical solutions. It can be concluded that the absolute value of GDD required for high-energy pulse stabilization is substantially lower in the PDR than in the NDR, so that one can avoid helium filling or vacuumization of the resonator. Nevertheless, vacuumization of the oscillator can provide the most direct way to substantial energy growth. Let us consider an example with an intracavity energy of ≈0.8 mJ for a configuration corresponding to point C in Fig. 8. The scaling rules E*′ = E*γα^{3/2}/(√µ β²) and b = αγ/(βςµ) (Table I) demonstrate that a tenfold reduction of SPM (as a result of resonator vacuumization) and of SAM (as a result of, for instance, mode growth and/or a reduction of the saturation energy fluence) allows the system to be kept at point C. This guarantees that the dynamics is preserved. However, this direct way can be unusable due to the P0-growth. One can then increase β and, simultaneously, decrease the modulation depth µ as well as ς and γ (e.g. by means of mode growth in a gas-filled resonator) in accordance with the scaling rules, keeping P0 below Pth. BRIEF COMPARISON OF THE MODELS Section 2 reports the development of a model of a CPO mode-locked by an ideally saturable absorber.
Such a model is applicable to both the ANDi fiber oscillator [28] and solid-state CPO mode-locked by SESAM. Simultaneously, there are models of a CPO based on the nonlinear cubic-quintic CGLE [20,22,24,31,38]. Such models take into account the SAM saturation (i.e. the decrease of SAM with power overgrowth), which is important in high-power Kerr-lens mode-locked oscillators as well as fiber oscillators mode-locked by a polarization modulator. As a result of cubic-quintic SAM, a variety of the nonlinear regimes appear [25]. In particular, the flat-top pulse envelope develops. Since the cubic-quintic SAM confines the pulse peak power, there is the limit for the pulse energy growth [20]. It should be noted, that the SAM saturation parameter (i.e. parameter defining the quintic term in the cubic-quintic SAM) is not truly a free parameter in a real-world oscillator. It is closely related to the inverse saturation power ς or to the selffocusing power in a Kerr-lens mode-locked oscillator (see [20]). In the last case, it is not possible to manipulate with SAM and SPM independently because both processes result from the nonlinear refraction in an active medium. Therefore such a type of SAM is out of use for the microjoule oscillators. One has to note, that underestimation of the SAM saturation parameter in the cubic-quintic CGLE can lead to a huge CSP spectral width and peak power (e.g. see [38]). Constraint on such a power growth can be provided by the gain saturation and, as a result of |a|-growth, the CPO operates in the − branch of CPO (in terms of this article, see above). Switching to the + branch of the CPO can be provided by i) GDD decrease, ii) energy growth, or iii) spectral filter band narrowing. Finally, the low-energy (− branch) sector (as well as the large positive GDD sector) of CPO can be described by the nonlinear cubic CGLE (i.e. by dissipative generalization of the nonlinear Schrödinger equation). Such a model is useful for a fiber oscillator. For the cubic SAM, approach to the + branch of CSP due to the GDD decrease can cause the collapse-like instability because there is no the peak power confinement due to SAM saturation. An important issue is the appearance of concave spectra like those in Figs. 4, D and 6, D. It is known that, for the SAM described by the nonlinear cubic-quintic CGLE, the CSP has either parabolic-or finger-top spec-tra [20,24]. However, both numerical simulations [24,38] and experiment [24,28] demonstrate more complicated spectral profiles: concave and convex-concave. In Ref. [24] the analytical concave spectra appear only for the cubic-quintic SAM, which enhances the collapse-like instability (i.e. there is no SAM saturation). In Ref. [33], the stable analytical CSPs with concave spectra are obtained for nonzero quintic SPM. For the SAM type considered in the present work, the analytical concave spectral profiles are not admissible because the first equation of (8) demonstrates that ∆ ′2 ≤ 3(1 + a). Hence, the existence of concave spectra cannot be explained on the basis of the present dissipative soliton models (with the exception of the model presented in [33]). Figures 4, 6 show that the concavity grows with spectrum broadening when 1/(2∆) 2 tends to α. Simultaneously, there is no concave spectrum, when SPM is suppressed (Fig. 9, where γ < µς). Numerical analysis demonstrates that the appearance of concave spectrum is not rooted in the character of SAM. 
Surprisingly, however, the concave spectrum profile exists even in the case of the nonlinear cubic CGLE and develops with the spectrum broadening. Figure 11 shows the spectra corresponding to point B in Fig. 3, but for the different SAM types: µς|A|²/(1 + ς|A|²) (black curve, see Eq. (4)) and µς|A|² (nonlinear cubic model, gray curve). It can be seen that the absence of SAM saturation in the second case leads to spectrum broadening due to the power growth (∆² ∝ P(0), see Eqs. (8), (16)). The analytical estimate for ∆ from Eq. (16) gives 0.0076 fs⁻¹ vs. the numerical value of 0.0074 fs⁻¹. As a result of the spectrum broadening, the concavity increases. The P(t)-profile does not have any anomalies. Thus, the growth of spectral components at the spectrum edges can be treated as a fundamental feature of the regime considered (see also Ref. [37]), without necessarily invoking higher-order dispersion [29]. CONCLUSIONS An analytical theory of the CSP has been developed. The theory treats the CSP as a solitary pulse solution of the generalized nonlinear CGLE. The main advantage of the theory developed is the possibility of representing the CPO parameter space in the form of a two-dimensional master diagram. As a result, the CSP characteristics become easily traceable. It has been demonstrated that both ANDi fiber and chirped-pulse solid-state oscillators can be described from a unified standpoint and represented on a unified master diagram. The main difference between them is that they realize mainly two different branches of the CSP solution. Such branches differ in the energy and dispersion scaling rules as well as in the behavior of the CSP parameters. Comparison with the results of numerical simulations has shown that the analytical solution provides a good approximation of the spectrum shape, which is truncated and has a flat or parabolic top. The approximation is quite precise even in the case where the net gainband is so narrow that the squared inverse bandwidth verges towards the GDD. This provides an adequate description of both ANDi fiber and thin-disk narrowband solid-state chirped-pulse oscillators. Thus, the theory allows the CPO characteristics to be optimized and demonstrates the feasibility of at least sub-mJ energy scaling in a thin-disk CPO.
2009-06-12T12:26:14.000Z
2008-11-07T00:00:00.000
{ "year": 2008, "sha1": "f24515e0555809b129d3b848bace6bb56e213ea9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0811.1078", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f24515e0555809b129d3b848bace6bb56e213ea9", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119067258
pes2o/s2orc
v3-fos-license
Disagreement in cardiac output measurements between fourth-generation FloTrac and critical care ultrasonography in patients with circulatory shock: a prospective observational study Background Cardiac output measurements may inform diagnosis and provide guidance of therapeutic interventions in patients with hemodynamic instability. The FloTrac™ algorithm uses uncalibrated arterial pressure waveform analysis to estimate cardiac output. Recently, a new version of the algorithm has been developed. The aim was to assess the agreement between FloTrac™ and routinely performed cardiac output measurements obtained by critical care ultrasonography in patients with circulatory shock. Methods A prospective observational study was performed in a tertiary hospital from June 2016 to January 2017. Adult critically ill patients with circulatory shock were eligible for inclusion. Cardiac output was measured simultaneously using FloTrac™ with a fourth-generation algorithm (COAP) and critical care ultrasonography (COCCUS). The strength of linear correlation of both methods was determined by the Pearson coefficient. Bland-Altman plot and four-quadrant plot were used to track agreement and trending ability. Result Eighty-nine paired cardiac output measurements were performed in 17 patients during their first 24 h of admittance. COAP and COCCUS had strong positive linear correlation (r2 = 0.60, p < 0.001). Bias of COAP and COCCUS was 0.2 L min−1 (95% CI − 0.2 to 0.6) with limits of agreement of − 3.6 L min−1 (95% CI − 4.3 to − 2.9) to 4.0 L min−1 (95% CI 3.3 to 4.7). The percentage error was 65.6% (95% CI 53.2 to 77.3). Concordance rate was 64.4%. Conclusions In critically ill patients with circulatory shock, there was disagreement and clinically unacceptable trending ability between values of cardiac output obtained by uncalibrated arterial pressure waveform analysis and critical care ultrasonography. Trial registration Clinicaltrials.gov, NCT02912624, registered on September 23, 2016 Electronic supplementary material The online version of this article (10.1186/s40560-019-0373-5) contains supplementary material, which is available to authorized users. Background Critically ill patients with circulatory shock have increased risks of multi-organ failure, long-term morbidity, and mortality [1]. Advanced hemodynamic monitoring in these patients may inform diagnosis and simultaneously guide management by providing insight into cardiac function, cardiac preload, and afterload [2]. Several methods for measuring cardiac output (CO) exist, varying from invasive (e.g. thermodilution by pulmonary artery catheter (PAC)) to minimally invasive (e.g. pulse contour analysis by FloTrac™ (Edwards Lifesciences, Irvine, USA)) or even non-invasive (e.g. transthoracic Doppler ultrasound by critical care ultrasonography (CCUS)). These methods all have their own merits, disadvantages and requirements [3]. One type of pulse contour analysis is the uncalibrated arterial pressure waveform analysis method to estimate CO (APCO). Reliability of APCO is questioned in patients with hemodynamic instability, and this occurs frequently in patients admitted to the ICU [4]. Therefore, CO measurements obtained by APCO should be interpreted with caution in critically ill patients with circulatory shock [5,6]. The FloTrac™ system using the APCO method calculates CO based on the principle that aortic pulse pressure is proportional to stroke volume (SV) and inversely related to aortic compliance using a proprietary algorithm. 
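As a rough illustration of the pulse-contour principle just described, the toy sketch below estimates stroke volume as the variability of an arterial pressure trace over an analysis window, multiplied by a vascular-tone factor, and converts it to cardiac output. This is emphatically not the proprietary FloTrac™ algorithm: the synthetic waveform, the window length and the scaling factor khi are made-up illustrative values, and the real algorithm incorporates waveform-morphology and demographic information that is not modelled here.

```python
import numpy as np

# Toy illustration of the pulse-contour principle: stroke volume taken proportional
# to the variability of the arterial pressure waveform, scaled by a vascular-tone/
# compliance factor.  NOT the proprietary FloTrac algorithm; all values are made up.

fs = 100                                   # sampling rate (Hz)
t = np.arange(0, 20, 1/fs)                 # assumed 20-s analysis window
hr = 95                                    # heart rate (beats per minute)
# crude synthetic radial pressure trace: mean ~70 mmHg, ~33 mmHg pulse pressure
ap = 70 + 16.5*np.sin(2*np.pi*(hr/60)*t) + 5*np.sin(4*np.pi*(hr/60)*t)

sd_ap = np.std(ap)                         # pressure variability over the window (mmHg)
khi   = 4.5                                # assumed vascular-tone factor (mL/mmHg), illustrative only
sv    = khi * sd_ap                        # stroke volume estimate (mL)
co    = hr * sv / 1000                     # cardiac output (L/min)
print(f"SV ~ {sv:.0f} mL, CO ~ {co:.1f} L/min")
```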
FloTrac™ has been widely studied in more than 70 validation studies as of yet, mostly showing adequate performance in normo-and hypodynamic conditions, but not in patients with large changes in vascular tone which typically occur in patients with circulatory shock [7]. However, these studies vary by the statistical methods and versions of the algorithm used. Recently, the fourth-generation algorithm was developed to improve performance. Evaluation of the trending ability rather than the agreement of absolute values of CO monitoring devices is increasingly considered in validation studies for assessment of potential clinical usefulness [8]. In addition to one single CO measurement for diagnosing circulatory shock, repeated measurements of CO informing the trending ability could be informative for monitoring and guidance of supportive treatments of patients with circulatory shock. The aim of our study was to compare both agreements and trending ability for APCO measurements of CO (CO AP ) with CO routinely measured by CCUS (CO CCUS ) in critically ill patients with circulatory shock. CCUS was chosen as the reference standard since it is the preferred method for diagnosis, but not for monitoring, of circulatory shock in critically ill patients and is widely available [2,9]. Importantly, it should be noted that CCUS is not a gold standard reference technique for method comparison studies aiming to evaluate the validity of CO monitors [10]. Methods This study was a substudy of the Simple Intensive Care Studies-I (SICS-I), which was a single-centre, prospective, observational cohort study in which all consecutive acutely admitted adult patients expected to stay beyond 24 h were included (NCT02912624) [16,17]. The STROBE guidelines for reporting observational studies were used (Additional file 1) [11]. The checklist for CO monitor method comparison studies was used [10]. The local institutional review board (Medisch Ethische Toetsingscommissie, University Medical Center Groningen) approved the study (M15.168207 and M16.193856). Written informed consent was obtained from all patients. Selection criteria In this substudy, all consecutive acutely admitted adult patients with suspected circulatory shock and expected to stay beyond 48 h were included from June 2016 to January 2017. The circulatory shock was defined as the requirement of any dose of vasopressor to maintain a mean arterial pressure (MAP) of 60 mmHg or if the MAP remained below 70 mmHg despite fluid resuscitation (defined by at least 1000 mL of crystalloids). In addition, at least one other sign of organ or tissue hypoperfusion had to be present: altered state of mind (Alert-Voice-Pain-Unresponsive scale) [12], mottled skin (Mottling score ≥ 1 [13]), decreased urine output (≤ 0.3 mL kg −1 h −1 ) or increased serum lactate level (≥ 2 mmol L −1 ). Exclusion criteria were inability to obtain sufficient quality CCUS images; no arterial line; atrial fibrillation; and aortic valve or mitral valve diseases known to impair the arterial waveform. We included this group of patients because CO measurements are indicated to identify the type of shock, select necessary therapeutic interventions and evaluate patient's response to therapy [2]. Objectives The primary objective was to evaluate CO AP measurements in terms of the agreement and trending ability against CO CCUS as reference technique in patients with circulatory shock. 
Definitions and bias Patient characteristics including clinical, hemodynamic and laboratory variables as well as Acute Physiology And Chronic Health Evaluation (APACHE) IV and Simplified Acute Physiology Score (SAPS) II values were recorded [14,15]. Measurements were performed following protocolized definitions and procedures [16,17]. In short, CO CCUS was measured by transthoracic echocardiography using the Vivid-S6 system (General Electric, Horton, Norway) with cardiac probe M3S or M4S, and with default cardiac imaging setting. The parasternal long axis was used to measure the left ventricular outflow tract diameter. In the apical five-chamber view, a pulse wave Doppler signal in the left ventricular outflow tract was used to measure the velocity time integral. CO CCUS was calculated using an established formula [18]. CCUS was performed after ICU admission within 6 h and repeated once every 24 h after admission provided there was no interference with clinical care. Researchers were trained in performing CCUS by experienced cardiologist-intensivists. The FloTrac™ sensor was connected to an indwelling radial artery catheter and an EV1000™ monitor (version 4.00; Edwards Lifesciences, Irvine, USA), which continuously displayed CO AP values. The value of CO AP displayed on the EV1000™ monitor was registered simultaneously (i.e. 'beat-to-beat') with each CO CCUS measurement. All measurements, including CCUS findings, were kept blind for the caregivers. Quality of CCUS images and CO CCUS measurements were validated by an independent specialized core laboratory (Groningen Image Core Lab) blinded for the CO AP measurements Statistical analysis No formal sample size calculation was performed due to lack of data on CO AP variation in patients with circulatory shock. Therefore, this study has an exploratory nature. Data were presented as means with standard deviations or medians with interquartile ranges depending on distributions. Normality of data was checked using the Shapiro-Wilk test. Dichotomous and categorical data were presented in proportions. Correlations were assessed by scatter plot, and the strength of linear correlation was determined by calculating a Pearson (r) coefficient. Bland-Altman analyses of repeated measurements in each patient were performed and means (bias) and SD of the differences, 95% limits of agreement (LOA) (=mean difference ± 1.96 × SD of the difference) as well as the percentage error of CO AP versus CO CCUS were calculated [19,20]. To evaluate the trending ability of CO AP versus CO CCUS a four-quadrant plot was used and the concordance rate was calculated using an exclusion zone of 0.5 L min −1 [21]. For statistical analysis, we used STATA version 15.0 (StataCorp, College Station, USA). Participants During the study period, 184 patients were diagnosed with circulatory shock, but only 24 patients appeared eligible for this study. One hundred patients who had circulatory shock were not included as they were expected to stay for less than 48 h, and 60 patients with circulatory shock were not included because CCUS was not possible or image quality was insufficient to perform measurements. Six patients had to be excluded because study procedures interfered with clinical care, leaving 18 patients to be included. One patient was excluded afterwards for invalid CO AP measurements due to improper use of a FloTrac™ sensor. Thus, 17 patients were included in the final analyses (Fig. 1). Trending ability For assessment of trending ability 72 paired measurements were analysed. 
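Before turning to the trending results, the agreement and trending metrics defined above (Bland-Altman bias and limits of agreement, percentage error, and four-quadrant concordance with a 0.5 L min−1 exclusion zone) can be made concrete with a short computational sketch. The paired values below are hypothetical and are not the study data; the pooled Bland-Altman calculation shown here also ignores the repeated-measurements correction of Bland and Altman used in the actual analysis, and conventions for applying the exclusion zone differ slightly between authors.

```python
import numpy as np

# Hypothetical paired cardiac output values (L/min); not the study data.
co_ref  = np.array([4.2, 5.1, 6.3, 3.8, 5.6, 4.9, 7.0, 5.4])   # reference (e.g. CCUS)
co_test = np.array([4.8, 4.6, 6.9, 4.4, 5.0, 5.8, 6.1, 5.9])   # test method (e.g. APCO)

# --- Agreement: Bland-Altman bias, limits of agreement, percentage error ---
diff    = co_test - co_ref
bias    = diff.mean()
sd      = diff.std(ddof=1)
loa     = (bias - 1.96*sd, bias + 1.96*sd)
pct_err = 100 * 1.96*sd / co_ref.mean()      # Critchley-style percentage error

# --- Trending: four-quadrant concordance with a 0.5 L/min exclusion zone ---
d_ref, d_test = np.diff(co_ref), np.diff(co_test)
keep = np.abs(d_ref) > 0.5                   # exclude small reference changes (conventions vary)
concordance = 100 * np.mean(np.sign(d_ref[keep]) == np.sign(d_test[keep]))

print(f"bias {bias:.2f} L/min, LoA {loa[0]:.2f} to {loa[1]:.2f} L/min, "
      f"percentage error {pct_err:.1f}%, concordance {concordance:.0f}%")
```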
Trending of measurements was evaluated using a four-quadrant plot (Fig. 4). Forty-five paired measurements showed a clinically relevant change, which was defined as larger than 0.5 L min−1. The concordance rate was 64.4%. Discussion In this prospective observational study, the agreement and trending ability of CO AP were compared with CO CCUS in critically ill patients with circulatory shock. CO AP showed a low bias of 0.2 L min−1 but a large percentage error of 65.6% when compared with CO CCUS, indicating disagreement [20]. Trending ability was poor, with a concordance rate of 64.4%. The new FloTrac™ algorithm should not be used for diagnosis or guidance of treatment in critically ill patients with circulatory shock. Interpretation There are as yet no data on the reliability of CO measurements with the fourth-generation FloTrac™ software algorithm in critically ill patients with shock. The main concern with the previous version(s) of the APCO algorithm was the lack of reliability in tracking CO changes after hemodynamic interventions or in patients with sepsis [7,22]. The low bias and the high percentage error of CO measurements are in accordance with results from another study, which tested the fourth-generation algorithm for tracking CO measurements after administration of phenylephrine to increase vasomotor tone in patients prior to cardiac surgery (bias −0.7 L min−1; percentage error 55.4%) [23]. The concordance rate for trending ability was 87%, which was higher than in our study. In that study, the chosen reference technique for measuring CO was thermodilution. In a more recent study in patients undergoing cardiac surgery, the new FloTrac™ algorithm also showed lack of agreement and trending ability (bias −0.4 L min−1; percentage error 37.1%; concordance rate 76%) [24]. The reference technique was also thermodilution. [Clinical characteristics on study inclusion, mean (SD): heart rate 95 (26) bpm; systolic arterial pressure 102 (15) mmHg; diastolic arterial pressure 55 (6) mmHg; mean arterial pressure 69 (7) mmHg; norepinephrine therapy, n (%): 16.] Another study tested the fourth-generation FloTrac™ algorithm in patients undergoing abdominal aortic aneurysm surgery and also found a low bias and high percentage error (bias 0.4 L min−1; percentage error 46.7%) of CO measurements [25]. The concordance rate for trending ability was 26.9% before and after aortic clamping and 47.3% before and after first unclamping of the iliac artery. The reference technique chosen in this study was transoesophageal echocardiography. Advanced hemodynamic monitoring techniques are currently used to identify the type of shock, to guide choices of interventions and to evaluate the response to therapy. Less invasive hemodynamic monitoring techniques such as APCO are currently not recommended for use in patients with shock, especially when receiving vasopressors [2,26]. Our findings support this statement. Implications and generalizability Even though CO monitoring is considered a cornerstone in diagnosing and managing circulatory shock, the sequential evaluation of the hemodynamic state during shock is only a level 1 recommendation based on low quality of evidence [2]. The abovementioned studies validating the new fourth-generation FloTrac™ algorithm were performed in different target populations and used different reference techniques, which limits comparability.
There is a concern about the interchangeability of CO CCUS and CO measurements by thermodilution, and tracking ability of the two methods has only been scarcely assessed and needs evaluation by larger studies [27]. Considerations and limitations There are several considerations and limitations when interpreting the results of our study. First, since only parallel and no serial CO measurements were performed for each time point, the precision of individual measurements could not be assessed. While only few studies determined the precision of the CCUS and FloTrac™ technologies, it is a given that both methods have some degree of variation which influences precision of agreement [28]. This might influence-and possibly overestimate-the observed bias and precision to an unknown extent, since the precision of the CCUS as reference method was not incorporated. Second, a stepwise approach and checklist for the complete presentation of CO method comparison research have been published [10]. This checklist includes a design study phase where it is encouraged that criteria for acceptable bias and LOA or percentage error are defined, and a sample size calculation should be performed prior to the conduct of method comparison studies. In our study, we defined clinically acceptable limits based on available literature, but we did not specify a sample size in advance. The current study could serve as a pilot for a further validation study. Third, during the study period, we included only 17 patients. Patients with circulatory shock were eligible only if they were expected to stay for longer than 48 h and if it was possible to perform CCUS. We chose this definition to ensure that a complete picture of shock treatment could be presented which allowed for the best comparison between the two methods. Last, CCUS was used as a reference technique in our study despite pulmonary or transpulmonary thermodilution being the gold standard for CO method comparison studies [10]. Therefore, we cannot prove direct superiority of either method. In order to do this, a comparison with a thermodilution method will have to be performed. We chose CCUS as reference because it is currently the first-line evaluation modality in patients with circulatory shock and also because it is widely available and used in the ICU for diagnostic purposes [2,29]. However, images required to make CO CCUS measurements are unobtainable in up to 20% of patients [30]. FloTrac™ measurements of CO are still not recommended in critically ill patients [5,6], and further clinical studies comparing minimally invasive techniques for CO estimation with a reference technique are needed for further validation of these techniques and also for extending applicability to other types of patients, who were initially not the target population. Conclusions In critically ill patients with circulatory shock, there was disagreement and clinically unacceptable trending ability between values of cardiac output obtained by uncalibrated arterial pressure waveform analysis and critical care ultrasonography. Additional files Additional file 1: STROBE Statement-Checklist of items that should be included in reports of cohort studies. Checklist of items that should be included in reports of cohort studies according to the STROBE statement. (DOCX 42 kb) Additional file 2: Table S1. Detailed patient characteristics.
2019-04-13T19:01:47.567Z
2019-04-11T00:00:00.000
{ "year": 2019, "sha1": "461d3902a83c73b68dc095e0a7d596f2fc3559e5", "oa_license": "CCBY", "oa_url": "https://jintensivecare.biomedcentral.com/track/pdf/10.1186/s40560-019-0373-5", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "39c8b2d29a39082247e32263822d712eb09a82b2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
260274492
pes2o/s2orc
v3-fos-license
Sedentary Behavior and Lack of Physical Activity among Children in Indonesia Sedentary behavior and lack of physical activity among children in Indonesia is an important issue that needs to be addressed. It is estimated that 57% of children in Indonesia have insufficient physical activity. Studies have shown that children who engage in sedentary behaviors are at an increased risk for various negative health outcomes, including obesity, type 2 diabetes, cardiovascular disease, and poor mental health, compared to physically active ones. This article aims to provide recommendations to increase physical activity and reduce passive behavior in children in Indonesia. This is a commentary article developed from observing the recent progress of sedentary behavior and lack of physical activity among children in Indonesia and the potential consequences. The level of inactive behavior in children in Indonesia is relatively high. Factors that contribute to sedentary behavior and lack of physical activity among children in Indonesia are the increasing use of electronic devices and screen time, the lack of safe and accessible places to be physically active, the COVID-19 pandemic, as well as cultural and social norms that prioritize academic achievement over physical activity. To address sedentary lifestyles among children, there is a need for a comprehensive approach that addresses both the individual and societal factors contributing to the problem. This might include increasing access to healthy food options, promoting physical activity, and implementing education programs to raise awareness about the importance of healthy eating and physical activity, as well as limiting screen time. Introduction Childhood is crucial for developing physical, cognitive, and social skills. However, sedentary behavior and lack of physical activity among children in Indonesia is an important issue that needs to be addressed. Children's lifestyles in Indonesia are becoming increasingly sedentary, with a growing trend of spending long hours sitting in front of screens and engaging in minimal physical activity [1,2]. As defined by the World Health Organization (2020), sedentary behavior is defined as any waking behavior characterized by an energy expenditure of 1.5 METS (metabolic equivalents; multiples of basal metabolic rates) or lower while sitting, reclining, or lying, while physical activity is defined as any bodily movement produced by skeletal muscles that requires energy expenditure [3]. The increase in energy expenditure and resulting challenge to pulmonary-cardiovascular systems have several beneficial effects on physical health and remarkably improved cardiovascular functioning. It is estimated that 57% of children in Indonesia have insufficient physical activity [4]. The consequences of sedentary behavior and lack of physical activity among children in Indonesia are far-reaching and severe as this is a concern and could lead to long-term health problems for these children, impacting their academic performance and life satisfaction [5,6]. It is well known that regular physical activity is essential for children's physical and mental health, and a lack of physical activity can lead to a range of adverse outcomes such as obesity, diabetes, and cardiovascular disease [7][8][9][10][11][12]. 
In addition to the health risks mentioned above, inactive children are more likely to experience poor academic performance, as physical activity improves cognitive function and concentration [13]. Furthermore, lacking physical activity can lead to social and emotional problems, such as low self-esteem and depression [14]. One of the significant causes of sedentary behavior and lack of physical activity among children in Indonesia is the increasing reliance on technology and screens [8]. With the proliferation of smartphones, tablets, and other electronic devices, children spend more time sitting in front of screens than engaging in physical activities [15]. This trend is compounded by the fact that many schools in Indonesia lack adequate facilities or programs for physical education, making it difficult for children to get the exercise they need [1]. The rationale for examining the scope of sedentary behavior and physical inactivity using nationwide data collected before the pandemic is that these behaviors have probably increased further during the pandemic, which may lead to severe consequences. In this article, we will explore the extent of sedentary behavior and physical inactivity among children in Indonesia and the potential implications of this trend. We will also discuss possible strategies for promoting physical activity among children in Indonesia. Methods This is a commentary article developed from observing the recent progress of sedentary behavior and lack of physical activity among children in Indonesia and the potential consequences. This article systematically reviews the impact of sedentary lifestyles, such as obesity, diabetes mellitus, cardiovascular diseases, and mental health problems, then continues with the determinants of sedentary lifestyles (the increased use of electronic devices and screen time, changes in dietary habits, lack of safe and accessible places for physical activity, cultural and social norms that prioritize academic achievement over physical activity, and the COVID-19 pandemic). Finally, this article discusses potential strategies for promoting physical activity among children in Indonesia. Results and Discussion Sedentary behavior contributes to physical inactivity, defined as insufficient physical activity to maintain good health. Sedentary behavior intensifies physical inactivity when one spends more time sitting than moving. Over time, physical inactivity may lead to severe, life-threatening consequences [3]. Sedentary behavior among children increases the risk of various health consequences, including obesity, cardiovascular alterations, reduced bone density and mental health problems [12,16-19]. An imbalance of energy intake and expenditure will lead to body fat accumulation, resulting in obesity. Obese children may develop psychological problems, such as depression and anxiety, as they tend to become the object of bullying [20]. Sedentary behavior may also cause alterations in the cardiovascular system by increasing blood pressure, cholesterol levels and the risk of heart disease [21]. Children who maintain sedentary behavior have an increased risk of type 2 diabetes and lower bone density later in adulthood [22,23].
Peltzer & Pengpid (2016) examined the relationship between physical inactivity frequency and sedentary behaviour among school children in the Association of Southeast Asian Nations (ASEAN) region towards 30,284 school children aged 13-15 years from seven ASEAN countries (Cambodia, Indonesia, Malaysia, Myanmar, Philippines, Thailand, and Vietnam) that participated in the Global School-based Student Health Survey (GSHS) between 2007 and 2013. The study found that the prevalence of physical inactivity was 80.4%, ranging from 74.8% in Myanmar to 90.7% in Cambodia and sedentary behaviour 33.0%, ranging from 10.5% in Cambodia and Myanmar to 42.7% in Malaysia. In multivariate analysis, factors such as not walking or biking to school, not attending physical education classes, inadequate vegetable consumption and lack of peer and parental or guardian support were associated with physical inactivity. It is also found that older age (14 and 15 years old), coming from an upper middle-income country, being overweight or obese, attending physical education classes, alcohol use, loneliness, peer support and lack of parental or guardian supervision were associated with sedentary behaviour [24]. The prevalence of insufficient physical activity for boys and girls in Indonesia slightly increased from 86.1% (2001) into 86.4% (2016) [25].This is in line with the global estimates that 81% boys and girls do not meet the WHO global recommendations on physical activity [26]. A scoping review conducted by Andriyani et al. (2020) of 166 studies worldwide found that the prevalence of sedentary behavior ≥ 3 h per day ranges between 24.5% to 33.8% [27]. Nationwide data related to Indonesian children's physical activity available in the public domain is derived from Indonesia Basic Health Research 2007, 2013 and 2018. The relevant age group covered is 10 to 14 years. There is a slight decrease in the percentage of children's lack of physical activity from 66.9% in 2007 to 64.4% in 2018. These numbers denote that more than half of Indonesian children were physically inactive, overlooking that regular physical activity helps maintain body weight and strengthen the cardiovascular system [28]. The 2013 data report the physical activity status provincially and not based on age group. However, only this report provides sedentary activity data. Among children aged 10 to 14, 28.2% did the passive activity for less than 3 h, 42.7% had sedentary activity for 3-5.9 h, and 29.1% were engaged in it for more than 6 h. Several regional studies reported the proportion of children engaged in sedentary behavior at Denpasar, Bali at 44.0% [29] and 27.1% in Gombara, Makassar [30]. A secondary data analysis derived from SEANUTS study (The South East Asian Nutrition Survey) reported 57.3% of Indonesian children aged 6-12 years engaged in TV/computer/Play Station watching more than 2 h a day [31]. It is well known that physical inactivity has a detrimental effect on health, accounting for 6% of global mortality [32]. Being physically inactive could lead to obesity. The accumulation of excess body fat characterizes obesity and can be conceptualized as the physical manifestation of chronic energy excess [33]. Obesity has become a public health problem in Indonesia, as seen by the increased prevalence of adult obesity from 10.5% in 2007 to 21.8% in 2018 [4]. Childhood obesity has become a growing concern in Indonesia over the past decade as the high level of inactive behavior and lack of physical activity among children in Indonesia. 
The proportion of obesity prevalence among children aged 5-12 years slightly increased from 8.8% in 2007 to 9.2% in 2018, whereas central obese prevalence among >18-years population increased drastically from 18.8% in 2007 to 31.0% in 2018. The reasons for the increase in childhood obesity in developing countries, including Indonesia, are complex and multifaceted [11]. One major factor is the shift towards a more sedentary lifestyle, as children spend more time in front of screens and less time engaging in physical activity. Furthermore, there has been a change in dietary habits, with greater consumption of processed foods high in sugar and fat. Another contributing factor is the lack of access to healthy food options, particularly in rural and low-income areas. Many children in these areas rely on cheap, high-calorie foods, such as instant noodles and fried snacks, as their primary source of nutrition. Last but not least, the COVID-19 pandemic has led to increased sedentary behavior in children due to restrictions on physical activity. Obesity is one of the most significant health risks associated with inactive behavior. A study published in the American Journal of Epidemiology showed that the prevalence of obesity increased with sedentary behavior, particularly among women [34]. The study suggested that reducing sedentary behavior and increasing physical activity could help to prevent obesity. A study conducted using the 2013 Indonesia Basic Health Research found that passive activity was correlated with overweight and obesity among those who lived in urban and rural areas [35]. Similarly, a lack of physical activity can lead to an increased risk of diabetes. According to a study published in Diabetes Care, sedentary behavior and physical inactivity are asso-ciated with a higher risk of type 2 diabetes [36]. The study recommended regular exercise to prevent and manage diabetes. Passive behavior is also linked to cardiovascular diseases. A European Journal of Epidemiology's meta-analysis found that sedentary behavior was associated with an increased risk of cardiovascular diseases, including coronary heart disease, stroke, and heart failure [37]. The study suggested that reducing sedentary behavior and increasing physical activity can help to prevent cardiovascular diseases. In addition to physical health, passive behavior can also affect mental health. A review published in the Journal of Physical Activity and Health found that sedentary behavior was associated with a higher risk of depression, anxiety, and stress [18,38]. Determinants of Sedentary Lifestyles There are several determinants of sedentary lifestyles, including the increased use of electronic devices, spending more time in front of screens and less time engaging in physical activity, change in dietary habits with greater consumption of processed foods, high-calorie foods, and foods high in sugar and fat, lack of safe and accessible places for physical activity, cultural and social norms that prioritize academic achievement over physical activity and COVID-19 Pandemic. The Increased Use of Electronic Devices and Screen Time The increased use of electronic devices such as smartphones, computers, and televisions has been linked to sedentary behavior, which can adversely affect health. Sedentary behavior refers to sitting or lying down for extended periods while engaging in activities such as watching TV, using the computer or playing video games. 
According to a study published in the Journal of the American Medical Association (JAMA), sedentary behavior has significantly increased among children and adults due to the increased use of electronic devices. The study found that from 2001 to 2016, the amount of time people spent sitting increased by an average of 1.5 to 2 h per day, with the majority of this increase being attributed to electronic devices [39]. Another study published in the journal Obesity Reviews found that sedentary behavior and the use of electronic devices were positively associated with obesity and overweight in children and adults. The study found that individuals who spent more time using electronic devices were likelier to have a higher body mass index (BMI) and increased risk the of developing obesity [40]. A study conducted by Tanjung et al. (2017) in Yogyakarta, Indonesia found that preschool children with high intensity use of gadgets are 1.3 times more likely to be obese [41]. This is inline with the findings from Uttari & Siddhiarta (2017) in Denpasar, Bali, Indonesia, revealing that there was a statistically significant relationship between the level of engagement in the screen time activities and the obesity in children with an odd ratio of 3.3 [42]. Another study conducted by Syahrul et al. (2016) in Indonesia also found that playing outdoors on weekends for less than 1 h were significantly associated with overweight children [43]. Furthermore, sedentary behavior and the use of electronic devices can also have negative impacts on mental health. A study published in the Journal of Adolescence found that excessive use of electronic devices, especially social media, was associated with poorer sleep quality and increased symptoms of depression and anxiety in adolescents [44]. The Council on Communications and Media of the American Academy of Pediatrics advises parents to limit children's screen time to less than 2 h per day, to discourage screen media exposure, and to avoid placing televisions and internet-connected devices in children's bedrooms [16]. Spending more time in front of screens and engaging less in physical activity are a growing problem in our modern society. With the rise of technology, people spend more time sitting in front of screens, whether for work, entertainment, or social media. Unfortunately, this sedentary lifestyle has serious consequences for physical and mental health [45,46]. Studies have shown that increased screen time is associated with decreased physical activity levels. It was found that adolescents who spent more time watching television or playing video games engaged in less physical activity than those who spent less time in front of screens [47][48][49]. This trend is not limited to adolescents, as adults who spend more time in front of screens also tend to have lower physical activity levels [12]. Other findings from 2527 children and adolescents (6-19 years old) from 2003/2004 and 2005/2006 National Health and Nutrition Examination Surveys (NHANES) found that high TV use was a predictor of high cardio-metabolic risk score (CRS) after the adjustment for MVPA and other confounders. Children and adolescents who watched TV ≥ 4 h per day were 2.53 times more likely to have high CRS than those who watched < 1 h per day. Children who engage more in screen time activities may reduce physical activity and thus have a higher risk of obesity. 
A study in Kupang City, Indonesia found that screen-based activity for more than 2 h per day is particularly associated with an increased risk of obesity [50]. This is similar to the findings of research conducted in Yogyakarta, Indonesia, where screen time of more than 2 h per day was associated with children being 2.6 times more likely to be obese [51]. Higher screen time was also significantly associated with a higher level of energy intake. There is also a relationship between the duration of gadget use and personal-social skills in preschool-aged children. A review conducted by Oktafia et al. (2022) found that gadget usage among preschool children increased from 38% in 2011 to 80% in 2015, and 13-18% of them experienced developmental issues [52]. Moreover, spending too much time in front of screens can adversely affect mental health. A study published in JAMA Pediatrics found that adolescents who spent more time online had a higher risk of developing symptoms of depression and anxiety. This suggests that screen time may contribute to mental health issues, particularly in vulnerable populations such as adolescents [53]. To mitigate the adverse effects of screen time, it is essential to prioritize physical activity and limit screen time. The World Health Organization recommends engaging in at least 150 min of moderate-intensity physical activity per week and limiting sedentary behavior [3]. This can be achieved through regular activity and minor changes, such as taking breaks from screens and engaging in more active pursuits, such as walking or cycling. A Change in Dietary Habits Indeed, a significant relationship exists between sedentary behavior and a change in dietary habits towards greater consumption of processed foods, high-calorie foods, and foods high in sugar and fat. Research has shown that individuals who lead a sedentary lifestyle and consume a diet high in processed and high-calorie foods are at greater risk of obesity, type 2 diabetes, and cardiovascular disease. A study published in the American Journal of Preventive Medicine found that sedentary behavior was positively associated with poor dietary habits in both men and women. The study found that sedentary behavior was linked to higher consumption of snacks, fast foods, and sugar-sweetened beverages, and a lower intake of fruits and vegetables [54]. Individuals who spent more time sedentary were more likely to consume a diet high in processed foods and sugar-sweetened beverages [55]. Other studies also found that those who spent more time inactive were less likely to consume a healthy diet, including fruits, vegetables, and whole grains [56,57]. A finding in 5 Southeast Asian countries (India, Indonesia, Myanmar, Sri Lanka and Thailand) showed that 76.3% of 13- to 15-year-olds had insufficient fruit and vegetable consumption (less than five servings per day); 28% reported consuming fruits and 13.8% consuming vegetables less than once per day. Multivariate analysis found that sedentary behaviour and being overweight were protective against inadequate fruit and vegetable consumption [58]. A study conducted by Wulandari et al. in 2015 found a correlation of energy intake and physical activity with overnutrition. Overweight and obesity can occur at any stage in life, including during the primary school years. The prevalence of overnutrition in schoolchildren rose by about 10.85% from 2007 to 2013, affecting both urban and rural areas.
One of the key factors is the imbalance between energy intake and physical activity [59]. Therefore, promoting physical activity and healthy dietary habits should go hand in hand to reduce the risk of chronic diseases associated with a sedentary lifestyle and an unhealthy diet. Lack of Place for Physical Activity Sedentary lifestyles and a lack of safe and accessible places for physical activity are often interconnected. A sedentary lifestyle may result from limited access to safe and accessible areas for physical activity; conversely, the lack of physical activity opportunities may contribute to sedentary behaviors. Studies have shown that individuals with limited access to safe and accessible places for physical activity are less likely to engage in physical activity [60]. Rapid economic development has reduced open spaces and facilities for physical activity or sports, as well as parental permission for such activities. A qualitative study conducted by Roshita et al. (2021) in Indonesia found that girls complained about difficulties obtaining permission from their parents to engage in outdoor activities, because the parents were worried about their daughters' safety and imposed stricter rules to ensure that they returned home before dark, thus limiting their physical activity after school [61]. A secondary analysis of the Global School-Based Health Survey in Indonesia conducted by Yusuf et al. (2021) found that the prevalence of active transportation by walking or bicycling to and from school among children decreased from 47.2% (2007) to 32.3% (2015) [62]. Peer support among boys in 2015 was positively associated with low active transportation, meaning that peers' ownership of private vehicles influenced them to travel to and from school either in their own vehicle or by sharing a vehicle. A study conducted by Has et al. with 130 pairs of school-aged children and their mothers/fathers found a significant correlation of children's activity level and access to safe housing and a playground with a sedentary lifestyle [63]. Therefore, improving access to safe and accessible places for physical activity may help to reduce health disparities and promote a healthier lifestyle. Initiatives such as community gardens, walking trails, and bike lanes can help to promote physical activity and improve access to safe and accessible places for physical activity [64]. Social Norms Cultural and social norms prioritizing academic achievement over physical activity may contribute to sedentary lifestyles. Studies have shown that individuals prioritizing academic success over physical activity are less likely to engage in physical activity [65,66]. The impact of sedentary lifestyles and of cultural and social norms prioritizing academic achievement over physical activity on public health is significant. Such norms may exacerbate the risk of non-communicable diseases by limiting opportunities for physical activity, and they may disproportionately affect specific populations, including children and adolescents. Academic achievement is highly valued in many cultures, and physical activity may be seen as less important. This can lead to a lack of opportunities for physical activity and to a sedentary lifestyle among children and adolescents [65]. Yusuf et al.
(2021) found that the prevalence of physical activity among girls in Indonesia declined from 22.9% (2007) to 15.4% (2015), and among boys from 26% (2007) to 17.6% (2015). This is possibly related to the time spent in school, which rose from 7 h in 2007 to 8-10 h in 2015; thus, the time for physical activity was reduced because more time was spent on academic matters [62]. Improving access to physical activity is essential to promote a healthy and active lifestyle and reduce the negative impact of sedentary behaviors. Initiatives such as after-school sports, community sports leagues, and school-based physical education programs can promote physical activity and provide opportunities for children and adolescents to engage in physical activity [66]. The COVID-19 Pandemic The increasing number of COVID-19 cases required affected countries to manage the transmission of COVID-19. The Government of Indonesia issued policies to accelerate the management of COVID-19. Community activities were limited by enacting the Large-Scale Social Restrictions policy, an adaptation of lockdown in which people were required to stay in their homes except for essential or urgent activities. COVID-19 cases have now declined compared to the first year of the pandemic. Although new cases are still being found, there has yet to be an apparent shift from the pandemic to the endemic stage, and this requires countries to develop resilient communities [67]. The COVID-19 pandemic has led to increased sedentary behavior in children due to restrictions on physical activity, including school closures and limited opportunities for outdoor recreation [68]. A systematic review of physical activity in school children during the pandemic conducted by Ramadan (2022) found that 60-70% of school students did not meet the recommendations for physical activity [69]. A scoping review exploring the impact of COVID-19 on the movement behavior of children and youth conducted by Paterson et al. (2021) found that 150 studies consistently reported declines in physical activity and increases in screen time and total sedentary behavior [70]. The pandemic is related to changes in the quantity and nature of physical activity and sedentary behavior among children and youth. A study in Yogyakarta, Indonesia, conducted by Andriyani et al. (2021), found that during the pandemic, mothers perceived their children to be less active and to use more screen-based devices, either for educational or recreational purposes, compared to before [71]. During the pandemic, children with higher levels of sedentary behavior had higher anxiety and depressive symptoms [72]. These findings are also supported by Pfefferbaum & Van Horn (2022), who stated, based on previous research, that decreased physical activity in the context of the COVID-19 pandemic and home confinement was associated with various psychological outcomes, including perceived stress, psychological distress, depression, anxiety, hyperactivity-inattention, and prosocial behavior problems [18]. Promoting physical activity is essential to reduce the negative impact of sedentary behavior in children during the COVID-19 pandemic. Strategies to promote physical activity may include home-based physical activity programs, online physical education classes, and outdoor activities that adhere to social distancing guidelines [68]. In addition, encouraging parents to promote physical activity in their children may also be beneficial.
Conclusions and Recommendation To address sedentary lifestyles among children, there is a need for a comprehensive approach that addresses both the individual and societal factors contributing to the problem. This might include increasing access to healthy food options, promoting physical activity, and implementing education programs to raise awareness about the importance of healthy eating and exercise. Policies to restrict the marketing and sale of unhealthy foods to children could also be implemented. The high level of physically inactive behavior among children in Indonesia is a cause for concern, and interventions such as increasing access to physical activity facilities and programs, incorporating physical activity into the school curriculum, and addressing social and cultural barriers to physical activity can help to solve this issue. For example, providing safe and accessible places for children to be physically active, such as parks and playgrounds, community gardens, walking trails, and bike lanes, can encourage them to engage in regular physical activity. Schools should encourage regular physical education classes and promote active transportation to school, such as walking or cycling. Additionally, incorporating physical education classes and sports programs into the school curriculum can promote physical activity among children. To mitigate the adverse effects of screen time, it is crucial to prioritize physical activity and limit screen time by taking breaks from screens and engaging in more active pursuits, such as walking or cycling. Parents could ask their children to do domestic tasks too to help their children away from the gadgets. Furthermore, addressing social and cultural barriers to physical activity can also be important. For example, changing cultural and social norms prioritizing academic achievement over physical activity can help create a more supportive environment for children to be physically active. This can include promoting the importance of physical activity for overall health and well-being and encouraging a balance between academic success and physical activity. Furthermore, involving families and communities in promoting physical activity can create a supportive environment for children to be physically active. It is also recommended to conduct further studies to explore the impact of sedentary behavior and physical inactivity among Indonesian children to sharpen preventive action.
2023-07-29T15:15:50.789Z
2023-07-26T00:00:00.000
{ "year": 2023, "sha1": "acb19a9a834247e0ecbbebc75a7bb6dbba31707c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9067/10/8/1283/pdf?version=1690357253", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fdfdbe3fc64895d518c67bea41d22e3ebb516d61", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
14553739
pes2o/s2orc
v3-fos-license
100 ps time-of-flight resolution of Dielectric Resistive Plate Chamber Time of flight of a minimum ionizing particle along a fixed base has been measured with a 100 ps accuracy by means of a Dielectric Resistive Plate Chamber (DRPC) with 4 x 0.3 mm gas gaps. DRPC timing characteristics have been studied with different applied voltages, discriminating thresholds and beam intensities. It may be stated that the time-of-flight resolution of gaseous detectors developed within the ALICE experiment has reached the level of the best known scintillation counters. During the last several years, revolutionary progress in breakdown suppression inside gaseous time-of-flight detectors was achieved by introducing, in different ways, a resistivity inside the gas gap [1,2,3]. Despite this fact, there was only a vague idea of the timing properties that these detectors could in principle achieve. A value of 100 ps seems to be a natural limit in this sense. Based on the ALICE physical conditions, a 100 ps time-of-flight resolution is sufficient for π/K/p separation over a realistic momentum range. Until recently, such fine resolution could be provided only by modern scintillation counters and Pestov spark counters [4]. As an example, the timing system based on scintillators and photomultipliers, proposed for the STAR project at RHIC, provides a time resolution of about 90 ps [5]. The detector described in the given paper is schematically presented in Fig. 1. The Dielectric Resistive Plate Chamber (DRPC) consists of several ceramic plates (0.5 mm of ordinary unpolished ceramics) which form four gas gaps, each 0.3 mm wide. In accordance with expectations, decreasing the gap width has led to an improvement in the time resolution. The number of gaps (two in the previous version [2]) was doubled to keep the MIP registration efficiency close to 100%. The chamber consists of two types of electrodes. Ceramic cathodes are metallized with aluminum. Dielectric-resistive electrodes are also made of ceramics, metallized with aluminum on one side and covered, through evaporation, with semi-conducting SiC on the other side. The plates are assembled in pairs so that the metal layer, common for two gaps, is positioned inside, and the semiconducting layers are turned towards the gaps. The idea of electrical connection between the semi-conductor and the metal is described in Ref. [2]. The detector has a square working surface of 2 × 2 cm². The methods employed in the measurements of TOF resolution and registration efficiency at the ITEP and CERN accelerators were, in general, similar to those described in Ref. [2]. The same front-end electronics and gas mixture consisting of 85% C2H2F4 + 5% isobutane + 10% SF6 were used. The start part of the setup, based on scintillation counters, was modified so that the information from several detectors could be analyzed simultaneously. The counting rate, i.e. the efficiently registered particle flux over the working surface, is an important parameter of the DRPC. During the measurements the rate was fixed at the level of 1 kHz·cm⁻², which is higher than that predicted under the ALICE conditions (100-200 Hz·cm⁻²). Special measurements of the way the time resolution depends on the rate were performed as well. A typical time-of-flight distribution, summarised over the whole range of amplitudes, with the total registration efficiency close to 95%, is shown in Fig. 2a. One can see that the standard deviation is really at the level of 100 ps, and the distribution has insignificant tails.
Fig. 2b represents the same data in more detail: the time resolution is shown for different amplitudes. The dependence is very slight; the resolution stays close to 100 ps over the whole range of amplitudes, which explains the absence of tails in Fig. 2a. Actually, the timing resolution is calculated after a slewing correction, which takes into account the fact that signals with larger amplitudes are triggered by the constant-threshold discriminator at earlier times. Such a correction influences the timing distribution in a strong way. It is normally performed with a polynomial function in the way shown in Fig. 3a. The amplitude spectrum obtained from the amplifier output is shown in Fig. 3b. More specifically, it shows the charge integrated by a charge-sensitive ADC. Although the front-end electronics is not linear over the whole range of amplitudes, it may be seen that the amplitude spectrum has a peak that stays far from the pedestal boundary, on a slowly changing background. The amplitude magnitudes correspond to a gas amplification of 10⁷ and allow excellent registration efficiency to be obtained at different high voltages and electronics thresholds. The dependence of efficiency on the high voltage is shown in Fig. 4 for different electronics thresholds. Even at the threshold of 100 mV there is a clear plateau, in which the efficiency stays close to 100%. The fact that the TOF resolution does not depend on the high voltage and the discriminating threshold is illustrated in Fig. 5. All the results described above were obtained at a fixed counting rate. A special experiment was performed to study the influence of the rate on the detector properties. Fig. 6 shows the dependence of the TOF resolution on the particle flux at a 40 mV electronics threshold. The resolution degrades with increasing rate, up to approximately 150 ps. But under the ALICE conditions (the very beginning of the scale) it may be expected to be as low as 80 ps. Fig. 7 shows the low-rate resolution in detail. The distribution is still very clear, with the tail admixture being less than 10⁻³. The study of DRPC timing properties was performed with support from RFFI grant #99-02-18377.
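To illustrate the amplitude-dependent slewing (time-walk) correction described for Fig. 3a, the following is a minimal Python sketch using synthetic data; the 1/√q walk law, the charge range, and the 100 ps intrinsic jitter are illustrative assumptions, not values taken from the measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hits: charge q from the ADC (arbitrary units) and a raw discriminator
# time whose mean depends on q, because large signals cross the fixed threshold
# earlier ("time walk"). Walk law and jitter are illustrative assumptions.
q = rng.uniform(50.0, 400.0, 5000)
walk_ns = 3.0 / np.sqrt(q)
t_raw = walk_ns + rng.normal(0.0, 0.100, q.size)   # times in ns, 100 ps jitter

# Slewing correction: fit the mean time versus charge with a low-order
# polynomial (as described for Fig. 3a) and subtract the fitted trend.
coeff = np.polyfit(q, t_raw, deg=4)
t_corr = t_raw - np.polyval(coeff, q)

print(f"sigma before correction: {t_raw.std() * 1e3:.0f} ps")
print(f"sigma after  correction: {t_corr.std() * 1e3:.0f} ps")
```

After subtraction of the fitted trend, the residual spread approaches the intrinsic jitter, which is the effect the correction is meant to achieve on the real data.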
2014-10-01T00:00:00.000Z
1999-06-01T00:00:00.000
{ "year": 2005, "sha1": "3dff3a21eb6d467d644f42e38960754d47b134ed", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3dff3a21eb6d467d644f42e38960754d47b134ed", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
253485680
pes2o/s2orc
v3-fos-license
Novel Voltage-Mode PID Controller Using a Single CCTA and All Grounded Passive Components : A compact voltage-mode proportional-integral-derivative (PID) controller based on the utilization of a single current conveyor transconductance amplifier (CCTA) is presented in this paper. The presented active PID controller is made up of a single CCTA, and four truly grounded passive components, i.e. two resistors, and two capacitors. The design consideration of the controller parameters has been examined. Besides, the crucial sensitivity performances of the controller parameters for ideal and non-ideal conditions have also been discussed. An application on the closed-loop test system is demonstrated to validate the practicability of the proposed PID controller circuit. To confirm the theoretical behavior, the proposed circuit is simulated with the PSPICE program using TSMC 0.35-μm CMOS process technology. Experimental test results based on commercially available CFOA AD844 and OTA CA3080 integrated circuits are also provided to demonstrate the practicality of the proposed circuit. Introduction Proportional-Integral-Derivative (PID) controllers are the most commonly employed control actions in feedback control systems, and process industries [1].They have been extensively utilized for several decades since they feature a variety of desired properties, including design simplicity, low cost, robustness, and broad application, as well as easy parameter tuning [2].Their wide range of applications have stimulated and sustained the design and invention of various PID controller circuits and sophisticated hardware modules.Over the last two decades, the enormous literature on PID process controllers has featured a wide range of design techniques based on numerous active compo-nents, such as operational transconductance amplifiers (OTAs) [3], second generation current conveyors (CCIIs) [4]- [5], current feedback operational amplifiers (CFOAs) [6]- [9], current differencing buffered amplifiers (CDBAs) [10], differential voltage current conveyor transconductance amplifiers (DVCCTAs) [11], current follower transconductance amplifiers (CFTAs) [12]-13], voltage differencing transconductance amplifiers (VDTAs) [14], voltage differencing current conveyors (VDCCs) [15], and second generation voltage conveyors (VCIIs) [16].In [3]- [4], [6], [9]- [10], the PID controllers designed with the signal-flow graph approach have been proposed.However, for the realizations in [3]- [4], [9]- [10], [16], at least four active components were required.Furthermore, PID controller realizations of [4]- [6], [9]- [11], [13]- [15] include a number of passive components, i.e., at least five passive components, some of which are also floating.Floating passive components were used to design single active element-based PID controller circuits in [7]- [8], [12]- [13], [15]- [16].Active circuit structures with all grounded passive elements are well recognized to be useful for fully integrated circuit (IC) design as well as IC hybrid fabrication processes.This is due to the fact that the usage of grounded passive elements is helpful for the electronic adjustability and permits the elimination/accommodation of various parasitic effects for IC implementation.Another point to note is that the performance and application of the previously discussed controllers [3]- [7], [10]- [15] were evaluated solely through computer simulations.For acceptability purposes, experimental measurements are the important method to evaluate the practicability of the 
circuit. The current conveyor transconductance amplifier (CCTA) is a contemporary versatile active circuit block that is a cascade of a CCII and an OTA [17].The CCTA has been extensively used in the design of analog signal processing circuits and solutions like as analog filters [18]- [20], sinusoidal oscillators [21]- [22], resistor-less inductance simulator [23], and high-frequency active meminductor emulator [24].Therefore, this work aims at proposing a CCTA-based voltage-mode PID controller with a canonic and lowcomponent count.It is made up of only one CCTA as an active component and all grounded passive components, such as two resistors and two capacitors.There is no element-equality criteria required for the controller realization.Non-ideal gain effects on the controller performance and sensitivity analysis are also investigated.The functionality of the proposed PID controller is evaluated in a closed-loop system.As an example plant circuit, a second-order lowpass filter was built to design the closed-loop system.To test the behavior of the proposed PID controller circuit, some computer simulations using the PSPICE software and experimen-tal measurement data using off-the-shelf integrated circuits AD844 and CA3080 are presented. Table 1 illustrates the comparative analysis of all previously mentioned PID controllers as having the following desirable properties to substantiate the proposition of the proposed controller: (i) the number of active elements, (ii) the number of passive elements, (iii) the use of all grounded passive elements, (iv) electronic tunability, (v) simulation technology, (vi) simulation supply voltages, (vii) simulated total power consumption, (viii) experimental technology, and (ix) experimental supply voltages. Circuit description 2.1 CCTA properties Basically, the CCTA is a versatile active building block designed by the cascade connection of CCII followed by an OTA [17].The schematic symbol of the CCTA is depicted in Fig. 1.The voltage drop at terminal x follows the applied voltage at terminal y in magnitude.The output currents at terminals z and zc follow the current through terminal x in magnitude.The voltage drop at terminal z is converted to an output current at terminal o with the transconductance gain (g m ).The terminal relationship of the ideal CCTA can be described by the following set of equations: In general, the transconductance g m can be modified electronically by adjusting the externally provided current I B . Proposed single CCTA-based PID controller circuit In a general PID controller, three modes of controller with proportional, integral, and derivative actions must be incorporated.As a consequence, the transfer function of a standard voltage-mode PID controller is generally defined as follows: [25] where V in (s) is the input voltage, V out (s) is the output voltage, K P is the proportional coefficient, K I is the integral coefficient, and K D is the derivative coefficient. Fig. 2 shows the proposed voltage-mode PID controller based on CCTA as an active component.The controller comprises only a single CCTA, and all grounded passive components, i.e. two resistors and two capacitors. From the aspect of an integrated circuit, it is critical to employ all grounded passive components.The circuit analysis of the proposed PID controller in Fig. 2 gives the following voltage transfer function. 
By comparing the above obtained function with the general equation of the PID controller given in equation (2), the various coefficients of the proposed controller are determined. As seen from the resulting expressions, the various gain coefficients of the PID controller can be controlled electronically. The expressions also show that the three coefficients can be determined by appropriately setting the values of g m, R 1, R 2, C 1 and C 2. The design considerations will be outlined in the following section. Because the primary contribution of this work is the design of a compact analog PID controller with a minimal number of active and passive components, independent tuning of the gain coefficients K P, K I and K D is not expected. Due to the low component count, the circuit requires a small chip area. As a result, the production cost is lowered. Practical design considerations In order to realize the desired gain parameters of the proposed PID controller indicated in equations (4)-(6), the design procedure for setting the circuit components is described as follows. From equations (5) and (6), the values of capacitors C 1 and C 2 are obtained. Substituting these relations into equation (4), we then obtain the corresponding expression for K P. According to the PID design criteria, the coefficients K I and K D are supposed to be arbitrarily determined parameters. With this choice, the gain K P modifies to the form of equation (11). The minimum value of K P derived from equation (11), and the condition at which it is obtained, follow directly: it can be realized from equation (13) that the value of K P(min) is determined by the given values of K I and K D. As a result, the useful value for the gain K P should satisfy the relationship in equation (14). Rearranging equation (11) yields equation (15). Based on the condition on the K P value specified in equation (14), equation (15) has two real roots. Therefore, the basic steps in the sequel are followed to determine the required controller parameters and satisfy the K P value given in equation (14). Non-ideal effects The non-ideal condition of the real CCTA is investigated in this section. The port relationship of the non-ideal CCTA, including tracking errors, can be modeled by a matrix equation. Taking the non-ideal CCTA into effect, the voltage transfer function of the proposed PID controller in Fig. 2 can be derived, and the non-ideal gain parameters of the PID controller can then be obtained accordingly. The resulting expressions clearly show that the non-ideal gains β, α and δ of the CCTA have a direct effect on the PID gain parameters K P, K I and K D. The absolute coefficients of the active and passive sensitivities of the PID gain parameters are found to be less than or equal to unity. Simulation results The theoretical assumptions are validated through PSPICE simulation utilizing 0.35-μm CMOS process parameters provided by TSMC. For the simulation, the CMOS implementation of the CCTA shown in Fig. 3 was used with symmetrical supply voltages of ±1.5 V. Transistor aspect ratios are provided in Table 2. The transconductance of the CCTA is given by equation (32), where K = μC ox (W/L). Here, the parameters μ, C ox, W, and L denote the electron mobility, oxide capacitance per unit gate area, effective channel width, and effective channel length, respectively. The transconductance g m is dependent on the process parameter K and the external bias current I B, according to equation (32).
The plot in Fig. 4 also illustrates the variations in g m for the CMOS CCTA in Fig. 3, which depend on the variations in I B. It has been analyzed that, when I B is varied from 40 μA to 230 μA, the percentage inaccuracy in g m is less than 9% compared to the theoretical g m value. Additionally, a temperature-dependent analysis of the frequency-domain responses of the proposed PID controller was performed at the industrial temperatures T = -50°C, 0°C, 27°C, 50°C, and 100°C. The simulation results of the industrial temperature analysis are given in Fig. 7, where it can be observed that the gain characteristic of the controller diminishes with increasing temperature. Figure 8: Single CCTA-based lowpass filter using only grounded capacitors. For the performance evaluation of the proposed PID controller circuit in Fig. 2, a voltage-mode lowpass filter using a single CCTA and four passive elements is realized as shown in Fig. 8. The voltage transfer function of the filter with R p2 = 1/g m is given below, where the pole frequency (ω p) and the quality factor (Q) follow from the component values. A negative feedback system is used to evaluate the practical ability of the proposed PID controller circuit. The block diagram of the closed-loop system is represented in Fig. 9. In the study that follows, the step response of the system is analyzed to observe the effect of the proposed controller on this system. For the lowpass filter in Fig. 8, the component values were set as follows: g m = 0.61 mA/V, R p1 = 1 kΩ, R p2 = 1.5 kΩ, and C p1 = C p2 = 1 nF, giving f p = 124.3 kHz and Q = 1.28. Fig. 10 shows the ideal and simulated transient responses of e(t) and u(t) for the closed-loop system in Fig. 9 with a ramp input voltage v in (t) rising at 3 μs/100 mV. The overall power consumption of the closed-loop system in Fig. 9 is estimated from simulations to be 108 mW. In addition to observing the step response of the system, a step input with an amplitude of 100 mV was applied, and the obtained results are displayed in Fig. 11. These results are provided to demonstrate the transient response characteristics of the system for four different values of the tuning controller gain parameters. The first set of gain parameters, in Fig. 11(a) for case 1, was K P = 1.9, K I = 0.86 Ms, and K D = 1 μs with g m = 0.86 mA/V, R 1 = R 2 = 5 kΩ, and C 1 = C 2 = 1 nF. The second set of parameters, in Fig. 11(b), was modified to K P = 6.9, K I = 1.9 Ms, and K D = 5 μs by selecting g m = 0.38 mA/V, R 1 = 1 kΩ, R 2 = 5 kΩ, and C 1 = C 2 = 1 nF for case 2. The gain parameters in case 3 of Fig. 11(c) were K P = 8, K I = 3 Ms, and K D = 5 μs with g m = 0.61 mA/V, R 1 = 1 kΩ, R 2 = 5 kΩ, and C 1 = C 2 = 1 nF. For case 4, the gains in Fig. 11(d) were K P = 13.6, K I = 8.6 Ms, and K D = 5 μs, set by g m = 0.86 mA/V, R 1 = 1 kΩ, R 2 = 5 kΩ, C 1 = 1 nF, and C 2 = 0.5 nF.
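To make the closed-loop study of Fig. 9 concrete, the sketch below combines the ideal PID law of equation (2), with the case-1 gains quoted above, and a second-order lowpass plant built from the stated f p = 124.3 kHz and Q = 1.28. The unity-DC-gain plant form and the reading of "Ms" as 10^6 s^-1 are assumptions; this is not the authors' PSPICE setup, only a minimal numerical check of the feedback structure.

```python
import numpy as np
from scipy import signal

# Case-1 controller gains quoted above ("Ms" read as 1e6 s^-1 -- an assumption)
KP, KI, KD = 1.9, 0.86e6, 1e-6

# Second-order lowpass plant from the stated fp = 124.3 kHz and Q = 1.28;
# the unity-DC-gain form is assumed here.
wp, Q = 2 * np.pi * 124.3e3, 1.28
plant_num = [wp**2]
plant_den = [1.0, wp / Q, wp**2]

# Ideal PID controller Gc(s) = KP + KI/s + KD*s = (KD s^2 + KP s + KI)/s
ctrl_num = [KD, KP, KI]
ctrl_den = [1.0, 0.0]

# Unity-feedback closed loop T(s) = Gc*H / (1 + Gc*H)
L_num = np.polymul(ctrl_num, plant_num)
L_den = np.polymul(ctrl_den, plant_den)
T_num = L_num
T_den = np.polyadd(L_den, np.pad(L_num, (len(L_den) - len(L_num), 0)))

t, y = signal.step(signal.TransferFunction(T_num, T_den), N=2000)
print(f"peak = {y.max():.2f}, final value = {y[-1]:.2f}")  # integral action drives final value to 1
```

The printed final value of 1 reflects the integral action removing the steady-state error, which is the qualitative behaviour the controlled step responses in Fig. 11 are meant to show.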
Experimental results In experimental measurements, the CCTA has been practically constructed with commercially available ICs such as AD844s [26] and CA3080 [27], as depicted in Fig. 12. In this case, the CCTA's transconductance gain (g m) is proportional to the external bias current (I B) and has the following relationship: g m = 20I B [27]. The symmetrical bias voltages of the AD844s and CA3080 were set at ±5 V. Also, for the laboratory testing circuit, the active and passive elements were chosen as g m = 0.86 mA/V (I B = 43 μA), R 1 = R 2 = 5 kΩ, and C 1 = C 2 = 1 nF (the case-1 settings), which leads to K P = 1.9, K I = 0.86 Ms, and K D = 1 μs as in case 1. A step input voltage with an amplitude of 100 mV was applied to the input of the experimental test circuit. Fig. 13 shows the measured input voltage and the associated output waveforms for the uncontrolled filter v op (t) and the controlled filter v out (t). To evaluate the impact of the proposed PID controller on the step response of the closed-loop system in Fig. 9, the measured transient responses for three different controller gains are given in Fig. 14. These experimental test results were obtained using the same component settings as in the simulation verification, namely case 2, case 3, and case 4. From Figs. 13 and 14, the performance comparison in terms of delay time (t d), rise time (t r), peak time (t p), peak output (p o), maximum overshoot (M p), and settling time (t s) was also measured and is summarized in Table 3. As observed in Table 3, the use of the proposed PID controller clearly improves the system's performance in the desired manner. On the other hand, the ideal, simulated and measured frequency-domain responses of the closed-loop system in Fig. 9 for the four mentioned cases of the controller gain values are given in Fig. 15. The experimental results validate the ideal responses within the working range of the proposed PID controller. Nevertheless, the slight deviation in these responses is mostly due to the non-ideal characteristics of the AD844s and CA3080 in Fig. 12, such as non-ideal transfer gains and parasitic impedances. Some techniques [28]-[29] that reduce non-ideal transfer gains and parasitic elements can be used to minimize the difference between the ideal and measured responses. Conclusions This paper suggests a novel voltage-mode PID controller based on the CCTA. A single CCTA, two resistors, and two capacitors are used in the suggested circuit. Because all of the passive components in this realization are grounded, it is ready for further integration. The implementation does not need element-matching requirements and can be simply accomplished using commercially available integrated circuits. The controller gain parameters K P, K I, and K D are all adjustable. An application example of the proposed PID controller in a closed-loop system is included. The analysis of the theoretically proposed circuit has been validated through simulation findings and experimental test results. Acknowledgments This work was supported by Rajamangala University of Technology Rattanakosin (RMUTR). The support by the School of Engineering, King Mongkut's Institute of Technology Ladkrabang (KMITL), is also gratefully acknowledged. Conflict of Interest The authors confirm that this article content has no conflict of interest. Figure and table captions: Figure 1: Schematic symbol for the CCTA. Figure 2: Proposed single CCTA-based PID controller using all grounded passive elements; β = 1 - ε v and α = 1 - ε i are the non-ideal gains, where ε v (|ε v| << 1) and ε i (|ε i| << 1) represent the voltage and current tracking errors from the y to the x terminal and from the x to the z and zc terminals, respectively, and δ = 1 - ε gm, where ε gm (|ε gm| << 1) is the transconductance tracking error between the z and o terminals. Figure 4: g m plot for variations in I B for the CMOS CCTA in Fig. 3. Figure 9: Feedback control system to evaluate the performance of the proposed PID controller. Figure 10: Ramp-input responses of e(t) and u(t) for the closed-loop system in Fig. 9. Table 1: Comparative features of the proposed circuit with previously reported PID controllers. Table 2: Aspect ratios of the transistors of the CMOS CCTA in Fig. 3. Table 3: Comparison of the performance of the uncontrolled filter and the controlled filter.
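As a companion to Table 3, the snippet below sketches how the listed time-domain figures of merit (t d, t r, t p, p o, M p, t s) can be extracted from a sampled step response such as the one computed in the earlier closed-loop sketch. The 10-90% rise-time and 2% settling-band conventions are assumptions, since the paper does not state which definitions it uses.

```python
import numpy as np

def step_metrics(t, y, tol=0.02):
    """Approximate td, tr, tp, po, Mp and ts from a sampled step response.
    Assumes a 10-90% rise time and a 2% settling band."""
    y_final = y[-1]
    td = t[np.argmax(y >= 0.5 * y_final)]                   # delay time (50% of final value)
    tr = t[np.argmax(y >= 0.9 * y_final)] - t[np.argmax(y >= 0.1 * y_final)]
    ip = np.argmax(y)
    tp, po = t[ip], y[ip]                                    # peak time and peak output
    Mp = 100.0 * (po - y_final) / y_final                    # maximum overshoot in %
    outside = np.abs(y - y_final) > tol * abs(y_final)
    ts = t[np.max(np.nonzero(outside))] if outside.any() else t[0]  # approximate settling time
    return {"td": td, "tr": tr, "tp": tp, "po": po, "Mp": Mp, "ts": ts}

# e.g. print(step_metrics(t, y)) with t, y taken from the closed-loop sketch above
```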
2022-11-13T16:31:24.776Z
2022-11-02T00:00:00.000
{ "year": 2022, "sha1": "0b0cc8b453f99de3c79bb109f603895cc6328179", "oa_license": "CCBY", "oa_url": "https://ojs.midem-drustvo.si/index.php/InfMIDEM/article/download/1344/364", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c66e47c640cd3dd80be751a83d5ae7f1e89d1bb2", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
214746402
pes2o/s2orc
v3-fos-license
A Modified ypTNM Staging System–Development and External Validation of a Nomogram Predicting the Overall Survival of Gastric Cancer Patients Received Neoadjuvant Chemotherapy Purpose Neoadjuvant chemotherapy is now widely used in gastric cancer patients. However, the current 8th ypTNM staging system is developed based on patients with less extensive lymph node dissection and the predictive value is relatively limited. In this study, we aim to develop and validate a nomogram that predicts overall survival in gastric cancer patients received neoadjuvant chemotherapy. Patients and Methods From January, 2007 to December, 2014, 471 patients receiving neoadjuvant chemotherapy at our center were enrolled in the study. Based on the Cox proportional hazard model, a nomogram was developed from them and then an external validation was conducted on a cohort of 239 patients from another cancer center. Results The overall survival (OS) rates of 1 year and 3 years were 90.0% and 64.1%, respectively. Body mass index category, tumor location, T stage and N stage were independent prognostic factors for the survival outcome. The C-index of the model was 0.74 in the development cohort and 0.69 in the validation cohort. Our nomogram also showed good calibration in both cohorts. Conclusion We developed and validated a nomogram to predict the 1- and 3-year OS of patients who received neoadjuvant chemotherapy and radical gastrectomy with D2 lymph node dissection. This nomogram predicts survival more accurately than the AJCC TNM staging system, which is the current golden standard. Introduction Gastric cancer is the fifth common cancer and the third leading cause of cancerrelated deaths worldwide. 1 Nowadays, surgery is the most widely used treatment for patients with localized gastric cancer. [2][3][4] However, after curative resection, the survival rate for locally advanced gastric cancer (AGC) remains to be unsatisfactory. [5][6][7] To improve patients' survival, a variety of studies have examined the treatment effect of additional chemotherapy and radiotherapy. [8][9][10] Among these, neoadjuvant chemotherapy (or perioperative chemotherapy) was first advocated by Wilke et al. 11 It is now widely accepted that neoadjuvant chemotherapy (NAC) can help improve patients' tolerance, increase curative resection rate, decrease tumor metastasis, and thus increase the survival rate. [12][13][14] As a result of its increasing popularity, there is now an increasing need for practical tools to predict individual survival after NAC. To our knowledge, the only predictive system for patients received neoadjuvant chemotherapy was the American Joint Committee on Cancer (AJCC) ypTNM staging system, which was established according to the local invasion depth, the number of positive lymph nodes, and distant metastasis. 2 However, this system was developed from patients with less extensive lymph node dissection (less than D2) and thus may not be well applied to patients underwent D2 lymphadenectomy. In our previous study, we conducted a validation of this system (patients at T0 stage excluded) 15 and demonstrated that although ypTNM staging system was effective for staging, its predictive value was limited with a relatively low C-index (0.657). In addition, patients with a T0 stage were not included in ypTNM staging system and thus cannot be evaluated. 
Furthermore, other prognostic factors related to individual survival have not been taken into consideration, such as age, body mass index (BMI), tumor size, histology, and chemotherapy regimen. Thus, new tools are needed to predict individual survival. Previously, no survival nomograms of gastric cancer patients focused on patients received neoadjuvant chemotherapy. 16,17 In this study, through evaluating data from 471 consecutive patients undergoing neoadjuvant chemotherapy, we aimed to develop a nomogram to predicts overall survival. External validation was then conducted to test the generalizability of our model on a cohort of 239 patients from a different center. Materials and Methods Patients From January 1st, 2007 to December 31st, 2014, a total of 484 gastric cancer patients at Peking University Cancer Hospital in Beijing, China were retrospectively enrolled in this study. The patients were pathologically diagnosed with gastric adenocarcinoma and received no treatment before neoadjuvant chemotherapy. All patients included in this study were proved to be locally advanced gastric cancer of clinical stage II-III by CT and diagnostic staging laparoscopy. Many of our patients were enrolled in clinical trials for neoadjuvant chemotherapy. For other patients, we would suggest both neoadjuvant chemotherapy and surgery plus adjuvant chemotherapy, a shared decision would then be made after a discussion with patients. The extent of resection for gastric cancer was total or distal gastrectomy with D2 lymphadenectomy. After surgery, all of the patients were recommended to receive adjuvant chemotherapy until perioperative chemotherapy cycles added up to eight cycles. Patients with distant metastasis were excluded from the study. Other exclusion criteria included 1) patients with gastrointestinal stromal tumors, lymphoma, neuroendocrine carcinoma, carcinoid tumor; 2) patients with remnant gastric cancer; 3) patients died within the perioperative period; 4) patients received chemotherapy for other diseases within 6 months; 5) patients whose dissected lymph node are less than 15; 6) patients received neoadjuvant radiotherapy, molecular targeted therapy, or intraperitoneal chemotherapy. Eventually, 471 out of 484 patients were selected and enrolled in our study. Previous information on demographic, treatment, and pathology were collected, including age, sex, BMI, ASA score, ECOG score, family history, chemotherapy regimen, surgery method, surgery approach, anastomosis way, blood loss, tumor location, tumor diameter, T stage, number of dissected lymph node, number of positive lymph node, histological type, differentiated type, and cancerous embolus situation. For validation, we enrolled 239 patients who met the same inclusion and exclusion criteria at Sun Yat-sen University Cancer Center (Guangzhou, China) in the validation cohort. In this cohort, data of risk factors in the final nomogram were collected. Follow Up After the surgery, patients were followed up regularly via physical examination, radiological examination, endoscopic examination, and laboratory examination. These examinations were performed every 3 to 6 months during the first 2 years, then every 6 months until the fifth year, and then once every year. Statistic Analysis To build the nomogram for survival prediction, the univariate Cox regression model was applied to each variate and those with a two-sided p-value <0.05 were then included in the multivariable model. 
A backward stepwise selection method was used for variable selection in binary Cox regression. A nomogram was then developed based on the selected variables. The performance of the nomogram was measured by its discrimination and calibration. The discrimination of the nomogram was measured by the concordance index (CI). Calibration, which compares predicted survival with actual survival, was also used to evaluate the model. We plotted the calibration curves for the actual survival against the nomogram predicted survival probabilities to assess the agreement, using 1000 bootstrap re-samples to decrease the overfitting bias. We used restricted cubic splines to fit the continuous variables to allow for nonlinearity in the relationship between these variables and survival time. We conducted all analyses using R 3.5.1 (R Foundation for Statistical Computing, Vienna, Austria). A two-sided P value <0.05 was considered statistically significant. Ethical Standards The Ethics Committee of Peking University Cancer Hospital approved this study. All procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1964 and later versions. Written informed consent was obtained from all patients prior to inclusion in the study. This study does not involve animal study. Descriptive Statistics of the Training Cohort A total of 471 patients were included in this cohort. The baseline characteristics of the participants were provided in Table 1. Overall, 360 (76.4%) patients were males. The average age was 59 (±10.1) years old, with 153 (32.5%) over 65 years old. The average preoperative chemotherapy duration was 2.79 (±1.00) cycles, and surgery was then performed. Most patients (450, 95.5%) were in good preoperatory conditions with an ECOG score of 0 or 1. After surgery, most patients (79.9%) were proved to be at pathologic stage II/III. The median follow-up duration was 38.5 (±21.7) months, with 193 patients died during the followup period (41%). Overall, the 1-year and 3-year OS rates were 90.0% and 64.1%, respectively. The pathology complete remission (pCR) rate was 6.4%. Development of the Nomogram Clinicopathological factors were further evaluated by univariate analysis with the Cox regression model. BMI, chemotherapy cycles, tumor location, multi-organ resection, T stage, N stage, and diameter in the long axis were identified as risk factors for OS ( Table 2). All the variables above were included in the multivariant analysis, and after the stepwise regression process, T stage, N stage, BMI group, and tumor location were included in the final multivariable model for OS. A nomogram was then developed based on our Cox proportional hazard model ( Figure 1). Validation of the Nomogram In the training cohort, the C-index of the OS model was 0.74 in the training cohort and 0.75 in bootstrap validations. The calibration curves for 1-year and 3-year OS were shown in Figures 2A and B. The x-axis was the nomogram predicted survival, and the y-axis was the actual survival calculated by the Kaplan-Meier method. In the validation cohort, the C index was 0.693 (95% CI, 0.671-0.715). Good calibration was also shown for the 1-year, 3-year OS ( Figure 2C and D). Our model also showed superiority in discrimination compared with the AJCC TNM system (8th edition). In our previous study, the discrimination of the TNM staging system was evaluated and the C-index was 0.657. 
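As a rough illustration of this modeling workflow (multivariable Cox fitting followed by C-index evaluation), the sketch below uses the lifelines Python library on a synthetic cohort. The variable codings, effect sizes, and censoring scheme are invented for demonstration and do not reproduce the study data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300                                            # synthetic cohort, roughly the study size

df = pd.DataFrame({
    "ypT":       rng.integers(0, 5, n),            # invented codings, for illustration only
    "ypN":       rng.integers(0, 4, n),
    "bmi_high":  rng.integers(0, 2, n),
    "loc_upper": rng.integers(0, 2, n),
})
risk = 0.4 * df["ypT"] + 0.5 * df["ypN"] - 0.3 * df["bmi_high"] + 0.4 * df["loc_upper"]
event_time = rng.exponential(60 * np.exp(-0.3 * risk))       # months until death
censor_time = rng.uniform(12, 72, n)                         # administrative censoring
df["os_months"] = np.minimum(event_time, censor_time)
df["death"] = (event_time <= censor_time).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
print(round(cph.concordance_index_, 3))                          # Harrell's C-index
surv = cph.predict_survival_function(df.head(3), times=[12, 36])  # 1- and 3-year OS
print(surv)
```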
15 Discussion To make an appropriate clinical decision, it is critical for physicians to determine the prognosis of patients who have received neoadjuvant chemotherapy. Prognostic nomograms based on clinicopathologic factors have been developed for patients who received neoadjuvant chemotherapy for breast, 18 esophageal, 19 and colorectal cancer. 20 However, no nomogram for gastric cancer was available due to limited data. To our knowledge, although prognostic factors for gastric cancer patients received neoadjuvant chemotherapy had been widely studied before, 21,22 the ypTNM staging system was the only predictive model available. However, this system was developed from patients with less extensive lymphadenectomy (less than D2), and thus, the predictive value might be limited. In our previous study, it was shown that the discriminative ability of this system was not high enough to meet clinical demand. 15 Moreover, the new ypTNM staging system did not address pCR and ypT0N1 patients. In this study, we developed a nomogram to predict the OS for targeting patients and conducted validation to prove its efficacy. In our final model, T stage, N stage, BMI group and tumor location were independent prognostic factors for survival. It was not surprising to find that T and N stages both independently affect the prognosis. The prognostic role of T and N had been widely discussed and consensus had been reached that a higher stage correlated with a worse prognosis. BMI was the only demographic factor correlated with overall survival in our final model and individuals with a higher BMI had a better prognosis. Several studies are in line with our finding on this point. Kong et al and Tokunaga et al reported a higher 5-year survival after gastrectomy for overweight patients. 23,24 However, in some other cases, BMI was associated with less lymph node dissection, more surgical complications and higher perioperative morbidity. 25,26 Possible explanations for the positive influence of BMI on survival might be that a patient with a higher BMI tends to have a better nutrition status, which increases the tolerance of both neoadjuvant chemotherapy and gastrectomy and thus improve overall survival. In addition, the negative influence of the BMI partly attributed to the increased surgery difficulty and insufficient lymph node dissection. However, all patients included in our research received enough lymph node dissection (D2); thus, the negative influence of a higher BMI might partly be offset. Tumor location was also selected in the final model. Patients with tumors at the lower third lived longer than those with an upper part disease, and those with a tumor diffused at the whole stomach suffered the worst prognosis. This phenomenon was in accordance with many previous studies. The negative influence on survival of an upper part disease was shown in both single and multivariant analyses. 27,28 In a meta-analysis conducted by Petrelli et al, it was shown that compared with distal tumors, proximal tumors suffered a 25% increased risk of mortality. 29 Tumors spreading throughout the whole stomach also showed a negative influence, which was also reported in other pieces of research. 30,31 Although somehow controversial, this phenomenon may be attributed to two aspects, the biological nature of the tumor and different gastrectomy. For the biology nature, some pieces of research correlated an upper part disease with a higher incidence of HER2 positivity, which is an independent risk factor for overall survival. 
32,33 And the increased risk of a diffused disease may be attributed to its aggressive biological features. Regarding gastrectomy, patients with a proximal or diffused cancer always receive a total gastrectomy instead of a distal gastrectomy, which may lead to more complications and worse survival outcomes. 34 Based on the prognostic factors above, a nomogram was then developed. With pT0 patients included and more risk factors considered, this nomogram may be applied to a broader population of patients. Besides, with a C-index of 0.74 in the training cohort and 0.69 in the validation cohort and good calibration, this nomogram predicted more accurately than the 8th AJCC staging system, whose C-index was 0.66. In addition, compared with the TNM system, our nomogram provides a visual tool that is easy to use. Thus, our nomogram may contribute to prognosis prediction and decision-making. There were also several limitations to our study. First, our study did not contain patients at stage IV, so the implications for those patients are limited. Second, due to the limited sample size and the retrospective nature of our research, bias might exist. Lastly, the number of ypT0 patients was limited; thus, the predictive value of our model within this group remains to be seen. Conclusion We developed and validated a nomogram to predict the 1-year and 3-year OS of patients undergoing neoadjuvant chemotherapy, radical gastrectomy, and D2 lymph node dissection. This nomogram uses readily available clinicopathologic factors and predicts survival more accurately than the AJCC TNM staging system. Figure 1: Nomogram developed from 4 clinicopathological parameters (T stage, N stage, BMI group and tumor location) to predict 1- and 3-year survival. The first step to calculate the survival probability is to assign points for each parameter by drawing a vertical line from that variable to the points scale. The second step is to sum all the points and draw a vertical line from the total points to calculate the probability of survival.
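A small sketch of the two-step reading procedure described in the Figure 1 caption is given below; the point values per category are hypothetical placeholders and must in practice be read off the published nomogram axes.

```python
# Hypothetical point table: the real point assignments must be read off the
# published nomogram (Figure 1); every number below is a placeholder.
points = {
    "T_stage":   {"ypT0": 0, "ypT1": 10, "ypT2": 25, "ypT3": 45, "ypT4": 70},
    "N_stage":   {"ypN0": 0, "ypN1": 20, "ypN2": 45, "ypN3": 80},
    "BMI_group": {"high": 0, "normal": 15, "low": 35},
    "location":  {"lower": 0, "middle": 10, "upper": 25, "whole": 50},
}

def total_points(patient):
    # Step 1 of the caption: assign points per variable; step 2: sum them.
    return sum(points[var][patient[var]] for var in points)

example = {"T_stage": "ypT3", "N_stage": "ypN1", "BMI_group": "normal", "location": "lower"}
print(total_points(example))  # the total is then mapped to 1- and 3-year OS on the bottom scales
```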
2020-03-26T10:18:50.066Z
2020-03-19T00:00:00.000
{ "year": 2020, "sha1": "7deec7ef4755710dbf7de24940026e3030448d99", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=56906", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "621fa2541d2f2fc2a20be1ac4f4e4d553306c6f5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
20675034
pes2o/s2orc
v3-fos-license
Precise Balancing of Viscous and Radiation Forces on a Particle in Liquid-Filled Photonic Bandgap Fiber It is shown that, in the liquid-filled hollow core of a single-mode photonic crystal fiber, a micron-sized particle can be held stably against a fluidic counter-flow using radiation pressure, and moved to and fro (over 10s of cm) by ramping the laser power up and down. The results represent a major advance over previous work on particle transport in optically multimode liquid-filled fibers, in which the fluctuating transverse field pattern renders the radiation and trapping forces unpredictable. The counter-flowing liquid can be loaded with sequences of chemicals in precisely controlled concentrations and doses, making possible studies of single particles, vesicles or cells. Such a system would also allow photo-activation of, e.g., novel anti-cancer compounds in the liquid (11), and fluorescence could be monitored either through the cladding or along the guiding core. The unique combination of single-mode guidance and low loss also makes possible precise measurements of the drag forces acting on single particles - as studied in this article. Photonic band gap confinement permits low-loss optical guidance even when the refractive index in the liquid is lower than in the cladding. If only the hollow core is filled with liquid, leaving the cladding holes empty, the large index contrast between cladding and core allows guidance by total internal reflection (8). However, the result is a multimode waveguide in which the transverse intensity pattern is a difficult-to-control, axially varying superposition of many modes. The evanescent field of a single guided optical mode may also be used to propel particles over short distances (~0.1 mm) on planar waveguides (12,13) and in Si slot-waveguides (14). This approach has the disadvantage that the transverse optical field decays exponentially from the surface, making stable optical trapping difficult. Furthermore, the particles are guided very close to the waveguide surface, resulting in asymmetric drag forces, in contrast to the experiments reported here, where the particle is held in the middle of the flowing liquid. Experimental arrangements The photonic crystal fibre used had a core diameter of 17 µm (see Fig. 1) and was designed, following known scaling laws (15,16), for single-mode guidance at a wavelength of 1064 nm when filled with deuterium oxide. D2O was used because its absorption (0.04 dB cm-1) at the trapping wavelength of 1064 nm is 15 times lower than that of H2O (0.6 dB cm-1), minimising the effects of laser heating. Custom-designed liquid cells were used in the filling process. Coupling to the guided mode was optimized using an objective lens (4× 0.1 NA) that matched the numerical aperture of the guided mode in the liquid-filled fibre. Launch efficiencies of ~89% into the fundamental core mode were achieved. Fig. 1C shows the measured near-field mode profile at the output face of an 11 cm long piece of D2O-filled fibre. Robust single-mode guidance was obtained over the wavelength range 790 nm to 1140 nm. Using a cut-back technique, the loss was measured to be 0.05 dB cm-1 at 1064 nm - only slightly higher than the absorption of D2O. An 11 cm length of liquid-filled fibre was placed horizontally on a glass plate, its input face oriented parallel to a vertical glass window (100 µm thick) and immersed in a D2O droplet (Fig. 1D).
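As a quick check of the material choice, the fragment below converts the quoted attenuation figures into the fraction of light surviving the 11 cm fibre; it is simple dB arithmetic on the stated numbers, not additional measured data.

```python
# Transmission after 11 cm at the quoted attenuation figures (simple dB arithmetic)
length_cm = 11.0
for label, loss_db_per_cm in [("D2O-filled fibre, 1064 nm", 0.05),
                              ("H2O absorption alone", 0.6)]:
    total_db = loss_db_per_cm * length_cm
    transmission = 10 ** (-total_db / 10)
    print(f"{label}: {total_db:.2f} dB -> {100 * transmission:.0f}% transmitted")
```

The D2O-filled fibre retains roughly 88% of the launched power over its full length, whereas water absorption alone would leave only about 22%, which is why deuterium oxide was preferred.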
The end-face of the fibre was enclosed in a pressure cell and imaged through an optical window using camera CCD4 (Fig. 1E). The pressure applied to the system could be adjusted over the range ±2 kPa by raising or lowering the D 2 O reservoir. The set-up allowed the peak flow velocity to be accurately adjusted over the range ±266 µm s -1 . The light from a continuous wave Nd:YAG laser (1064 nm) was divided at a beam-splitter into guidance and loading beams (Fig. 1D). The loading beam was focused by a long 5 working-distance (100× 1.1 NA) water immersion objective, forming a conventional singlebeam optical tweezer trap (17). Cameras CCD1 and CCD2, monitoring the input face from orthogonal directions, allowed three-dimensional control of particle position. 6 A small amount of dilute silica sol was added to the D 2 O droplet at the fibre input face. A single particle was selected from those in the droplet, trapped by the loading beam, and moved to the entrance of the fibre core. Photographs of this process, as seen by camera CCD1, are shown in Fig. 2A-C. Once the particle had reached the core entrance, the loading beam was blocked, and the horizontal guidance beam was used to push it into the core. The image from CCD2 in Fig. 2D shows the particle trapped just outside the core entrance by a combination of fluid counter-flow and radiation force. Upon increasing the optical power, the particle is pushed into the core, after which the transmitted power drops by ~40%. Once securely trapped inside the fibre, the particle could be moved to and fro by adjusting the laser power and the fluid counter-flow. Fig. 2. Loading, launching and guidance of a particle (diameter 6 µm). (A) to (C) tweezering a particle up to the entrance to the core; (D) side-view of the particle held at the entrance to the core by optical forces balanced against counter-flow of liquid from the core. Fluid flow and gravity push the particle slightly below core centre. While in this position the particle could be seen to revolve under the action of imbalanced viscous forces; (E) to (H) side-scattering patterns imaged through the cladding of the fibre, photographed at 1 s intervals. Theory The Reynolds number (= 2ρVR / η , where ρ is the liquid density, V the average fluid velocity, R = 8.5 µm is the core radius and η the dynamic viscosity) has the value 0.0015 for V =100 µm s -1 (typical in the experiments), ρ = 1106 kg m -3 and η = 0.00125 N s m −2 , indicating laminar flow. This allows us to use Hagen-Poiseuille theory for an incompressible fluid. Theory shows that the flow rate is not noticeably affected by opposing particle motion under our experimental conditions (ζ = a/R < 0.4 where a is the particle radius). The viscous drag force on a particle being pushed through a constrained counter-flow is complicated to calculate, requiring numerical methods (18). Two limiting regimes can be identified. The first arises when the flow is zero and the particle proceeds at constant speed under the action of the optical force, and the second when the particle is held stationary against the flow by the optical force. In the general case, the net drag force is the sum of these two, and can be written: where V max = (R 2 / 4η) ⋅ dp / dz is the fluid velocity in the centre of the core, V p the particle velocity, η is the viscosity and p is the pressure at point z along the fibre. 
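As a quick check of these estimates, the numbers quoted above can be plugged in directly. The sketch below is illustrative only and is not part of the original analysis; the pressure difference is assumed to act across the full 11 cm fibre length.

```python
# Sanity check of the flow estimates quoted in the text (illustrative sketch only).
rho = 1106.0      # D2O density, kg m^-3
eta = 0.00125     # dynamic viscosity, N s m^-2
R   = 8.5e-6      # core radius, m
V   = 100e-6      # typical average flow velocity, m s^-1

Re = 2 * rho * V * R / eta
print(f"Reynolds number: {Re:.1e}")   # ~1.5e-3, deep in the laminar regime

# Poiseuille centre-line velocity, assuming the full 2 kPa pressure difference
# falls across the 11 cm length of liquid-filled fibre (an assumption, not a
# measured gradient).
dp_dz = 2e3 / 0.11                    # Pa m^-1
V_max = R**2 / (4 * eta) * dp_dz
print(f"Peak flow velocity: {V_max*1e6:.0f} um/s")   # ~260 um/s, close to the quoted +/-266 um/s range
```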
Using data from Quddus (18), the numerically-evaluated correction factors K 1 and K 2 can be represented (<1% error) by the following polynomials (19): The optical forces are more difficult to estimate in a waveguide geometry, where it is not clear that the assumptions of the standard ray-optics approach (20) are valid. Nevertheless, in order to provide a basis for comparison, we describe in the Supporting Online Material (19) an analysis where the light guided in the core is represented by a bundle of rays travelling parallel to the axis, with intensities following the J 0 2 ( j 01 r/R) shape expected for the fundamental mode ( j 01 is the first zero of the J 0 Bessel function). The momentum transferred to the particle is calculated for each ray, and then integrated over all rays. The propulsive force for a particle with radius a sitting in the centre of the beam can be represented by the polynomial: with a in µm and R = 8.5 µm. The refractive indices of silica and D 2 O were taken to be 1.45 and 1.33 respectively. Under the same conditions, the lateral restoring force per unit displacement from centre is: Using these expressions, it is interesting to calculate the downward displacement due to gravity of a silica particle optically held in a horizontal D 2 O-filled fibre. At 50 mW laser power, it is 0.5 µm for a = 1 µm and 0.62 µm for a = 3 µm, taking the density of silica as 10 2000 kg m -3 . If flowing, the liquid will provide an extra trapping force, further reducing this already small deflection. Results Measurements were made for two different drag regimes. In the first, the particle velocity in the absence of any flow was measured via side-scattering using camera CCD3. A sequence of typical photographs, taken at 1 second intervals, is shown in Fig. 2E-H. The velocities are plotted against optical power in Fig. 3A, showing that the power-dependent optical transport velocity dV p /dP opt lies in the range of 0.5 to 1 mm s -1 W -1 for particle radii between 1 and 3 µm. In the second experiment, the reservoir was positioned so as to create a continuous flow against the direction of the light, and the optical power was adjusted so that the particle remained stationary in the laboratory frame. This was repeated for a range of different pressure gradients, and for two different sphere sizes. The results show a linear relationship (with slopes d 2 P H /dz.dP opt from 0.7 to 1 kPa cm -1 W -1 , depending on particle size) between pressure gradient and the optical power needed to keep the particle stationary ( Fig. 3B). Analysis Comparisons with the predictions of theory are shown in Table 1. At zero flow, theory consistently over-estimates, by a factor of ~3 for the larger particles and ~1.8 for the smaller ones, the power needed to reach a given particle velocity. The disagreement is slightly larger for the pressure gradient required to make the particles stationary; in this case theory over-estimates the power required by factors of ~2.5 and ~4 for small and large particles. We suggest that this disagreement may be due to the waveguide geometry, which restricts the free propagation of rays escaping from the particle. In particular, some of the rays will lie within the capture angle of the waveguide mode. The coherent sum over all such rays is unlikely to have a phase or amplitude profile that matches the guided mode, which will result in less transmitted light in the guided mode and a stronger propulsive force. 
Another possible contributing factor is the presence of Mie resonances, which could also increase the propulsive optical force for a given power. Both these effects will also be more pronounced for larger particles. A full explanation of this must, however, await the results of an on-going analysis of the complex scattering behaviour of a particle in hollowcore photonic crystal fibre. Outlook The system described offers fresh possibilities for studying the forces acting on particles in microfluidic channels. For example, if a trapped particle is pushed sideways using a laterally-focused laser beam (which can be delivered through the cladding (21) ), the imbalance of viscous drag on opposite sides will cause it to spin, enhancing chemical reactions at the particle surface. Such effects have already been observed (see above) when the particle is being launched into the fibre. By loading the flowing liquid with chemicals in sequence (and perhaps activating them photolytically by side-illumination), an optically-trapped particle could be coated with multiple layers of different materials in a highly controlled manner, the reaction being monitored using in-or through-fibre spectroscopy. This technique could have uses, for example, in the development and optimisation of colloid-based catalysts. The liquid-filled fibre can be viewed as a miniature "riser-downer" reactor, the chemicals needed for synthesis flowing one way, while the particles flow the other. It should also be possible to implement this technique in the gas phase by filling the fibre with suitable gases or vapours. In biomedical research, while a cell is optically held against a counter flow, minute amounts of drugs (perhaps photo-activated) could be loaded into the liquid. The highly controlled micro-environment could then be used to study the effectiveness of chemical therapy at the single cell level. Since the refractive index of cancer cells is higher than that of healthy ones (1.37), it may even be possible to distinguish them by their larger velocities under optical propulsion through the fluid. 14 The spectrally broad window of transmission would allow in situ spectroscopic analysis of particles, cells or quantum dots trapped in the waveguide. Mie resonances, if present, could be used as an indicator of particle size, and micro-Raman techniques could be used to detect changes in the cell membrane or chemical structure. Finally, the system could be used as a flexible opto-fluidic interconnect for transporting particles or cells from one microfluidic circuit to another. (21)). The correction factors are the values needed to make theory and experiment agree. The last two rows compare the experimental results for the two limiting cases of viscous drag with the theoretical predictions of Quddus (19); for identical optical power the propulsive force in each case is the same. S1. Viscous drag forces Two limiting regimes of viscous drag force can be identified for a particle being pushed through a constrained counter-flow in a narrow pipe. The first arises when the particle is held stationary against the flow by the optical force, and the second when the flow is zero and the particle proceeds at constant speed under the action of the optical force. 
In the general case, the net drag force is the sum of these two, and can be written as (19): where V max = (R 2 / 4η) ⋅ dp / dz is the fluid velocity in the centre of the core (calculated assuming Poiseuille flow), V p the particle velocity, η is the viscosity and p is the pressure at point z along the fibre. Using data from Quddus, the numerically-evaluated correction factors K 1 and K 2 can be represented (<1% error) by the following polynomials: S2. Optical forces based on ray picture We represent the light guided in the core as a bundle of rays travelling parallel to the axis, with intensities following the J 0 2 ( j 01 r/R) shape expected for the fundamental mode ( j 01 is the first zero of the J 0 Bessel function). The momentum transferred to the particle is calculated for each ray, and then integrated over all rays. A transmitted ray is generated every time the ray inside the sphere strikes the boundary. Summing over all these rays, and including the incident ray, yields the element of force: where dP is the power in the incident ray and n L the refractive index of the liquid. The Fresnel power coefficients R and T take the usual forms, and the angle θ is given by Snell's law for angle of incidence α (see Fig. S1A). The transverse force dF xy points in the plane of the rays in Fig. S1A. To find the total forces acting on the sphere, these elemental forces are integrated over all incident rays. For small index contrast the Fresnel coefficients for s and p polarisation are very similar, so we have followed Ashkin in taking the mean of the two forces. After numerical integration, the total propulsive force for a particle with radius a sitting in the centre of the core (radius R = 8.5 µm) works out to be: Under the same conditions, the lateral restoring force per unit displacement from centre is: k T = 0.0387 a − 0.106 a 2 + 1.58 a 3 − 0.0653a 4 − 0.0181a 5 pN W −1 µm −1 (s5) Fig. S1. Ray path inside spherical particle. The upper sketch shows a plan-view of the rays propagating in the plane AA, up to the second internal reflection. The lower figure is the particle as seen by the incident guided mode -the plane AA is tilted in the (x,y) transverse plane. The total force on the particle is obtained by integrating over all incident points P. S3. Balancing viscous and optical forces The balance between viscous forces and the z-component of the optical force is obtained by equating (s4) and (s1) for the two limiting cases of stationary particle (non-zero flow): F p (a)P opt = 6π η a V max K 2 (a / R) (s6) and zero flow (moving particle): F p (a)P opt = 6π η a V p K 1 (a / R) (s7) with laser power P opt in W. V max is the flow velocity in the middle of the core, which can be calculated from the pressure gradient assuming Poiseuille flow (see main text of article).
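To illustrate how the balance in Eq. (s6) is used, here is a minimal sketch for the stationary-particle case. The fitted polynomials for the propulsive force F_p(a) and the drag correction K_2(a/R) are not reproduced above, so the two numerical values below are order-of-magnitude placeholders (K_2 chosen as a plausible confinement enhancement for a/R ≈ 0.35, and F_p inferred roughly from the measured ~1 mm s⁻¹ W⁻¹ optical transport velocity), not the paper's coefficients.

```python
# Minimal sketch of the stationary-particle force balance in Eq. (s6).
# F_p (propulsive force per watt) and K_2 (confined-drag correction) are
# placeholder values, since the fitted polynomials are not reproduced here.
import math

eta = 0.00125          # N s m^-2, D2O viscosity
R   = 8.5e-6           # m, core radius
a   = 3.0e-6           # m, particle radius

dp_dz = 1.0e3 / 1.0e-2               # Pa m^-1, i.e. 1 kPa per cm (example value)
V_max = R**2 / (4 * eta) * dp_dz     # Poiseuille centre-line velocity

K2  = 2.0              # assumed drag enhancement for zeta = a/R ~ 0.35 (placeholder)
F_p = 100e-12          # N W^-1, assumed propulsive force per unit laser power (placeholder)

drag  = 6 * math.pi * eta * a * V_max * K2   # confined Stokes drag on the held particle
P_opt = drag / F_p                           # laser power balancing the drag
print(f"V_max = {V_max*1e6:.0f} um/s, required power ~ {P_opt:.1f} W")
```

With these placeholder values the required power comes out at roughly 2 W for a 1 kPa cm⁻¹ gradient, the same order of magnitude as implied by the measured slopes of 0.7–1 kPa cm⁻¹ W⁻¹.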
Network Events on Multiple Space and Time Scales in Cultured Neural Networks and in a Stochastic Rate Model

Cortical networks, in-vitro as well as in-vivo, can spontaneously generate a variety of collective dynamical events such as network spikes, UP and DOWN states, global oscillations, and avalanches. Though each of them has been variously recognized in previous works as an expression of the excitability of the cortical tissue and the associated nonlinear dynamics, a unified picture of the determinant factors (dynamical and architectural) is desirable and not yet available. Progress has also been partially hindered by the use of a variety of statistical measures to define the network events of interest. We propose here a common probabilistic definition of network events that, applied to the firing activity of cultured neural networks, highlights the co-occurrence of network spikes, power-law distributed avalanches, and exponentially distributed 'quasi-orbits', which offer a third type of collective behavior. A rate model, including synaptic excitation and inhibition with no imposed topology, synaptic short-term depression, and finite-size noise, accounts for all these different, coexisting phenomena. We find that their emergence is largely regulated by the proximity to an oscillatory instability of the dynamics, where the non-linear excitable behavior leads to a self-amplification of activity fluctuations over a wide range of scales in space and time. In this sense, the cultured network dynamics is compatible with an excitation-inhibition balance corresponding to a slightly sub-critical regime. Finally, we propose and test a method to infer the characteristic time of the fatigue process from the observed time course of the network's firing rate. Unlike the model, which possesses a single fatigue mechanism, the cultured network appears to show multiple time scales, signalling the possible coexistence of different fatigue mechanisms.

Introduction

The spontaneous activity of excitable neuronal networks exhibits a spectrum of dynamic regimes, ranging from quasi-linear, small fluctuations close to stationary activity to dramatic events such as abrupt and transient synchronization. Understanding the underpinnings of such dynamic versatility is important, as different spontaneous modes may in general imply different state-dependent response properties to incoming stimuli and different information-processing routes. Cultured neuronal networks offer a controllable experimental setting that opens a window onto network excitability and its dynamics, and have been used intensively for this purpose. In addition, recent observations in-vitro (and later even in-vivo) revealed a rich structure of network events ('avalanches') that attracted much attention because their spatial and temporal structure exhibited features (power-law distributions) reminiscent of what is observed in a 'critical state' of a physical system (see e.g. [7,8], and [9,10] and references therein). Generically, an avalanche is a cascade of neural activities clustered in time; while there is ongoing debate on the relation between observed avalanches and whatever 'criticality' may mean for brain dynamics [11], understanding their dynamical origin remains on the agenda.
Quasi-synchronous NS, avalanches and small activity fluctuations are frequently coexisting elements of the network dynamics.Besides, as we will describe in the following, in certain conditions one can recognize network events which are clearly distinct from the mentioned network events, which we name here as 'quasi-orbits'. The abundant modeling literature on the above dynamical phenomena has been frequently focused on specific aspects of one of them; on the other hand, getting a unified picture is made often difficult by different assumptions on the network's structure and constitutive elements and, importantly, by different methods used to detect the dynamic events of interest. In the present paper we define a common probabilistic criterium to detect various coexisting dynamic events (NS, avalanches and quasi-orbits) and adopt it to analyze the spontaneous activity recorded from both cultured networks, and a computational rate model. Most theoretical models accounting for NS are based on an interplay between network self-excitation on one side, and on the other side some fatigue mechanism provoking the extinction of the network spike.For such a mechanism two main options, up to details, have been considered: neural 'spike-frequency adaptation' [3,12] and synaptic 'short-term depression' (STD) [4,5,[13][14][15][16].Despite their differences, both mechanisms share the basic property of generating an activity-dependent self-inhibition in response to the upsurge of activity upon the generation of a NS, the more vigorously, the stronger the NS (i.e. the higher the average firing rate).In this paper, we will mainly focus on STD, stressing the similarities of the two mechanisms, yet not denying their possibly different dynamic implications. While STD acts as an activity-dependent self-inhibition, the self-excitability of the network depends on the balance between synaptic excitation and inhibition; investigating how such balance, experimentally modifiable through pharmacology, influences the dynamics of spontaneous NSs is interesting and relevant as a step towards the identification of the 'excitability working point' in the experimental preparation. To study the factors governing the co-occurrence of different network events and their properties we adopt a rate model for the dynamics of the global network activity, that takes into accounts finite-size fluctuations and the synaptic interplay between one excitatory and one inhibitory population, with excitatory synapses being subject to STD. On purpose we implicitly exclude any spatial topology in the model, which is meant to describe the dynamics of a randomly connected, sparse network, since we intend to expose the exquisite implications of the balance between synaptic excitation and inhibition, and the activity-dependent self-inhibition due to STD.In doing this, we purposely leave out not only the known relevance of a topological organization [9,17,18], but also the role of cliques of neurons which have been proposed to play a pivotal role in the the generation of NS as functional hubs [19], as well as the putative role of 'leader neurons'. 
We perform a systematic numerical and analytical study of NSs for varying excitation/inhibition balance.The distance from an oscillatory instability of the mean-field dynamics (in terms of the dominant eigenvalue of the linearized dynamics) largely appears to be the sole variable governing the statistics of the inter-NS intervals, ranging from a very sparse, irregular bursting (coefficient of variation CV ∼ 1) to a sustained, periodic one (CV ∼ 0).The intermediate, weakly synchronized regime (CV ∼ 0.5), in which the experimental cultures are often observed to operate, is found in a neighborhood of the instability that shrinks as the endogenous fluctuations in the network activity become smaller with increasing network size. Moreover, the model robustly shows the co-presence of avalanches with NS and quasi-orbits.The avalanche sizes are distributed according to a power-law over a wide region of the excitation-inhibition plane, although the crossing of the instability line is signaled by a bump in the large-size tail of the distribution; we compare such distributions and their modulation (as well as the distributions of NS) across the instability line with the experimental results from cortical neuronal cultures; again the results appear to confirm that neuronal cultures operate in close proximity of an instability line. Taking advantage of the fact that the sizes of both NS and quasi-orbits are found to be significantly correlated with the dynamic variable associated with STD (available synaptic resources) just before the onset of the event, we developed a simple optimization method to infer, from the recorded activity, the characteristic time-scales of the putative fatigue mechanism at work.We first tested the method on the model, and then applied it to in-vitro recordings; we could identify in several cases one or two long time-scales, ranging from few hundreds milliseconds to few seconds. Weak or no correlations were found instead between avalanche sizes and the STD dynamics; this suggests that avalanches originate from synaptic interaction which amplifies a wide spectrum of small fluctuations, and are mostly ineffective in eliciting a strong self-inhibition. 
Experimental data Cortical neurons were obtained from newborn rats within 24 hours after birth, following standard procedures [2].Briefly, the neurons were plated directly onto a substrate-integrated multielectrode array (MEA).The cells were bathed in MEM supplemented with heat-inactivated horse serum (5%), glutamine (0.5 mM), glucose (20 mM), and gentamycin (10 µg/ml) and were maintained in an atmosphere of 37 • C, 5% CO2/95% air in a tissue culture incubator as well as during the recording phases.The data analyzed here was collected during the third week after plating, thus allowing functional and structural maturation of the neurons.MEAs of 60 Ti/Au/TiN electrodes, 30 µm in diameter, and spaced 200 µm from each other (Multi Channel Systems, Reutlingen, Germany) were used.The insulation layer (silicon nitride) was pretreated with poly-D-lysine.All experiments were conducted under a slow perfusion system with perfusion rates of ∼100 µl/h.A commercial 60-channel amplifier (B-MEA-1060; Multi Channel Systems) with frequency limits of 1-5000 Hz and a gain of 1024× was used.The B-MEA-1060 was connected to MCPPlus variable gain filter amplifiers (Alpha Omega, Nazareth, Israel) for additional amplification.Data was digitized using two parallel 5200a/526 analog-to-digital boards (Microstar Laboratories, Bellevue, WA.)Each channel was sampled at a frequency of 24000 Hz and prepared for analysis using the AlphaMap interface (Alpha Omega.)Thresholds (8× root mean square units; typically in the range of 10-20 µV ) were defined separately for each of the recording channels before the beginning of the experiment.The electrophysiological data is freely accessible for download at https://technion.academia.edu/ShimonMarom/Neural-Network-Data-(Marom's-Lab). Network rate dynamics A set of Wilson-Cowan-like equations [20] for the spike-rate of the excitatory (ν E ) and the inhibitory (ν I ) neuronal populations lies at the core of our dynamic mean-field model: where τ E and τ I represent two characteristic times (of the order of few to few tens of ms), and Φ is the gain function of the input currents, I E and I I , that in turn depend on ν E and ν I .We chose Φ to be the transfer function of the leaky integrate-and-fire neuron under the assumptions of Gaussian, uncorrelated input of mean µ and infinitesimal variance σ 2 [21]: where τ V is the membrane time constant, τ ARP is a refractory period, and V rest , V reset , and V thresh are respectively the rest, the post-firing reset, and the firing-threshold membrane potential of the neuron (we assume the membrane resistance R = 1).The values of the parameters are shown in Table 1.The model incorporates the non-instantaneous nature of synaptic transmission in its simplest form, by letting the νs being low-pass filtered by a single synaptic time-scale τ : One can regard the variables νs as the instantaneous firing rates as seen by post-synaptic neurons, after synaptic filtering.The form of Eq. 3 and our choice of τ values (see Table 1) implicitly neglects slow NMDA contributions and is restricted to AMPA and GABA synaptic currents.Thus, the input currents I E and I I in Eq. 1 will be functions of the rates νs through these filtered rates; with reference to Eq. 
2, the model assumes the following form for the mean and the variance of the current I E (the expressions for I I are similarly defined): where the n E and n I are the number of neurons in the excitatory and inhibitory population respectively; c is the probability of two neurons being synaptically connected; J EE (J EI ) is the average synaptic efficacy from an excitatory (inhibitory) pre-synaptic neuron to an excitatory one, σ 2 J is the variance of the J-distribution; finally, an external current is assumed in the form of a Poisson train of spikes of rate ν ext driving the neurons in the network with average synaptic efficacy J ext .In Eq. 4 r E (t) (0 < r E < 1) is the fraction of synaptic resources available at time t for the response of an excitatory synapse to a pre-synaptic spike; the evolution of r E evolves according to the following dynamics, which implements the effects of short-term depression (STD) [22,23] into the network dynamics: where 0 < u STD < 1 represents the (constant) fraction of the available synaptic resources consumed by an excitatory postsynaptic potential, and τ STD is the recovery time for the synaptic resources.Finally, for a network of n neurons, we introduce finite-size noise by assuming that the signal the synapses integrate in Eq. 3 is a random process ν n of mean ν; in a time-bin dt, we expect the number of action potentials fired to be a Poisson variable of mean n ν(t) dt; Eq. 3 will thus become: Putting all together, the noisy dynamic mean-field model is described by the following set of (stochastic) differential equations: complemented by Eqs. 2, 4, and 6. Spike-frequency adaptation (SFA) (only included in simulations of Fig. 9) is introduced by subtracting a term to the instantaneous mean value of the proportional to the instantaneous value of the variable c E , that simply integrates ν n E : with a characteristic time τ SFA .This additional term aims to model an afterhyperpolarization, Ca 2+ -dependent K + current [24,25].In this sense, c E can be interpreted as the cytoplasmic calcium concentration [Ca 2+ ]), whose effects on the network dynamics are controlled by the value of the "conductance" g SFA .Simulations are performed by integrating the stochastic dynamics with a fixed time step dt = 0.25 ms. In the following, by "spike count" we will mean the quantity ν(t) n dt. Network events detection For the detection of network events (NSs, quasi-orbits, and avalanches) we developed a unified approach based on Hidden Markov Models (HMM) [26]. Despite HMM have been widely used for temporal pattern recognition in many different fields, to our knowledge few attempts have been made to use them in the context of interest here [27,28].For the purpose of the present description, we just remind that a HMM is a stochastic system that evolves according to Markov transitions between "hidden", i.e. 
unobservable, states; at each step of the dynamics the visible output depends probabilistically on the current hidden state.Such models can be naturally adapted to the detection of network events, the observations being the number of detected spikes per time bin, and the underlying hidden states, between which the system spontaneously alternates, being associated with high or low network activity ('network event -no network event').A standard optimization procedure adapts then the HMM to the recorded activity sample by determining the most probable sequence of hidden states given the observations.The two-step method we propose is based on HMM, has no user-defined parameters, and automatically adapts to different conditions. In the first step, the algorithm finds the parameters of the two-state HMM (one low-activity state, representing the quasi-quiescent periods, and one high-activity state, associated with network events) that best accounts for a given sequence of spike counts -the visible states in the HMM; such fitting is performed through the Baum-Welch algorithm [26].In the second step, the most probable sequence for the two alternating hidden levels, given the sequence of spike counts and the fitted parameters, is found through the Viterbi algorithm.Network events are identified as the periods of dominance of the high activity hidden state. In order to retain only the most significant events a minimum event duration is imposed; such threshold is self-consistently determined as follows.The second step of the algorithm is applied to a "surrogate" time-series obtained by randomly shuffling the original one; calling x the vector listing the durations of the found events, and x 75 its (estimated) 75th percentile, the threshold is determined as log 0.25 × 10 3 ( x x>x 75 − x 75 ) + x 75 .The motivation behind this procedure lies in a distribution of event durations for surrogate data consistently found to have a roughly exponential large-value tail; the average of such tail-distribution is estimated, and the threshold is then set to a value such as to have a probability P = 10 −3 of finding a surrogate event of duration larger than this value.The procedure was found to give results that are more consistent for different realizations of the surrogate data-set than just taking the 99.9-th percentile of the detected event durations. For the results presented in Figs. 2 and 3, a further analysis is carried out.The distribution of network event sizes is often found (both in simulations and in experimental data) to be well approximated by a sum of an exponential and a normal distribution (see Results for further discussion): The parameters of the two distributions and their relative weight 0 ≤ p 0 ≤ 1 are estimated by maximizing the log-likelihood on the data.A threshold for the event size is determined as the value having equal probability of being generated by either the exponential or the normal distribution.In Figs. 2 and 3 NSs are defined as events having size larger than this threshold.In those cases in which a threshold smaller than the peak of the normal distribution could not be determined, no threshold was set. 
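To make this classification step concrete, the following sketch (our own illustrative Python, not the authors' code; names such as `sizes` and `fit_mixture` are ours) fits the exponential-plus-Gaussian mixture by maximum likelihood and locates the equal-probability threshold separating quasi-orbits from network spikes.

```python
# Illustrative sketch of the exponential + Gaussian mixture fit for event sizes,
# and of the equal-probability threshold used to separate quasi-orbits from NSs.
import numpy as np
from scipy.optimize import minimize, brentq
from scipy.stats import expon, norm

def neg_log_likelihood(params, x):
    p0, scale, mu, sigma = params
    pdf = p0 * expon.pdf(x, scale=scale) + (1 - p0) * norm.pdf(x, mu, sigma)
    return -np.sum(np.log(pdf + 1e-300))

def fit_mixture(sizes):
    x0 = [0.5, np.median(sizes), np.percentile(sizes, 90), np.std(sizes)]
    bounds = [(1e-3, 1 - 1e-3), (1e-6, None), (None, None), (1e-6, None)]
    res = minimize(neg_log_likelihood, x0, args=(sizes,), bounds=bounds)
    p0, scale, mu, sigma = res.x
    # Threshold: size at which the two weighted component densities are equal.
    # brentq raises if no crossing exists below the Gaussian peak, mirroring the
    # "no threshold set" case described in the text.
    f = lambda s: p0 * expon.pdf(s, scale=scale) - (1 - p0) * norm.pdf(s, mu, sigma)
    thr = brentq(f, scale * 1e-2, mu)
    return p0, scale, mu, sigma, thr

# Example with synthetic data: small quasi-orbits plus large, narrowly distributed NSs.
rng = np.random.default_rng(0)
sizes = np.concatenate([rng.exponential(50, 500), rng.normal(400, 60, 300)])
print(fit_mixture(sizes))
```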
As already remarked, we used essentially the same algorithm for detecting NS/quasi-orbits and avalanches.The only significant difference is that, in the case of avalanches, the emission probability of the low-activity hidden state is kept fixed during the Baum-Welch algorithm to p(n) δ n0 (δ ij is the Kronecker delta; p(n) is the probability of emitting n spikes in a time-bin).Thus the lower state is constrained to a negligible probability of outputting non-zero spike-counts, conforming to the intuition that in between avalanches the network is (almost) completely silent.More precisely, we set p(1) = 10 −6 n , where n is the average number of spikes that the network emits during a time-bin dt.After the modified Baum-Welch first step, avalanches are determined, as above, by applying the Viterbi algorithm; no threshold is applied in this case, neither to the avalanche duration nor to its size. Simulations and data analysis have been performed using custom-written mixed C++/ MATLAB (version R2013a, Mathworks, Natick, MA) functions and scripts. Size distribution for quasi-orbits Consider a generic planar linear dynamics with noise: where A is 2×2 real matrix, and ξ = (ξ(t), 0) is a white noise with ξ(t)ξ(t ) = δ(t−t ).We here assume that the system is close to a Hopf bifurcation; in other words that the matrix A has complex-conjugated eigenvalues λ ± = λ + i λ, with λ < 0 and | λ| λ.By means of a linear transformation, the system can be rewritten as: with σ x and σ y constants determined by the coordinate transformation.Making use of Itō's lemma to write: and summing the previous two equations, we find for the square radius As long as λ | λ|, it is physically sound to make the approximation: for 0 ≤ t ≤ T = 2 π/ λ and then to average the variance of the noise over such period to get: in order to rewrite Eq. ( 13) as: Such stochastic differential equation is associated with the Fokker-Planck equation: that admits an exponential distribution as stationary solution: that is, a Rayleigh distribution for w: Results In the following, we will study a stochastic firing-rate model and make extensive comparison of its dynamical behavior with the activity of ex-vivo networks of cortical neurons recorded through a 60-channel multielectrode array. The stochastic firing-rate model consists of two populations of neurons, one excitatory and one inhibitory, interacting through effective synaptic couplings; excitatory synaptic couplings follow a dynamics mimicking shortterm depression (described after the Tsodyks-Markram model, [22]).We adopted the transfer function of the leaky integrate-and-fire neuron subject to white-noise current with drift [21] as the single population input-output function; moreover the activity of each population is made stochastic by adding realistic finite-size noise.Working with a noisy mean field model allows in principle to easily sweep through widely different network sizes and, more importantly, allows us to perform the stability analysis. To start the exploration that follows, we chose a reference working point where the model's dynamics has a low-rate fixed point (2 − 4 Hz) just on the brink of an oscillatory instability or, in other words, where the dominant eigenvalue λ of the dynamics, linearized around the fixed point, is complex with null real part.The model network (Fig. 1, panel A) shows in proximity of this point a dynamical behavior qualitatively similar, in terms of population spikes, to what is observed in ex-vivo neuronal networks (Fig. 
1, panel B).deterministically when the network is again in the condition of generating a new NS.Weak excitability, on the other hand, leads to rare NSs, approaching a Poisson statistics (CV INSI 1), since excitability is so low that fluctuations are essential for recruiting enough activation to elicit a NS, with STD playing little or no role at the ignition time; below an "excitation threshold", NSs disappear. Excitation-inhibition balance and NS statistics The solid lines in Fig. 2 are derived from the linearization of the 5dimensional dynamical system (see Eq. 7), and are curves of iso-λ, where λ is the dominant eigenvalue of the Jacobian: λ = 0 Hz (black line, signaling a Hopf bifurcation in the corresponding deterministic system), λ = 3.5 Hz (red line), and λ = −3.5 Hz (blue line).Values of CV found in typical cultured networks are close to model results near the bifurcation line λ = 0 Hz.We observe, furthermore, that such lines roughly follows iso-INSI and iso-CV INSI curves, suggesting that a quasi one-dimensional representation might be extracted. We show in Fig. 3 INSI (panel A) and CV INSI (panel B) against λ for the same networks (circles) of Fig. 2, and for a set of larger networks (N = 8000 neurons, squares) that are otherwise identical to the first ones, pointwise in the excitation-inhibition plane.The difference in size amounts, for the new, larger networks, to weaker endogenous noise entering the stochastic dynamics of the populations' firing rates.The points are seen to approximately collapse onto lines for both sets of networks, thus confirming λ as the relevant control quantity for INSI and CV INSI .It is seen that, for the smaller networks, INSI and CV INSI depend smoothly on λ, due to finite-size effects smearing the bifurcation.Also note the branch of points (marked by superimposed crosses) for which λ = 0 and then no oscillatory component is present, corresponding to points in the extreme top-left region of the planes in Fig. 2. For the set of larger networks, the dependence of INSI and CV INSI on the λ is much sharper, as expected given the much smaller finite-size effects; this shrinks the available region, around the instability line, allowing for intermediate, more biologically plausible values of CV INSI . We remark that NSs are highly non-linear and relatively stereotyped events, typical of an excitable non-linear system.The good predictive power of the linear analysis for the statistics of INSI signals that relatively small fluctuations around the system's fixed point, described well by a linear analysis, can ignite a NS. A spectrum of network events Our mean-field, finite-size network is a non-linear excitable system which, to the left of the Hopf bifurcation line, and close to it, can express different types of excursions from the otherwise stable fixed point.Large (almost stereotyped for high excitation) NSs are exquisite manifestations of the nonlinear excitable nature of the system, ignited by noise; the distribution of NS size (number of spikes generated during the event) is relatively narrow and symmetric. Noise can also induce smaller, transient excursions from the fixed point which can be adequately described as quasi-"orbits" in a linear approximation.In fact, noise induces a probability distribution in the maximum distance w from the fixed point reached by the system in each of such events.Such distribution can be computed (see Methods, Eq. 
18), and it is a Rayleigh distribution (p(w) ∝ w exp(−α w 2 )).On the other hand, we found a high correlation between w and the duration of the event.Therefore the event size (the 'area' below the firing rate time profile during the excursion from the fixed point) is expected to scale as w 2 , so that it should be exponentially distributed.Fig. 4, panel A, shows the activity of a simulated network (blue line) alongside with detected network events.At the single event level the distinction between a NS and a quasi-orbit cannot be other than statistical.From the best fit for the expected size distribution (an exponential plus a Gaussian distribution) a threshold for the event size can be determined to separate events that are (a-posteriori ) more probably quasi-orbits from the ones that are more probably NSs (for details, see Methods).Following such classification, the green line in Fig. 4, panel A, marks the detection of two NSs (first and third event) and two quasi-orbits (second and fourth event). As one moves around the excitation-inhibition plane, to the left of the 4. Algorithms for network events detection.Panel A: total network activity from simulation (blue line) with detected NS/quasi-orbits (green line) and avalanches (red line).Four large events (green line) are visible; the first and third are statistically classified as network spikes; the other smaller two as quasi-orbits.Note how network spikes and quasi-orbits are typically included inside a single avalanche.Panel B: a zoom over 0.5 seconds of activity, with discretization time-step 0.25 ms, illustrates avalanches structure (red line).bifurcation line, the two types of events contribute differently to the overall distribution of network event sizes.Qualitatively, the farther from the bifurcation line, the higher the contribution of the small, "quasi-linear" events.This fact can be understood by noting that the average size of such events is expected to scale as 1/| λ|, where λ is the real part of the dominant eigenvalue of the (stable) linearized dynamics (see Methods, Eq. 17).The av-erage size is furthermore expected to scale with the amount of noise affecting the dynamics, thus the contribution of quasi-linear events is also expected to vanish for larger networks. To take into account the coexistence of the two types of events, we fitted the distribution of burst sizes with the sum of an exponential and a Gaussian distributions (see Methods), which turns out to be acceptably good over wide regions of the excitation and inhibition plane (see Fig. 5, panels A and B).The same fit seems to adequately describe the analogous distributions from experimental data, which show a balanced contribution of the two types of events (see Fig. 5, panels C and D). As mentioned in the introduction, avalanches are cascades of neural activities clustered in time (see Methods for our operational definition; examples of different methods used in the literature to detect avalanches can be found in [7,[29][30][31]).Fig. 4, panel A and panel B, shows an example of the structure of the detected avalanches (red lines) in the model network. We extracted avalanches from simulated data, as well as from experimental data.For simulations, we choose data corresponding to three points in the (w exc , w inh ) plane of Fig. 2, with constant w inh = 1 and increasing w exc , with the rightmost falling exactly over the instability line (black solid line in Fig. 
2).Three experimental data sets were extracted from different periods of a very long recording of spontaneous activity from a neural culture; each data set is a 40-minute recording. In Fig. 6 we show (in log-log scale) the distribution of avalanche sizes for the three simulated networks (top row) and the three experimental (bottom row) data sets (blue dots); red lines are power-law fits [32]. From the panels in the top row we see that the distributions are well fitted, over a range of two orders of magnitude, by power-laws with exponents ranging from about 1.5 to about 2.2, consistent with the results found in [7].Note that in the cited paper the algorithm used for avalanche detection is quite different from ours, and the wide range of power-law exponents is related to their dependence on the time-window used to discretize data.In [33] (adopting yet another algorithm for avalanche detection), both the shape of the avalanche distribution and the exponent vary depending on using pharmacology to manipulate synaptic transmission, over a range compatible with our model findings; notably, they find the slope of the power-law to be increasing with the excitability of the network, which is consistent with our modeling results. Panels B and C of Fig. 6 A broad spectrum of synchronous network events: simulations vs ex-vivo data.Distribution of event sizes for simulations (panel A, corresponding to the point (w exc , w inh ) = (0.86, 0.9) in Fig. 2; panel B, (w exc , w inh ) = (0.86, 0.75)) and two ex-vivo datasets (panel C and D).Note the logarithmic scale on the y-axis.The network of panel A has higher total inhibition and lies farther from the oscillatory instability line than the network of panel B. Theoretical analysis leads to recognize in the broad spectrum of events two families, network spikes and quasi-orbits, that differ in behavior both with respect to noise and in terms of how close the network is to an oscillatory instability; the continuous lines are fits of the theoretical distribution of event sizes, a sum of an exponential (for quasi-orbits) and a Gaussian (for NS) distribution (see Methods).The vertical lines mark the probabilistic threshold separating NS and quasi-orbits. Avalanche Size p(AS) Panels A-C: mean-field simulations, with fixed inhibition w inh = 1.and increasing excitation (w exc = 0.9, 0.94, 1).The distributions are well fitted by power-laws; panel B and C clearly show the buildup of 'bumps' in the high-size tails, reflecting the increasing contribution from network spikes and quasi-orbits in that region of the distribution.Panels D-F from ex-vivo data, different ∼ 40-minute segments from one long recording; power-laws are again observed, although fitted exponents cover a smaller range; in panels E and F , bumps are visible, similar to model findings.The similarity between the theoretical and experimental distributions could reflect changes of excitatory/inhibitory balance in time in the experimental preparation.Since all the three simulations lay on the left of or just on the bifurcation line (black line in Fig. 2), the shown results are compatible with the experimental network operating in a slightly sub-critical regime. high-size tails, increasing with the self-excitation of the network; this is understood as reflecting the predominance of a contribution from NS and possibly quasi-orbits in that region of the distribution, on top of a persisting wide spectrum of avalanches.Also this feature is consistent with the findings of [33]. 
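For reference, the kind of maximum-likelihood power-law fit used for these avalanche-size distributions (in the spirit of ref. [32]) can be sketched as follows. This is an illustrative continuous-approximation version with a fixed lower cutoff x_min, rather than the full procedure of [32], which also selects x_min by minimizing the Kolmogorov-Smirnov distance.

```python
# Illustrative maximum-likelihood power-law fit, continuous approximation.
import numpy as np

def fit_powerlaw(sizes, x_min):
    """Return the MLE exponent alpha (and its standard error) for p(x) ~ x^-alpha, x >= x_min."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= x_min]
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / x_min))
    err = (alpha - 1.0) / np.sqrt(n)
    return alpha, err

# Example on synthetic power-law data with alpha = 1.8 (inverse-CDF sampling).
rng = np.random.default_rng(1)
u = rng.random(20000)
x_min = 1.0
samples = x_min * (1 - u) ** (-1.0 / 0.8)
print(fit_powerlaw(samples, x_min))     # ~ (1.8, small standard error)
```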
Turning to the plots in the bottom row of Fig. 6, we observe the following features: power-laws are again observed over two decades and more; in panels E and F , bumps are visible, similar to model findings; power-law exponents cover a smaller range just above 2. While the sequence of plots in two rows (modeling and experiment) clearly shows similar features, we emphasize that experimental data were extracted from a unique long recording, with no intervening pharmacological manipulations affecting synaptic transmission; on the other hand, it has been suggested [34] that a dynamic modulation of the excitatory/inhibitory balance can indeed be observed in long recordings; although our model would be inherently unable to capture such effects, it is tempting to interpret the suggestive similarity between the theoretical and experimental distributions in Fig. 6 as a manifestation of such changes of excitatory/inhibitory balance in time, of which the theoretical distributions would be a 'static' analog.If this interpretation is correct, this means that the experimental preparation operates below, and close, to an oscillatory instability; on the other hand, contrary to NS, the appearance of avalanches does not seem to be exquisitely related to a Hopf bifurcation, rather they seem to generically reflect the nonlinear amplification of spontaneous fluctuations around an almost unstable fixed point -a related point will be mentioned in the next section.We also remark that we obtain power-law distributed avalanches in a (noisy) mean-field rate model, by definition lacking any spatial structure; while the latter could well determine specific (possibly repeating) patterns of activations (as observed in [17]), it is here suggested to be not necessary for power-law distributed avalanches. Inferring the time-scales The fatigue mechanism at work (STD in our case) is a key element of the transient network events, in its interplay with the excitability of the system.While the latter can be manipulated through pharmacology, STD itself (or spike frequency adaptation, another -neural -fatigue mechanism) cannot be directly modulated.It is therefore interesting to explore ways to infer relevant properties of such fatigue mechanisms from the experimentally accessible information, i.e. the firing activity of the network.We focus in the following on deriving the effective (activity-dependent) time scale of STD from the sampled firing history. The starting point is the expectation that the fatigue level just before a NS should affect the strength of the subsequent NS.We therefore measured the correlation between r (fraction of available synaptic resources) and the total number of spikes emitted during the NS (NS size) from simulations.We found that the average value of r just before a NS is an effective predictor of the NS size, the more so as the excitability of the network grows. 
Based on the r-NS size correlation, we took the above "experimental" point of view, that only the firing activity ν is directly observable, while r is not experimentally accessible.Furthermore, the success of the linear analysis for the inter-NS interval statistics (due to the NS being a low-threshold very non-linear phenomenon), suggests that without assuming a specific form for the dynamics of the fatigue variable f , we may tentatively adopt for it a generic linear integrator form, of which we want infer the characteristic time-scale τ To do this, first we reconstruct f (t) from ν(t) for a given τ * ; then we set up an optimization procedure to estimate τ * optim , based on the maximization of the (negative) f -NS size correlation (a strategy inspired by a similar principle was adopted in [11]).Fig. 7, panel A, shows an illustrative example of how the correlation peaks around the optimal value.As a reference, the dotted line marks the value below which 95% of the correlations computed from surrogate data fall; surrogate data are obtained by shuffling the values of f at the beginning of each NS. We remark that in this analysis we use both NS and quasi-orbit events (which are both related to the proximity to a Hopf bifurcation.)This is reasonable since we expect to gain more information about the anti-correlation between f and NS size by including both types of large network events. For each point of the excitation-inhibition plane, we performed such an optimization procedure.We expect the optimal values τ * optim to depend on the average activity ν of the network for the different points.Indeed, when the fatigue variable follows the Tsodyks-Markram model of STD (which of course was actually the case in the simulations), linearizing the dynamics of r around a fixed point r ( r = 1/(1 + u STD ν τ STD )), r behaves as a simple linear integrator with a time-constant: 19) and the size of the immediately subsequent network spike plotted against the time-scale τ * of the low-pass integrator (continuous line).The correlation presents a clear (negative) peak for an 'optimal' value τ * optim = 0.58 s of the low-pass integrator; such value is interpreted as the effective time-scale of the putative slow self-inhibitory mechanism underlying the statistics of network events -in this case, short-term synaptic depression (STD); as a reference, the dotted line marks the value computed for surrogate data (see text).Panel B: for each point in the (w exc , w inh )-plane (see Fig. 2), τ * optim vs average network activity; the continuous line is the best fit of the theoretical expectation for STD's effective time-scale (Eq.20); the fitted values for the STD parameters τ STD and u STD are consistent with the actual values used in simulation (τ STD = 0.8 s, u STD = 0.2). that depends both on u STD and ν .The optimal τ * values across the excitation-inhibition plane against ν are plotted in Fig. 7, panel B (dots).The solid line is the best fit of τ STD and u STD from Eq. 20, which are consistent with the actual values used in simulations. This result is suggestive of the possibility of estimating from experiments the time-scale of an otherwise inaccessible fatigue variable, by modeling it as a generic linear integrator, with a "state dependent" time-constant. Fig. 8 shows the outcome of the same inference procedure for two segments of experimental recordings.The plot in panel A is qualitatively similar to panel A in Fig. 
7: although the peak is broader and the maximum correlation (in absolute value) is smaller, the τ * peak is clearly identified and statistically significant (with respect to surrogates, dotted line), thus suggesting a dominant time scale for the putative underlying, unobserved fatigue process.However, Fig. 8, panel B, clearly shows two significant peaks in the correlation plot; it would be natural to interpret this as two fatigue processes, with time scales differing by an order of magnitude, simultaneously active in the considered recording segment.Correlation between low-pass filtered network activity f (see Eq. 19) and the size of the immediately subsequent network spike plotted against the timescale τ * of the low-pass integrator for two experimental datasets (different periods -about 40 minutes each -in a long recording).The plot in panel A is qualitatively similar to the simulation result shown in panel A of Fig. 7: a peak, although broader and of smaller maximum (absolute) value, is clearly identified and statistically significant (with respect to surrogate data, dotted line).Panel B shows two significant peaks in the correlation plot, a possible signature of two concurrently active fatigue processes, with time scales differing by roughly an order of magnitude. To test the plausibility of this interpretation, we simulated networks with simultaneously active STD and spike-frequency adaptation (SFA, see Methods).Fig. 9 shows the results of time scale inference for two cases sharing the same time scale for STD (800 ms) and time scale of SFA differing by a factor of 2 (τ SFA = 15 and 30 s respectively).In both cases the negative correlation peaks at around τ * 500 ms; this peak is plausibly related to the characteristic time of STD, consistently with Fig. 7.The peaks at higher τ * s, found respectively at 11 and 18 s, roughly preserve the ratio of the corresponding τ SFA values. This analysis provides preliminary support to the above interpretation τ SFA = 30 s.In both cases the correlation presents a STD-related peak at around τ * 500 ms (τ STD = 800 ms), consistently with Fig. 7.The peaks at higher τ * s, found respectively at 11 and 18 s, roughly preserve the ratio of the corresponding τ SFA values. of the double peak in Fig. 8, right panel, in terms of two coexisting fatigue processes with different time scales.We also checked to what extent the avalanche sizes were influenced by the immediately preceding amount of available synaptic resources r, and we found weak or no correlations; this further supports the interpretation offered at the end on the previous section, that avalanches are a genuine manifestation of the network excitability which amplifies a wide spectrum of small fluctuations. 
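A minimal implementation of the inference procedure can be sketched as follows (our own illustrative code: Eq. 19 is assumed to be a leaky integrator of the firing rate, and the array names `nu`, `ns_onset_idx` and `ns_sizes` are placeholders for the rate trace and the output of the event-detection step). The rate is low-pass filtered for a range of trial time constants, the filtered value just before each network event is correlated with the event size, and the time constant giving the most negative correlation is retained.

```python
# Illustrative sketch of the slow time-scale inference from the observed firing rate.
import numpy as np

def lowpass(nu, dt, tau):
    """Integrate df/dt = -f/tau + nu(t) with forward Euler (dt assumed << tau)."""
    f = np.zeros(len(nu))
    for i in range(1, len(nu)):
        f[i] = f[i - 1] + dt * (-f[i - 1] / tau + nu[i - 1])
    return f

def infer_timescale(nu, dt, ns_onset_idx, ns_sizes, taus):
    ns_onset_idx = np.asarray(ns_onset_idx)
    corrs = []
    for tau in taus:
        f = lowpass(nu, dt, tau)
        f_pre = f[np.maximum(ns_onset_idx - 1, 0)]   # fatigue proxy just before each event
        corrs.append(np.corrcoef(f_pre, ns_sizes)[0, 1])
    corrs = np.array(corrs)
    return taus[np.argmin(corrs)], corrs             # most negative correlation

# Example usage (with whatever nu, ns_onset_idx, ns_sizes the detection step produced):
# taus = np.logspace(-1.5, 1.5, 40)                  # ~30 ms to ~30 s
# tau_opt, corrs = infer_timescale(nu, 0.25e-3, ns_onset_idx, ns_sizes, taus)
```

A surrogate control, analogous to the one described above, is obtained by shuffling the pre-event values of f across events before computing the correlation.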
Discussion Several works recently advocated a key role of specific network connectivity topologies in generating 'critical' neural dynamics as manifested in powerlaw distributions of avalanches size and duration (see [18,35]).Also, it has been suggested that 'leader neurons', or selected coalitions of neurons, play a pivotal role in the onset of network events (see e.g.[19,36,37]).While a role of network topology, or heterogeneity in neurons' excitability, is all to be expected, we set out to investigate what repertoire of network events is accessible to a network with the simplest, randomly sparse, connectivity, over a wide range of excitation-inhibition balance, in the presence of STD as an activity-dependent self-inhibition.In the present work we showed that network spikes, avalanches and also large fluctuations we termed 'quasi-orbits' coexist in such networks, with various relative weights and statistical features depending on the excitation-inhibition balance, which we explored extensively, including the role of finite-size noise (irregular synchronous regimes in balanced excitatory-inhibitory networks has been studied in [29]).We remark in passing that the occurrence of quasi-orbits is exquisitely related to the proximity to a Hopf bifurcation for the firing rate dynamics; on the other hand, the occurrence of NS and, presumably, avalanches, does not necessarily require this condition: for instance, NS can occur in the proximity of a saddlenode bifurcation, where the low-high-low activity transitions derive from the existence of two fixed points, the upper one getting destabilized by the fatigue mechanism (see e.g.[38,39]).We also remark that, with respect to the power-law distribution of avalanches, it is now widely recognized that while criticality implies power-law distributions, the converse is not true, which leaves open the problem of understanding what is actually in operation in the neural systems observed experimentally (for a general discussion on the issues involved, see [40]).In the present work, we do not commit ourselves to the issue of whether avalanches could be considered as evidence of Self-Organized Criticality. On the methodological side, in the present work, we contribute a unified probabilistic model for detection of NS and avalanches, which we believe makes the study of their co-existence easier to develop and more consistent.The more so, given that the very definition of avalanches and the consequent detection algorithms vary significantly across published papers on the subject, which makes it difficult to compose the reported results of different studies in a global picture. Besides, under an assumption which, in the simulation tests, turned out to be a posteriori a fairly good one, we assumed the (experimentally not accessible) fatigue mechanism to be described by a generic linear integrator, but with a characteristic time scale dependent on the average network activity, and in this way, based only on the observed firing rate, through a simple optimization process we could provide an estimate of the characteristic time of the fatigue mechanism.We demonstrated a preliminary application on experimental data from ex-vivo cultured networks, which gave encouraging results. Figure 1 .Figure 2 . Figure 1.Time course of the network firing rate.Panel A: noisy mean-field simulations; panel B: ex-vivo data.Random large excursions of the firing rate (network spikes and quasi-orbits) are clearly visible in both cases. Figure 3 . Figure 3. 
Figure 3. Stability analysis of the linearized dynamics captures most of the variability in the inter-network-spike interval (INSI) statistics. INSI (panel A) and CV INSI (panel B) vs the real part λ of the dominant eigenvalue of the Jacobian of the linearized dynamics, for two networks that are pointwise identical in the excitation-inhibition plane, except for their size (circles: 200 neurons, as in Fig. 2; squares: 8000 neurons). The data points almost collapse on 1-D curves when plotted as functions of λ, leading effectively to a "quasi one-dimensional" representation of the INSI statistics in the (w_exc, w_inh)-plane. The region in which the INSIs are neither regular (CV INSI ∼ 0) nor completely random (CV INSI ∼ 1), as typically observed in experimental data, shrinks for larger networks. The crosses superimposed on the data points mark a null imaginary part of λ.

Figure 5. A broad spectrum of synchronous network events: simulations vs ex-vivo data. Distribution of event sizes for simulations (panel A, corresponding to the point (w_exc, w_inh) = (0.86, 0.9) in Fig. 2; panel B, (w_exc, w_inh) = (0.86, 0.75)) and two ex-vivo datasets (panels C and D). Note the logarithmic scale on the y-axis. The network of panel A has higher total inhibition and lies farther from the oscillatory instability line than the network of panel B. Theoretical analysis leads to recognizing in the broad spectrum of events two families, network spikes and quasi-orbits, that differ in behavior both with respect to noise and in terms of how close the network is to an oscillatory instability; the continuous lines are fits of the theoretical distribution of event sizes, a sum of an exponential (for quasi-orbits) and a Gaussian (for NS) distribution (see Methods). The vertical lines mark the probabilistic threshold separating NS and quasi-orbits.

Figure 6. Avalanche size distribution: simulations vs ex-vivo data. Panels A-C: mean-field simulations, with fixed inhibition w_inh = 1 and increasing excitation (w_exc = 0.9, 0.94, 1). The distributions are well fitted by power-laws; panels B and C clearly show the buildup of 'bumps' in the high-size tails, reflecting the increasing contribution from network spikes and quasi-orbits in that region of the distribution. Panels D-F: ex-vivo data, different ∼40-minute segments from one long recording; power-laws are again observed, although the fitted exponents cover a smaller range; in panels E and F, bumps are visible, similar to model findings. The similarity between the theoretical and experimental distributions could reflect changes of excitatory/inhibitory balance in time in the experimental preparation. Since all three simulations lie on the left of or just on the bifurcation line (black line in Fig. 2), the shown results are compatible with the experimental network operating in a slightly sub-critical regime.

Figure 7.
Slow time-scales inference procedure: test on simulation data.Panel A: correlation between low-pass filtered network activity f (see Eq.19) and the size of the immediately subsequent network spike plotted against the time-scale τ * of the low-pass integrator (continuous line).The correlation presents a clear (negative) peak for an 'optimal' value τ * optim = 0.58 s of the low-pass integrator; such value is interpreted as the effective time-scale of the putative slow self-inhibitory mechanism underlying the statistics of network events -in this case, short-term synaptic depression (STD); as a reference, the dotted line marks the value computed for surrogate data (see text).Panel B: for each point in the (w exc , w inh )-plane (see Fig.2), τ * optim vs average network activity; the continuous line is the best fit of the theoretical expectation for STD's effective time-scale (Eq.20); the fitted values for the STD parameters τ STD and u STD are consistent with the actual values used in simulation (τ STD = 0.8 s, u STD = 0.2). Figure 8 . Figure 8. Slow time-scales inference procedure on ex-vivo data.Correlation between low-pass filtered network activity f (see Eq.19) and the size of the immediately subsequent network spike plotted against the timescale τ * of the low-pass integrator for two experimental datasets (different periods -about 40 minutes each -in a long recording).The plot in panel A is qualitatively similar to the simulation result shown in panel A of Fig.7: a peak, although broader and of smaller maximum (absolute) value, is clearly identified and statistically significant (with respect to surrogate data, dotted line).Panel B shows two significant peaks in the correlation plot, a possible signature of two concurrently active fatigue processes, with time scales differing by roughly an order of magnitude. Figure 9 . Figure 9. Slow time-scales inference procedure on simulation data with STD and spike-frequency adaptation.Correlation between lowpass filtered network activity f (see Eq.19) and the size of the immediately subsequent network spike plotted against the time-scale τ * of the low-pass integrator.In this case, the mean-field model includes, besides short-term depression (STD), a mechanism mimicking spike-frequency adaptation.Panel A: spike-frequency adaptation with characteristic time τ SFA = 15 s.Panel B: τ SFA = 30 s.In both cases the correlation presents a STD-related peak at around τ * 500 ms (τ STD = 800 ms), consistently with Fig.7.The peaks at higher τ * s, found respectively at 11 and 18 s, roughly preserve the ratio of the corresponding τ SFA values.
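The captions above repeatedly compare the correlation curve against a reference value computed on surrogate data (dotted lines). The exact surrogate construction used by the authors is not reproduced here; a common and simple choice, assumed in the sketch below, is to destroy the pairing between pre-spike filtered activity (`pre`, as in the earlier snippet) and network-spike size by random permutation, and to take a lower percentile of the resulting null distribution of correlations as the significance reference.

```python
import numpy as np

def surrogate_correlation(pre, sizes, n_surr=1000, percentile=5, seed=0):
    """Null distribution of the correlation obtained by shuffling the association
    between pre-spike filtered activity and NS sizes; returns the chosen percentile."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_surr)
    for k in range(n_surr):
        null[k] = np.corrcoef(pre, rng.permutation(sizes))[0, 1]
    return np.percentile(null, percentile)

# A measured correlation more negative than this reference would be deemed
# significant in the sense used in the figures above (dotted lines).
```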
2015-11-26T10:10:07.000Z
2015-02-18T00:00:00.000
{ "year": 2015, "sha1": "f0558252762ecadb0feb5d6fbad3ee2905b35884", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1004547&type=printable", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "c2e4cdccd57183c3dba6b0e333f10de1a61dba12", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Biology", "Medicine" ] }
1232743
pes2o/s2orc
v3-fos-license
Estimation of Saturation of Permanent-Magnet Synchronous Motors Through an Energy-Based Model

We propose a parametric model of the saturated Permanent-Magnet Synchronous Motor (PMSM) together with an estimation method for the magnetic parameters. The model is based on an energy function which simply encompasses the saturation effects. Injection of fast-varying pulsating voltages and measurement of the resulting current ripples then permit the magnetic parameters to be identified by linear least squares. Experimental results on a surface-mounted PMSM and an interior magnet PMSM illustrate the relevance of the approach.

I. INTRODUCTION

Sensorless control of Permanent-Magnet Synchronous Motors (PMSM) at low velocity remains a challenging task. Most of the existing control algorithms rely on the motor saliency, both geometric and saturation-induced, for extracting the rotor position from the current measurements through high-frequency signal injection [1], [2]. However, some magnetic saturation effects such as cross-coupling and permanent magnet demagnetization can introduce large errors in the rotor position estimation [3], [4]. These errors decrease the performance of the controller. In some cases they may cancel the total rotor saliency and lead to instability. It is thus important to correctly model the magnetic saturation effects, which is usually done through d-q magnetizing curves (flux versus current). These curves are usually found either by finite element analysis (FEA) or experimentally by integration of the voltage equation [5], [6]. This provides a good way to characterize the saturation effects and can be used to improve the sensorless control of the PMSM [7], [8]. However, neither FEA nor integration of the voltage equation is easy to implement, and neither provides an explicit model of the saturated PMSM. In this paper a simple parametric model of the saturated PMSM is introduced (section II); it is based on an energy function [9], [10] which simply encompasses the saturation and cross-magnetization effects. In section III a simple estimation method of the magnetic parameters is proposed and rigorously justified: fast-varying pulsating voltages are impressed on the motor with the rotor locked; they create current ripples from which the magnetic parameters are estimated by linear least squares. In section IV experimental results on two kinds of motors (with surface-mounted and interior magnets) illustrate the relevance of the approach.

II. AN ENERGY-BASED MODEL FOR THE SATURATED PMSM

A. Energy-based model

The electrical subsystem of a two-axis PMSM expressed in the synchronous d-q frame reads

  dφ_d/dt = u_d − R i_d + (dθ/dt) φ_q    (1)
  dφ_q/dt = u_q − R i_q − (dθ/dt)(φ_d + φ_m)    (2)

where φ_d, φ_m are the direct-axis flux linkages due to the current excitation and to the permanent magnet, and φ_q is the quadrature-axis flux linkage; u_d, u_q are the impressed voltages and i_d, i_q are the currents; θ is the rotor (electrical) position and R is the stator resistance. The currents can be expressed as functions of the flux linkages thanks to a suitable energy function H(φ_d, φ_q) by

  i_d = ∂_1 H(φ_d, φ_q)    (3)
  i_q = ∂_2 H(φ_d, φ_q)    (4)

where ∂_k H denotes the partial derivative w.r.t. the k-th variable, see [9], [10]; without loss of generality H(0, 0) = 0. For an unsaturated PMSM this energy function reads

  H_l(φ_d, φ_q) = φ_d²/(2L_d) + φ_q²/(2L_q)

where L_d and L_q are the motor self-inductances, and we recover the usual linear relations i_d = φ_d/L_d and i_q = φ_q/L_q. Notice the expression for H should respect the symmetry of the PMSM w.r.t. the direct axis, i.e.

  H(φ_d, −φ_q) = H(φ_d, φ_q),    (5)

which is obviously the case for H_l.
Indeed, (1)-(2) is left unchanged by the transformation (φ_q, i_q, u_q, θ) → (−φ_q, −i_q, −u_q, −θ), which by (3)-(4) translates into symmetry relations on the partial derivatives of H. Integrating these relations yields H(φ_d, −φ_q) − H(φ_d, φ_q) = c_d(φ_q) = c_q(φ_d), where c_d, c_q are functions of only one variable. But this makes sense only if c_d(φ_q) = c_q(φ_d) = c with c constant. Since H(0, 0) = 0, c = 0, which yields (5).

B. Parametric description of magnetic saturation

Magnetic saturation can be accounted for by considering a more complicated magnetic energy function H, having H_l for quadratic part but including also higher-order terms. From experiments, saturation effects are well captured by considering only third- and fourth-order terms, hence

  H(φ_d, φ_q) = H_l(φ_d, φ_q) + Σ_{i+j=3,4} α_{i,j} φ_d^i φ_q^j.

This is a perturbative model where the higher-order terms appear as corrections of the dominant term H_l. The 9 coefficients α_{i,j} together with L_d, L_q are motor dependent. But (5) implies α_{2,1} = α_{0,3} = α_{3,1} = α_{1,3} = 0, so that the energy function eventually reads

  H(φ_d, φ_q) = φ_d²/(2L_d) + φ_q²/(2L_q) + α_{3,0} φ_d³ + α_{1,2} φ_d φ_q² + α_{4,0} φ_d⁴ + α_{2,2} φ_d² φ_q² + α_{0,4} φ_q⁴.    (6)

Using (3)-(4) and (6), the currents are then explicitly given by

  i_d = φ_d/L_d + 3α_{3,0} φ_d² + α_{1,2} φ_q² + 4α_{4,0} φ_d³ + 2α_{2,2} φ_d φ_q²    (7)
  i_q = φ_q/L_q + 2α_{1,2} φ_d φ_q + 2α_{2,2} φ_d² φ_q + 4α_{0,4} φ_q³,    (8)

which are the flux-current magnetization curves. Fig. 1 shows examples of these curves in the more familiar presentation of fluxes w.r.t. currents, obtained by numerically inverting (3)-(4); the motor is the IPM of section IV. The model of the saturated PMSM is thus given by (1)-(2) and (7)-(8). It is in state form with φ_d, φ_q as state variables. The magnetic saturation effects are represented by the 5 additional parameters α_{3,0}, α_{1,2}, α_{4,0}, α_{2,2}, α_{0,4}.

C. Model with i_d, i_q as state variables

The model of the saturated PMSM is often expressed with i_d, i_q as state variables, e.g. [5]. Starting with flux-current magnetization curves expressing φ_d and φ_q as functions of (i_d, i_q) — relations (9)-(10) — and differentiating w.r.t. time, (1)-(2) then becomes a model in the currents involving a matrix of incremental inductances, among which the cross-coupling terms L_dq and L_qd. Though not always acknowledged, L_dq and L_qd should be equal. Indeed, plugging (3)-(4) into (9)-(10) and taking the total derivative of both sides of the resulting equations w.r.t. φ_d and φ_q yields two matrix relations; since ∂_12 H = ∂_21 H the second matrix in the last line is symmetric, hence so is the first; in other words, L_dq = L_qd. To do that with the model of section II-B the nonlinear equations (7)-(8) must be inverted. Rather than doing that exactly, we take advantage of the fact that the coefficients α_{i,j} are experimentally small. At first order w.r.t. the α_{i,j} we obviously have φ_d = L_d i_d + O(|α_{i,j}|) and φ_q = L_q i_q + O(|α_{i,j}|). Plugging these expressions into (7)-(8) we easily find the inverse magnetization curves and, finally, the saturated model with i_d, i_q as state variables at first order w.r.t. the α_{i,j}.

A. Principle

To estimate the 7 magnetic parameters in the model, we propose a procedure which is rather easy to implement: with the rotor locked, pulsating voltages of the form u_d = ū_d + ũ_d f(Ωt), u_q = ū_q + ũ_q f(Ωt) are impressed on the motor, where ū_d, ū_q, ũ_d, ũ_q, Ω are constant and f is a periodic function with zero mean. The pulsation Ω is chosen large enough w.r.t. the motor electric time constant. It can then be shown, see section III-C, that after an initial transient the currents consist of a constant part plus a small ripple proportional to F(Ωt), where ī_d = ū_d/R, ī_q = ū_q/R, the ripple amplitudes ĩ_d, ĩ_q are constant, and F is the primitive of f with zero mean (F clearly has the same period as f); Fig. 2 shows for instance the current i_d obtained for the SPM of section IV when starting from i_d(0) = 0 and applying a square signal with Ω = 500 Hz, ū_d = 23 V and ũ_d = 30 V. On the other hand, using the saturation model, the amplitudes ĩ_d, ĩ_q of the current ripples turn out to be explicit expressions of the operating point that are linear in the magnetic parameters. As ĩ_d, ĩ_q can easily be measured experimentally, these expressions provide a means to identify the magnetic parameters from experimental data obtained with various values of ū_d, ū_q, ũ_d, ũ_q.

C. Justification of section III-A

The assertions of section III-A can be rigorously justified by a straightforward application of second-order averaging of differential equations [11, p. 40].
Indeed the electrical subsystem (1)-(2) with locked rotor (i.e. dθ/dt = 0) and input voltages (13)-(14) reads, when setting τ = Ωt, as a system in the so-called standard form for averaging, with a right-hand side periodic in τ and 1/Ω as a small parameter. Therefore its solution is well approximated by the solution φ_d^0(τ), φ_q^0(τ) of the averaged system, obtained by averaging the right-hand side of (25)-(26). After an initial transient φ_d^0(τ), φ_q^0(τ) asymptotically reach the constant value (φ_d, φ_q) determined by ū_d = R i_d(φ_d, φ_q) and ū_q = R i_q(φ_d, φ_q).

A. Experimental setup

The methodology of section III is tested on an interior magnet PMSM (IPM) and a surface-mounted PMSM (SPM) with rated parameters listed below. The setup consists of an industrial inverter with a 400 V DC bus and a 4 kHz PWM switching frequency, 3 dSpace boards (DS1005 PPC Board, DS2003 A/D Board, DS4002 Timing and Digital I/O Board) and a host PC. The measurements were also sampled at 4 kHz.

B. Experimental results

With the rotor locked in the position θ = 0, a square wave voltage with frequency Ω = 500 Hz and constant amplitude ũ_d or ũ_q (30 V for the IPM, 40 V for the SPM) is applied to the motor. Except for the determination of L_d, L_q, where ū_d = ū_q = 0, several runs are performed with various ū_d (resp. ū_q) such that ī_d (resp. ī_q) ranges from −2 A to +2 A with a 0.3 A increment (IPM), or from −8 A to 8 A with a 0.5 A increment (SPM). The estimated parameters are listed below; the uncertainty in the estimation stems from a ±10 mA uncertainty in the current measurements. The model was then checked over the whole operating range (|i| = √(i_d² + i_q²) ranging from 0 A to 2 A with a 0.3 A increment for the IPM, and from 0 A to 5.5 A with a 0.5 A increment for the SPM). Fig. 5 shows for instance the results for a 60° current angle; there is good agreement between the measured values and those predicted by the model. As a kind of cross-validation we also examined the current time responses to large voltage steps. Fig. 6 shows the good agreement between the measurements and the time response obtained by simulating the model with the estimated parameters; it also shows the differences with the simulated response when the saturation effects are omitted. Fig. 7 also shows good agreement between the "measured" flux values (i.e. obtained by integrating the measured currents and voltages) and the flux values obtained by simulation.

V. CONCLUSION

A simple parametric magnetic saturation model for the PMSM, together with a simple identification procedure based on high-frequency voltage injection, has been introduced. Experimental tests on two kinds of PMSM (IPM and SPM) demonstrate the relevance of the approach. This model can be fruitfully used to design a sensorless control scheme at low velocity.
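The model and the injection-based experiment described in sections II-IV can be illustrated numerically. The sketch below is not the authors' code and all parameter values are placeholders: it implements the flux-to-current map (7)-(8), inverts it with a root finder to obtain magnetization curves of the kind shown in Fig. 1, and then integrates the locked-rotor dynamics dφ/dt = u − R i(φ) under a square-wave voltage injection to exhibit the current ripple that the least-squares identification exploits.

```python
import numpy as np
from scipy.optimize import fsolve

# Placeholder parameters (illustrative only, not the values identified in the paper)
R = 1.5                                   # stator resistance [ohm]
Ld, Lq = 3.0e-3, 4.5e-3                   # self-inductances [H]
a30, a12, a40, a22, a04 = 50.0, 80.0, 2e3, 3e3, 1e3   # saturation coefficients
Omega = 2 * np.pi * 500.0                 # injection pulsation (500 Hz)
u_bar_d, u_tilde_d = 23.0, 30.0           # constant part and square-wave amplitude [V]

def currents(pd, pq):
    """Flux-to-current map i = dH/dphi for the saturated energy function (6)."""
    i_d = pd/Ld + 3*a30*pd**2 + a12*pq**2 + 4*a40*pd**3 + 2*a22*pd*pq**2
    i_q = pq/Lq + 2*a12*pd*pq + 2*a22*pd**2*pq + 4*a04*pq**3
    return i_d, i_q

def fluxes(i_d, i_q):
    """Numerically invert the magnetization curves (cf. Fig. 1)."""
    res = lambda p: np.array(currents(p[0], p[1])) - np.array([i_d, i_q])
    return fsolve(res, x0=[Ld * i_d, Lq * i_q])        # linear model as initial guess

def injection_run(T=0.2, dt=1e-6):
    """Forward-Euler integration of dphi/dt = u - R i(phi), locked rotor (dtheta/dt = 0)."""
    phi_d = phi_q = 0.0
    t = np.arange(int(T / dt)) * dt
    i_hist = np.empty_like(t)
    for k, tk in enumerate(t):
        i_d, i_q = currents(phi_d, phi_q)
        u_d = u_bar_d + u_tilde_d * np.sign(np.sin(Omega * tk))   # zero-mean square wave
        phi_d += dt * (u_d - R * i_d)
        phi_q += dt * (0.0 - R * i_q)
        i_hist[k] = i_d
    return t, i_hist

# Saturation bending of the q-axis magnetization curve (cf. Fig. 1)
for iq in (0.0, 2.0, 4.0, 6.0, 8.0):
    print("i_q =", iq, "A  ->  phi_q =", fluxes(0.0, iq)[1], "Wb")

# Current ripple under pulsating-voltage injection (cf. Fig. 2)
t, i_d = injection_run()
steady = i_d[t > 0.15]                     # discard the initial transient
print("mean i_d:", steady.mean(), "A; ripple (peak-to-peak):", np.ptp(steady), "A")
```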
2011-03-15T07:37:40.000Z
2011-03-15T00:00:00.000
{ "year": 2011, "sha1": "a5ed4c29d78bcbb4c6caab6910e1dc672624ba11", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1103.2923", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "79ea8a19f99717810b94c1b66987041e87de094a", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Mathematics", "Computer Science", "Physics" ] }
231885817
pes2o/s2orc
v3-fos-license
Increased Mesenchymal Stem Cell Functionalization in Three-Dimensional Manufacturing Settings for Enhanced Therapeutic Applications

Mesenchymal stem/stromal cell (MSC) exist within their in vivo niches as part of heterogeneous cell populations, exhibiting variable stemness potential and supportive functionalities. Conventional extensive 2D in vitro MSC expansion, aimed at obtaining clinically relevant therapeutic cell numbers, results in detrimental effects on both cellular characteristics (e.g., phenotypic changes and senescence) and functions (e.g., differentiation capacity and immunomodulatory effects). These deleterious effects, added to the inherent inter-donor variability, negatively affect the standardization and reproducibility of MSC therapeutic potential. The resulting manufacturing challenges that drive the qualitative variability of MSC-based products is evident in various clinical trials where MSC therapeutic efficacy is moderate or, in some cases, totally insufficient. To circumvent these limitations, various in vitro/ex vivo techniques have been applied to manufacturing protocols to induce specific features, attributes, and functions in expanding cells. Exposure to inflammatory cues (cell priming) is one of them, however, with untoward effects such as transient expression of HLA-DR preventing allogeneic therapeutic schemes. MSC functionalization can be also achieved by in vitro 3D culturing techniques, in an effort to more closely recapitulate the in vivo MSC niche. The resulting spheroid structures provide spatial cell organization with increased cell–cell interactions, stable, or even enhanced phenotypic profiles, and increased trophic and immunomodulatory functionalities. In that context, MSC 3D spheroids have shown enhanced “medicinal signaling” activities and increased homing and survival capacities upon transplantation in vivo. Importantly, MSC spheroids have been applied in various preclinical animal models including wound healing, bone and osteochondral defects, and cardiovascular diseases showing safety and efficacy in vivo. Therefore, the incorporation of 3D MSC culturing approach into cell-based therapy would significantly impact the field, as more reproducible clinical outcomes may be achieved without requiring ex vivo stimulatory regimes. In the present review, we discuss the MSC functionalization in 3D settings and how this strategy can contribute to an improved MSC-based product for safer and more effective therapeutic applications.
As reviewed in Kouroupis et al. (2017), MSC therapeutic usage in vivo in both autologous and allogeneic settings is safe due to their immunoevasive characteristics, and therefore, even multiple infusions of allogeneic MSC do not elicit a strong immune response that can lead to rejection progression (Koç et al., 2002; Aggarwal and Pittenger, 2005; Ringden et al., 2006; Le Blanc et al., 2008; Pittenger et al., 2019). Over the past 30 years, the safety profile of MSC has been clearly demonstrated in clinical trials to treat multiple clinical indications, with efficacy starting to produce encouraging results in some of them. To date, more than 10,000 patients have been treated as part of clinical trials, with 188 phase 1 or phase 2 trials completed and 10 trials advanced to phase 3 (www.clinicaltrials.gov). However, to obtain clinically relevant cell numbers, therapeutic protocols usually require extensive MSC in vitro 2D expansion, resulting in MSC products with limited stem cell potency and, as a result in some cases, only moderate or inconsistent effectiveness in treating various clinical indications. Also, according to previous studies, MSC isolated from different tissue sources demonstrate similar, but not identical, functional capacity (Guilak et al., 2010; Moretti et al., 2010; Hass et al., 2011). Efficacy and reproducibility of MSC therapies are not only affected by the composition of the cell preparation but also by the functionality of the infused MSC to consistently home and engraft within dysregulated tissues, and subsequently to predictably exert their therapeutic effects by inducing and/or modifying specific host responses. To circumvent these limitations, various in vitro/ex vivo techniques have been applied to manufacturing protocols to induce specific features, attributes, and functions in expanding cells. On this basis, MSC functionalization can be achieved by in vitro 3D culturing techniques, in an effort to more closely recapitulate the in vivo 3D MSC niche and therefore preserve or enhance cellular phenotypes that result in improved in vivo therapeutics.

MSC SPHEROID FORMATION AND STRUCTURE

Adult MSC possess a remarkable ability to coalesce and assemble in tri-dimensional (3D) structures, reminiscent of their innate aggregation as limb cell precursors in the mesenchymal condensation during early skeletogenesis.
In that context, 3D organoid formation in vitro closely recapitulates the in vivo MSC niche by providing spatial cell organization with increased cell-cell interactions. According to the differential adhesion hypothesis that was first introduced in the 1960s, the cell movement and cell aggregation phenomena present in self-assembly processes are driven by differential cadherin expression levels and guided by the reduction of adhesive-free energy as cells tend to maximize their mutual binding (Foty and Steinberg, 2005). In general, cell aggregation and subsequent multicellular spheroid formation processes involve three phases ( Figure 1A). Initially, cells form loose aggregates via the tight binding of extracellular matrix arginine-glycine-aspartate (RGD) motifs with membrane-bound integrin. As a result of the increased cellcell interactions, cadherin gene expression levels are upregulated, whereas cadherin protein is accumulated on the cell membrane. During the later phase, homophilic cadherin-to-cadherin binding induce the formation of compact cell spheroids from cell aggregates. The extracellular matrix proteins and cadherin type FIGURE 1 | Mesenchymal stem/stromal cell (MSC) spheroids formation process and structure. (A) Cell aggregation and spheroid formation involving three phases. Initially, cells form loose aggregates via the tight binding of extracellular matrix arginine-glycine-aspartate (RGD) motifs with membrane-bound integrin. Due to increased cell-cell interactions, cadherin gene expression levels are upregulated and cadherin protein is accumulated on the cell membrane. In the later phase, homophilic cadherin-to-cadherin binding induce the formation of compact cell spheroids from cell aggregates. (B) Methylcellulose-based technique can be used to generate viable MSC spheroids on low-attachment gas-permeable plates (left panel). Generated MSC spheroids show stable immunophenotypic profile by expressing high levels of the pericytic marker CD146 (green) and MSC-related marker CD90 (red) (middle panel) (unpublished data). Structurally, based on their size and abundance of nutrients and oxygen in vitro, MSC spheroids can be divided into zones (outer and inner). The nutrients, oxygen, and waste concentration gradients within the spheroids should be always taken into consideration when selecting the optimal technique to generate spheroids in vitro in order to achieve increased spheroid functionality in in vivo settings (right panel). and concentration are variable between different cell types, whereas other intercellular proteins such as connexin, pannexins, and actin cytoskeleton filaments play crucial roles in cell-cell interactions and subsequent multicellular cell spheroid formation (reviewed in Cui et al., 2017). Structurally, based on their size and abundance of nutrients and oxygen in vitro, most multicellular spheroids can be divided into three zones (Mueller-Klieser, 1984;Alvarez-Pérez et al., 2005;Curcio et al., 2007; Figure 1B). The outer asynchronously proliferative zone contains cells with intact nuclei that are proliferative with active metabolism. The intermediate zone contains cells with shrunk nuclei that are in quiescent state possessing minimum metabolic activities. Usually depending on the spheroid size, the inner necrotic zone contains cells with disintegrated nuclei that are senescent/apoptotic due to limited nutrients and oxygen influx (hypoxia) in the spheroid core. 
The inner necrotic zone is formed as the diffusion limitation of most molecules in spheroids is 150-200 µm, and as a result, metabolic wastes are gradually accumulating within the spheroid core. Additionally, Curcio et al. (2007) indicated that aggregates of 200-µm diameter or greater show severe oxygen limitation in the most part of their dimensions, and Alvarez-Pérez et al. (2005) related drastic intra-spheroidal pH alterations to spheroid size, with spheroids of 600-µm diameter or greater showing acidic necrotic core. Based on these findings, a three-part spheroid zonation is highly dependent on cell aggregation size and microenvironment conditions, whereas a 200-µm diameter can be putatively considered a reliable size threshold for limited/diminished inner necrotic core zone formation. Therefore, the nutrients, oxygen, and waste concentration gradients within the spheroids should be always taken into consideration when selecting the optimal technique to generate spheroids in vitro in order to achieve increased spheroid functionality in in vivo settings. The organization of MSC in 3D spheroids result in altered cell morphology, cytoskeleton rearrangement, and polarization due to the cell-cell and cell-extracellular matrix interactions within the spheroid structure. Additionally, 3D cultures account for the established reduction in size of individual MSC (about 0.25-0.5 the volume of an average 2D cultured cell) (Bartosh et al., 2010). Specifically, studies showed that individual MSC strain is increased within the spheroid structure and equally dispersed in all cell dimensions (a Young's Modulus of approximately 60 Pascal), whereas overall MSC tension is greater in the outer zone compared with the inner zone of spheroids. These tension differences affect MSC morphology and polarization resulting in a more flattened morphology and high integrin expression for outer zone MSC and a more irregular morphology with high cadherin expression for the inner zone MSC (Baraniak et al., 2012;Sart et al., 2014). On this basis, Lee et al. (2012) indicated E-cadherin as the main calcium-dependent adhesion molecule that plays a crucial role in MSC spheroid formation in vitro. During spheroid formation, E-cadherin activation and cell-cell interactions regulate the proliferative and paracrine activity of MSC via the ERK/AKT signaling pathway . Importantly, studies showed that cadherins, and especially N-cadherin and OB-cadherin, are both affecting the proliferation, migration, and differentiation potential of 2D MSC cultures (Theisen et al., 2007;Xu et al., 2013). Of note, cadherin levels may be important in mediating MSC anti-inflammatory actions as reports indicated that they are crucial in the response of synovial fibroblasts to inflammation (Agarwal and Brenner, 2006;Chang et al., 2011). To this end, engineered cadherin surfaces and engineered surface microtopology have been generated to control differentiation, and cell-to-cell adhesion and signaling of 2D cultured MSC in vitro (reviewed in Alimperti and Andreadis, 2015). However, the inherent increased cadherin levels upon MSC spheroid formation can be directly related to increased MSC spheroid functionality in vitro and in vivo, offering an advantage over 2D MSC cultures. Interestingly, studies showed that mild hypoxia present within the inner zones of MSC spheroids positively affect MSC survival and secretory capacity. 
Moreover, spheroid hypoxic microenvironment upregulate the expression of hypoxiaadaptive molecules (such as CXCL12 and HIF-1α), inhibit MSC apoptosis, and increase the secretion of angiogenic and antiapoptotic molecules including HGF, VEGF, and FGF-2 compared to 2D MSC cultures (Bhang et al., 2011). Specifically, studies showed that MSC spheroids embedded in fibrin gel secrete up to 100-fold more VEGF compared with dissociated MSC in fibrin gel (Murphy et al., 2014). Except these molecules, the angiogenic trophic enhancement is produced via the upregulation of other key angiogenic factors such as angiogenin (ANG) and angiopoietin 2 (ANGPT-2; Potapova et al., 2007;Potapova et al., 2008;Yeh et al., 2014). However, Murphy et al. (2017) reported that even though cellular metabolism decreased significantly with higher cell numbers and resultant spheroid sizes, oxygen tension show a gradient that vary less than 10% from the outer zone to the inner core even for spheroids with diameters up to 353 ± 18 µm. This indicates that increased MSC functionality within the spheroid is not oxygen gradient driven but due to increased ECM production and autocrine signaling. Overall, the advantages and disadvantages of MSC functionalization in 3D spheroids are described in Table 1. METHODS AND BIOMATERIALS USED TO GENERATE MSC SPHEROIDS EX VIVO Lately, standardization of MSC manufacturing has been extensively evaluated in order to translate in vitro and in vivo preclinical research into safe and effective therapeutic products. Toward this goal, the large-scale clinical-grade generation of SCAFFOLD-FREE MESENCHYMAL STEM/STROMAL CELL SPHEROID CULTURE PLATFORMS Mesenchymal stem/stromal cell spheroid culture platforms are usually trivial, rapid, and low-cost methods to generate spheroids in a non-or low-adherent environment that allows the selforganization of cells into suspended spheroids (Figures 2A-D). In the hanging drop technique, MSCs are aggregated by gravitational force but due to the absence of direct contact with solid surfaces, the composition of ECM proteins is the main factor for the regulation of spheroid microenvironment (Foty, 2011). Therefore, the hanging drop technique can generate MSC spheroids of controlled size and number; however, its main limitation is the laborious preparation of the 3D cultures that significantly limits the large-scale production of spheroids for in vivo applications. Using the hanging drop technique, Bartosh et al. indicated a 100-fold upregulation of antiinflammatory (TSG-6) and anti-tumorigenic (IL-24 and TRAIL) genes compared to 2D MSC cultures (Bartosh et al., 2010). In addition, the hanging drop technique results in higher expression of stemness markers Oct4, Sox2, and Nanog in MSC spheroids compared to 2D MSC cultures (Lou et al., 2016). Forced aggregation technique (or pellet culture) is also used to generate scaffold-free MSC aggregates by gravitational force that are further induced toward 3D differentiation protocols such as high-density MSC chondrogenic pellet culture (Mackay et al., 1998). A less laborious and more standardized technique is the use of low-attachment surfaces. Similar to the hanging drop technique, spontaneously secreted ECM proteins are regulating the spheroid microenvironment, however, generated spheroids show increased variability in size and morphology (Redondo-Castro et al., 2018b). 
Interestingly, studies showed that MSC spheroids generated on low-attachment surfaces secreted more hypoxia-induced angiogenic cytokines including VEGF, SDF, and HGF, whereas phosphorylation of Akt cell survival signaling was higher and the expression of pro-apoptotic molecules lower in MSC spheroids compared with 2D MSC cultures (Lee et al., 2016). Magnetic levitation can be used to generate MSC spheroids as by diminishing gravitational force, it promotes cell-cell contact and induces cell aggregation in vitro. In detail, cells are mixed with magnetic particles in culture, and cells incorporated with them can levitate due to exogenously applied magnetic field. Although preliminary studies show spheroid formation reproducibility and stable MSC spheroid phenotype, others have reported that abnormal gravity induces classic apoptotic alterations such as cell size reduction and cell membrane blebbing, reduced cell viability, nuclear chromatin condensation and margination, and increased caspase-3/7 activity (Meng et al., 2011). Except the static techniques, various dynamic approaches have been investigated to generate MSC spheroids including spinner flask culture and rotating wall vessel techniques (Figures 2E,F). Spinner culture technique is based on a spinner flask bioreactor system where cells are continuously mixed by stirring, whereas rotating wall vessel technique simulates microgravity by constant circular rotation where cells are continuously in suspension. In a comparative study between dynamic and 2D MSC cultures, Frith et al. indicated that both spinner and rotating wall vessel dynamic cultures can form viable compact MSC spheroids showing altered cell size, altered phenotypic and molecular profiles, and enhanced osteogenic and adipogenic differentiation potential (Frith et al., 2010). Further studies showed that rotating wall vessel microgravity dramatically affect the molecular profile of MSC spheroids by upregulating genes related to adipogenic and downregulating genes related to osteogenic and chondrogenic differentiation potentials (Sheyn et al., 2010). MSC spheroid culturing in microgravity conditions results in reduced osteogenic differentiation due to decreased Collagen I gene expression and subsequent Collagen I/integrin signaling pathway activation (Meyers et al., 2004). Also, microgravity disrupts F-actin stress fibers, increase intracellular lipid accumulation, and significantly reduces RhoA activity (Meyers et al., 2005). Interestingly, others indicated that microgravity has a synergistic effect with chemical induction in stimulation of chondrogenesis mediated by p38 MAPK activation (Yu et al., 2011). The abovementioned advantages of MSC spheroids over 2D MSC cultures make them a great candidate as building blocks for 3D bioprinting. For the large-scale manufacturing of spheroid-based tissue complexes in vitro, various 3D bioprinting techniques have been reported including extrusionbased bioprinting (Jakab et al., 2008;Mironov et al., 2009;Bulanova et al., 2017;Mekhileri et al., 2018), droplet-based bioprinting (Gutzweiler et al., 2017), Kenzan (Moldovan et al., 2017), and biogripper (Blakely et al., 2015;Ip et al., 2016) approaches. Studies showed that homogeneous MSC-derived cartilage spheroids with a mean diameter of 116 ± 2.8 µm can be assembled using extrusion-based bioprinting into viable cartilage constructs with stable phenotype (De Moor et al., 2020). 
Also, MSC-derived adipose spheroids bioprinted into a microtissue showed multilocular microvacuoles and successful differentiation toward mature adipocytes (Colle et al., 2020). However, existing 3D bioprinting techniques involve several limitations related to substantial damage to biological, structural, and mechanical spheroid properties. Recently, Ayan et al. (2020) proposed aspiration-assisted bioprinting as a novel approach for MSC spheroid assembly that causes minimal cellular damage and precisely bioprint a wide range of spheroid dimensions (ranging from 80 to 800 µm). On this basis, authors demonstrated the patterning of angiogenic sprouting spheroids and self-assembly of osteogenic spheroids. Further advancements into bioprinting field would benefit the generation of various types of MSC spheroid-derived microtissues in vitro. SCAFFOLD-BASED MESENCHYMAL STEM/STROMAL CELL SPHEROID CULTURE PLATFORMS In addition to the scaffold-free culture platforms, various scaffold-based MSC spheroid generation approaches have been proposed using both natural and synthetic biomaterials. As mentioned before, MSC spheroids can benefit the in vivo microenvironment primarily by their immunomodulatory and trophic actions, and secondarily (if any) by their direct differentiation toward specialized cells. The latter supports the notion that MSC spheroids should maintain their integrity in order to achieve effective cell replacement in vivo as biodegradation is a key factor in tissue engineering. Therefore, depending on the therapeutic application mode, biomaterial selection except from biological factors (cell adhesion, biocompatibility, etc.) should take into consideration physic-chemical (porosity to support nutrients/oxygen influx, biodegradation, etc.) parameters (Nikolova and Chavali, 2019). On this basis, even though scaffold's topography allows seeded MSC to form a microstructured matrix within the 3D spheroid microenvironment, depending on the treated tissue's nature, scaffold biodegradation rate should be controlled accordingly by the incorporation of chemical components that trigger gradual hydroytic degradation. However, to date, no specific studies have been performed to define if long-term maintenance of MSC spheroid structure is crucial for its therapeutic use. Scaffold-based culture platforms using natural polymers such as agar/agarose, chitosan, and collagen can promote spheroid formation. Agar/agarose non-adherent surfaces have been used to promote MSC aggregation and spheroid formation in vitro (Vorwald et al., 2018). Specifically, chitosan-based substrates result in a more complex spheroid microenvironment compared to scaffold-free methods as the carbohydrate structure of chitosan is similar to the glycosaminoglycans in the ECM (Cui et al., 2017). Chitosan is a polycationic natural biocompatible polysaccharide, whereas the degree of its deacetylation can modulate the cell adhesion and spheroid formation capacity in vitro. On this basis, highly deacetylated chitosan substrate supports strongly the attachment and proliferation of fibroblasts (Seda Tıglı et al., 2007). Interestingly, Yeh et al. showed that MSC spheroid culturing on chitosan membranes results in increased intracellular calcium levels, whereas the calcium binding capacity of chitosan affect the cell-substrate and cell-cell interactions within the MSC spheroid. 
As a result, the chitosan-cultured MSC spheroids show significantly upregulated expression of calcium-, cell adhesion/ migration-, and anti-inflammatory-associated genes compared to 2D MSC on tissue culture polystyrene plates (Yeh et al., 2012(Yeh et al., , 2014. Hsu and Huang showed that Wnt signaling is not only distinct in MSC spheroids compared to 2D MSC cultures but also substrate dependent. MSC spheroids derived on chitosan-activated Wnt3α-mediated canonical Wnt signaling is prone to osteogenesis, whereas MSC spheroids derived on hyaluronan-grafted chitosan activated Wnt5α-mediated non-canonical Wnt signaling that is prone to chondrogenesis (Hsu and Huang, 2013). On this basis, Huang et al. (2011) showed that MSC spheroids generated on chitosan and chitosan-hyaluronan substrates preserve the expression of stemness markers Oct4, Sox2, and Nanog, and increase their chondrogenic differentiation capacity. As autophagy is an important mechanism promoting cell survival, a study showed that MSC spheroids derived on chitosan respond to environmental stress (H 2 O 2 treatment) by upregulating autophagy-related markers in a calcium-dependent manner (Yang et al., 2015). This effect is important as it may increase the MSC spheroid survival and therapeutic efficacy in in vivo settings. Interestingly, nanomagnetically levitated MSCs cultured as spheroids within type I collagen gels preserve their quiescent phenotype indicated by the expression of STRO-1 and Nestin, whereas in response to co-culture wounding, they are capable of migrating to the wound site and differentiate accordingly (Lewis et al., 2016). Polymers and chemically modified polymers have been extensively investigated for the development of novel biomaterials with good physic-chemical properties and biocompatibility. On this basis, MSC spheroid generation has been performed on various synthesized polymer substrates such as polycaprolactone, micropatterned poly(ethylene glycol), poly(L-glutamic acid)/chitosan, and methylcellulose. In one study, Messina et al. (2017) showed that fibroblast, myoblast, and neural cell spheroids on polymeric membranes possess high biological activity in terms of oxygen uptake, whereas they undergo faster fusion and maturation on polycaprolactone than on agarose substrates. Also, showed improved adipogenic and osteogenic differentiation capacity of MSC spheroids generated on micropatterned poly(ethylene glycol) substrates. Microarray analysis indicated not only the upregulation of genes related to adipogenesis and osteogenesis but also the downregulation of genes related to MSC stemness such as the mesoderm-specific transcript (MEST) and the mesenchymal stem cell specific marker (THY1) . Similarly, Zhang et al. (2015) indicated that MSC spheroids generated on poly(L-glutamic acid)/chitosan substrate show increased chondrogenic differentiation capacity by increased GAGs and COLII, and decreased COLI deposition during in vitro chondrogenic induction. Methylcellulose, an ether derivative of cellulose, which is synthesized by the replacement of hydrogen atoms from hydroxy groups with methyl groups, has been recently used to generate successfully MSC spheroids in vitro. Deynoux et al. (2020) showed that methylcellulose allows MSC spheroid formation within 24 h, which tends to shrink in size partially due to the balance between proliferation and cell death triggered by hypoxia and oxidative stress up to 3 weeks in vitro. Similar to methylcellulose-based technique published by Markou et al. 
(2020), we have generated successfully viable MSC spheroids in a gas-permeable plate system that possess stable phenotypic and molecular profiles, and increased functionality both in vitro and in vivo (Kouroupis et al., 2021). The usage of this system is aimed to ensure uniform oxygenation throughout the MSC spheroid culture, as it is based on previous reports demonstrating that in gas-permeable plates 3D cell structures efficiently receive air from both the top (after diffusion through the medium) and the bottom (after diffusion across permeable membrane) of the culture (Fraker et al., 2007(Fraker et al., , 2013Cechin et al., 2014). These reports show that MSC spheroid generation on synthesized substrates can dramatically affect their stemness and multipotential differentiation capacities in vitro. CULTURE MEDIUM EFFECTS ON MESENCHYMAL STEM/STROMAL CELL SPHEROIDS With the exception of the scaffold-free or scaffold-based culture platforms, reports showed that culture medium composition strongly affect the spheroid formation progression and MSC spheroid functionality in vitro. To date, most studies use fetal bovine serum (FBS)-based media to generate spheroids in vitro. However, safety concerns have been raised regarding FBS usage for the manufacturing of MSC products for clinical applications, most of them related to prion exposure risk, toxicological risk, and immunological risk (Mendicino et al., 2014;Karnieli et al., 2017). Regulatory-complaint xeno-free media such as chemically defined formulations and human platelet lysate (hPL) are promising alternatives to generate clinically relevant cell numbers and to preserve or even enhance the MSC functionality in vitro prior to their in vivo application (Doucet et al., 2005;Centeno et al., 2008;Jung et al., 2010;Kouroupis et al., 2020a,b). On this basis, Ylostalo et al. (2014) showed that MSCs cannot condense into tight spheroids when cultured in several commercial stem cell media and only chemically defined formulation supplemented with human serum albumin (HSA) can result in compact MSC spheroids with high viability and enhanced anti-inflammatory secretory profile. Importantly, MSC spheroids generated with HAS supplementation show increased anti-inflammatory capacity when co-cultured with lipopolysaccharide-stimulated macrophages in vitro (Ylostalo et al., 2014). In contrast, another study indicated that MSC spheroids generated in FBS-based medium show low or no proliferation but increased paracrine secretory profile (PGE2 and IDO), whereas MSC spheroids generated in xeno-free medium show significant proliferative capacity but low paracrine secretory profile (Zimmermann and McDevitt, 2014). Overall, further investigations have to be performed in order to optimize the in vitro culturing conditions for the standardization and reproducibility of MSC spheroid therapeutic potential. Most importantly, challenges still exist related to the generation of clinically relevant cell numbers in 3D cultures and the qualitative assessment of the generated MSC spheroids using conventional methods. Specifically, the less laborious dynamic approaches, such as the spinner flask culture and the rotating wall vessel techniques, offer a viable solution to generate large MSC spheroid numbers; however, novel bioreactor systems are needed to additionally monitor and control all culture environmental variables (temperature, gas exchange, pH, and metabolite levels) (de Bournonville et al., 2019). 
Similar to 2D MSC cultures, qualitative evaluation of MSC spheroids requires their phenotypic protein profiling using fluorescent microscopy and flow cytometry methods. Fluorescent imaging is often laborious for xyz images and represent only a fraction of MSC spheroid cultures, whereas flow cytometry requires the enzymatic/mechanical dissociation of the spheroids to a single cell, usually disrupting important sensitive phenotypic attributes (CD146 immunomodulationrelated marker). Furthermore, comparative preclinical studies are needed to evaluate how different MSC spheroid generation platforms in vitro are affecting the therapeutic outcomes upon their implantation or infusion in vivo. ANTI-INFLAMMATORY PROPERTIES OF MESENCHYMAL STEM/STROMAL CELL SPHEROIDS In MSC spheroid settings, their enhanced anti-inflammatory effects have been mainly attributed to high expression of TGF-β1, IL-6, TSG-6, stanniocalcin (STC-1), and PGE-2 antiinflammatory molecules (Bartosh et al., 2010;Ylöstalo et al., 2012;Zimmermann and McDevitt, 2014; Figure 3). Specifically, Bartosh et al. showed that BM-derived MSC spheroid increased secretion of anti-inflammatory TSG-6 and STC-1 results in reduced TNFα expression and secretion by LPS-stimulated macrophages in MSC spheroid/macrophages co-cultures in vitro. In a mouse zymosan-induced peritonitis model, intraperitoneal injection of 1.5 × 10 6 BM-derived MSC spheroids for a 6-h time-frame resulted in decreased protein content and volume of the lavage fluid, neutrophil activity, and decreased levels of TNFα, IL-1β, CXCL2/MIP-2, and PGE2. Also, MSC spheroid injection significantly decreased the serum levels of plasmin activity, an inflammation-related protease that is inhibited by secreted TSG-6 ( Bartosh et al., 2010). Importantly, in vitro studies showed that BM-derived MSC spheroid conditioned medium affect LPS-stimulated macrophages not only by inhibiting the secretion of pro-inflammatory cytokines TNFα, CXCL2, IL-6, IL12-p40, and IL-23 but also by increasing the secretion of antiinflammatory cytokines IL-10 and IL1-Ra and the expression of M2-polarization CD206 marker. The main anti-inflammatory molecule secreted in the conditioned medium was PGE2, whereas FIGURE 3 | Therapeutic properties of MSC spheroids in vivo. Upon infusion in vivo, MSC spheroid "medicinal signaling" activities are exerted by the paracrine secretion of modulatory mediators that possess immunomodulatory and trophic (i.e., angiogenic, anti-fibrotic, anti-apoptotic, and mitogenic) actions. MSC spheroids have been safely and effectively applied in various preclinical animal models for the treatment of skin wounds, myocardial infarction, vascular injury/ischemia, liver injury, kidney injury, bone and osteochondral defects, and knee synovitis. its production is dependent on caspase activity and NFkB activation in MSC spheroids (Ylöstalo et al., 2012). Upon MSC homing to the target site and depending on the molecular composition of the local microenvironment, they exhibit a therapeutic responsive polarization into either anti-inflammatory (MSC-2) or pro-inflammatory (MSC-1) phenotypes. 
Interestingly, studies showed that except for the abovementioned secreted molecules with antiinflammatory effects, MSC spheroids increase the secretion of pro-inflammatory cytokines (including IL-1α, IL-1β, and IL-8) and chemokines (including CCL2 and CCL7) (Potapova et al., 2007;Bartosh et al., 2010Bartosh et al., , 2013Yeh et al., 2014) that contribute in the inflammatory cell recruitment locally and putatively in the overall inflammatory response of the host. However, Bartosh et al. showed that BM-derived MSC assembly into MSC spheroids triggers the caspase-dependent IL-1 signaling and activates the expression of IL-1 in an autocrine secretion manner, resulting in an "auto-priming" effect (Figure 3). In MSC spheroids, the increased PGE2 secretion was related to activation of both caspase-dependent IL-1 and Notch signaling pathways, whereas TSG-6 and STC-1 secretion was related only to caspase-dependent IL-1 signaling activation (Bartosh et al., 2010). Collectively, MSC priming by paracrine and/or autocrine pro-inflammatory modes is a prerequisite in order to acquire their anti-inflammatory MSC2 phenotype and exert strong anti-inflammatory effects in vivo. As reviewed in Kouroupis et al. (2018), several studies indicate that activation of specific Toll-like receptors (TLRs) in MSC in vitro prior to infusion in vivo has a profound effect on MSC functionalization toward immunomodulatory phenotype. However, Redondo-Castro et al. (2018a) reported that IL-1 stimulation of BM-derived MSC spheroids resulted in significantly increased expression of IL1-Ra, VEGF, and G-CSF molecules without anti-inflammatory effects on LPStreated microglial cells in co-cultures. These discrepancies of the data underline the necessity for optimization of the priming methods and culture conditions. Previous studies showed that MSC immunomodulatory factor secretion is strongly affected by the composition of the culture medium (Zimmermann and McDevitt, 2014). In 2D culture settings, BM-derived and adipose-derived MSC cultured with FBS or hPL showed differences in expression of immunomodulatory and adhesion molecules, with adipose-derived MSC being more potent functionally in inhibiting T-cell proliferation (Menard et al., 2013). Similarly, in two studies, Kouroupis et al. (2020a;2020b) indicated that fat pad-derived (IFP) MSCs when cultured in regulatory-compliant conditions in vitro are superior functionally in Substance P degradation and T-cell proliferation inhibition compared to FBS-grown MSC. In an acute synovitis rat model, IFP-MSC intra-articular injection in vivo reversed more effectively signs of synovitis and IFP fibrosis when they were cultured under regulatorycompliant conditions (Kouroupis et al., 2020a,b). In 3-D settings, MSC spheroids cultured in serum and animal component-free chemically defined medium had less secretion of IDO, PGE2, TGF-β1, and IL-6 immunomodulatory factors compared to the typical MSC cultures supplemented with FBS (Zimmermann and McDevitt, 2014). In order to overcome these hurdles, Ylostalo et al. proposed specific protocols to efficiently prime MSCs in 3-D settings and preserve their robust antiinflammatory properties under chemically defined xeno-free conditions (Ylostalo et al., 2017). Overall, further studies are required to address the effects of pro-inflammatory cytokines and culturing conditions on antiinflammatory properties of MSC spheroids in vitro and in vivo. 
THERAPEUTIC PROPERTIES OF MESENCHYMAL STEM/STROMAL CELL SPHEROIDS IN PRECLINICAL ANIMAL MODELS

Beyond the therapeutic safety that most MSC clinical trials investigate for various clinical disorders, two crucial factors that affect the therapeutic efficiency are MSC homing to target tissues and subsequent MSC survival in vivo. It cannot be overlooked that initial outcomes from many such studies revealed that MSC therapies show a significant degree of variability, with cases of non-reproducible clinical data. The inconsistent evidence potentially relates not only to intrinsic differences in the cell-based products used but also, importantly, to their in vivo fate upon implantation or infusion [parameters affecting MSC functionalization in vitro and in vivo are reviewed in Kouroupis et al. (2018)]. On this basis, a pioneering study showed that 5.0 × 10^5 BM-MSC injected into the left ventricle of the uninjured mouse heart can effectively engraft the myocardium; however, only 0.44% of the MSCs could be identified 4 days after injection (Toma et al., 2002). In addition, Toma et al. showed that 92 ± 7% of intra-arterially injected MSC in rats are entrapped in the microvasculature (Toma et al., 2009). Collectively, even though long-term engraftment seems not to be a prerequisite for MSC reparative effects in vivo, their initial homing and survival is a crucial factor affecting the therapeutic outcomes. In that context, 3D spheroid formation in vitro closely recapitulates the in vivo MSC niche by providing spatial cell organization with increased cell-cell interactions that protect MSC viability and intrinsic properties. For example, in a mouse model of hind limb ischemia, MSC spheroid transplantation improved MSC survival compared to MSC suspension, by suppressing a key apoptotic signaling molecule (Bax) while activating anti-apoptotic signaling (BCL-2; Bhang et al., 2012). These positive effects can also be attributed to improved resistance to oxidative stress-induced apoptosis exerted by hypoxia-induced genes (e.g., VEGF-A, HIF-1α, and MnSOD), elevated by the hypoxic conditions at the spheroid core (Potapova et al., 2007; Zhang et al., 2012).

WOUND HEALING

To date, three separate studies have applied MSC spheroids for wound healing: in a model of diabetic healing-impaired (leptin receptor-deficient) mice (Amos et al., 2009), in chemotherapy-induced oral mucositis (Zhang et al., 2012), and in a rat skin repair model (Hsu and Hsieh, 2015). In a pioneering study, Amos et al. investigated the applicability of MSC spheroids to treat chronic wounds such as diabetic ulcers, which remain a significant health burden for diabetic patients. In detail, full-thickness dermal wounds (approximately 78.5 mm^2 area) were generated in leptin receptor-deficient mice and treated with a total of 350,000 adipose-derived MSC per wound, organized in multiple separate spheroids. Interestingly, over a 12-day time-frame, MSC spheroids resulted in a significantly greater rate of wound closure compared to wounds treated with MSC suspension. This outcome may be attributed to higher expression of ECM genes (tenascin C, Collagen VIα3, and fibronectin) and higher secretion of soluble factors (e.g., HGF) in MSC spheroid compared to MSC suspension cultures in vitro (Amos et al., 2009). Zhang et al. (2012) intravenously infused 1 × 10^6 gingiva-derived MSC spheroids or MSC suspension into a 5-fluorouracil-induced oral mucositis mouse model.
On day 7, results indicated that MSC spheroids can reverse body weight loss and promote the regeneration of damaged epithelial lining of the mucositic mouse tongues. Interestingly, authors reported that MSC spheroids are capable of increased homing/engrafting to mucositic tongues due to their enhanced CXCR4 expression and may potentially transdifferentiate into epithelial cells via mesenchymal-epithelial transition in vivo (Zhang et al., 2012). These data indicate the potential use of MSC spheroids to alleviate the oral mucositis side-effect post-chemotherapy in cancer patients. In another rat skin wound healing model, 1 × 10 5 adipose-derived MSC spheroids or MSC suspension were applied to 15 mm × 15 mm wounds and covered with hyaluronan gel/chitosan sponge to maintain a moist environment. On day 8, results showed that the MSC spheroid group showed faster wound closure and significantly higher ratio of angiogenesis compared with the MSC suspension group. In vivo tracking of fluorescently labeled MSCs showed close localization of MSC spheroids to microvessels, suggesting enhanced angiogenesis through paracrine effects. Moreover, MSC spheroid increased engrafting and angiogenesis effects may be attributed to the high expression of cytokine genes (FGF-1, VEGF, and CCL2) and migration-related genes (CXCR4 and MMP-1) (Hsu and Hsieh, 2015). Collectively, in all cases, MSC spheroids provide better therapeutic efficacy compared with traditional MSC suspension in wound healing. BONE/OSTEOCHONDRAL DEFECTS AND SYNOVITIS Studies showed that bone/osteochondral defects and knee synovitis can be treated by MSC spheroids. In a delicate study, Sekiya's group generated a full-thickness (5 mm × 5 mm wide, 1.5 mm deep) osteochondral defect rabbit model, and defects were treated with different doses of synovium-derived MSC spheroids (containing 2.5 × 10 5 -20 × 10 6 MSC/defect) (Suzuki et al., 2012). Post-implantation MSC spheroids could attach to the osteochondral defects by surface tension, whereas at 12 weeks, MSC spheroids containing 2.5 × 10 6 MSC showed the highest safranin-O-positive area ratio and resulted in regenerated cartilage with thickness similar to the neighboring healthy cartilage. Interestingly, authors reported that MSC spheroids with high cell densities result in failed defect repair and fibrous tissue formation possibly due to cell death and nutrient deprivation effects (Suzuki et al., 2012). In a calvarial bone defect (8 mm wide) rat model, Suenaga et al. treated the rat defects using three different conditions, 3.0 × 10 7 BM-MSC spheroids, β-TCP granules, or BM-MSC spheroids coated with β-TCP granules. Eight weeks post-implantation, MSC spheroids resulted in full-thickness bone formation with evident vascularization. In contrast, the other two groups had only minimal or non-uniform bone formation at the implanted sites, indicating that β-TCP restricts the bone regenerative capacity of MSC spheroids (Suenaga et al., 2015). Recently, Yanagihara et al. (2018) treated 4 mm wide femoral bone defects in rats with 2.4 × 10 6 Runx2-transfected MSC spheroids or Runx2-transfected MSC suspension embedded in collagen scaffolds. On day 35, MSC spheroids showed faster bone regeneration compared with MSC suspension and nontransfected MSC, whereas enhanced MSC spheroid migration to the defect sites was correlated with higher expression levels of migration-related genes CXCR4 and Integrinα2 (Yanagihara et al., 2018). 
Recently, in a mono-iodoacetate acute synovial/IFP inflammation rat model, Kouroupis et al. intraarticularly injected 5.0 × 10 5 infrapatellar fat pad MSC (IFP-MSC) spheroids. Twenty-five days post-infusion, IFP-MSC spheroids effectively degraded Substance P and resolved inflammation and fibrosis of synovial membrane and fat pad tissues in the rat knee. Interestingly, IFP-MSC intraarticular injection not only results in anti-inflammatory and anti-fibrotic effects but also showed strong anabolic/cartilage protective effects. Specifically, in the IFP-MSC spheroid cohort, cartilage integrity was preserved intact up to 28 days (Kouroupis et al., 2021). To conclude, MSC spheroids exert anti-inflammatory/anti-fibrotic effects and are effective for promoting both bone and osteochondral defect regeneration. MYOCARDIAL INFARCTION Intramyocardial transplantation of MSC spheroids in rat (Wang C.-C. et al., 2009;Lee et al., 2012;Liu et al., 2013) and porcine (Emmert et al., 2013b) myocardial infarction models resulted in greater heart function improvement compared with MSC suspensions. In an acute myocardial infarction rat model, Wang C.-C. et al. (2009) performed intramyocardial injection of 5.0 × 10 5 BM-derived MSC spheroids or MSC suspension and evaluated the echocardiography and catheterization measurements 4, 8, and 12 weeks post-operatively. The results showed superior heart function and stimulation of significant increase in vascular density for the MSC spheroid group (Wang C.-C. et al., 2009). In a delicate study, in vivo tracking of Dil-labeled UC-derived MSC spheroids showed that they can be differentiated into endothelial and cardiomyocyte cells at 4 weeks post-intramyocardial injection in a rat myocardial infarction model. At 7 weeks, the therapeutic efficacy of UCderived MSC spheroids is superior to MSC suspension in post-infarction left ventricular remodeling . Importantly, Liu et al. (2013) showed that adipose-derived MSC spheroids generated on chitosan membranes show a 20-fold increase in cardiac marker gene expression (Gata4, Nkx2-5, Myh6, and Tnnt2) compared with MSC suspension cultures. In a similar approach, intramyocardial injection of 1 × 10 7 adipose-derived MSC spheroids in a rat myocardial infarction model showed better functional recovery compared with MSC suspensions after 12 weeks (Liu et al., 2013). Interestingly, a previous study indicated that intramyocardial injection of MSC spheroids consisting of adipose-derived MSC/human umbilical vein endothelial cells results in low arrhythmogenic potential but no further beneficial effects compared to the untreated group in a rat myocardial infarction model (Kolettis et al., 2018). In a larger animal model study, adipose-derived MSC were first labeled with micron-sized iron oxide particles, and then 2 × 10 7 MSC spheroids or MSC suspension were intra-myocardial injected in the porcineinfarcted myocardium. Moreover, the MSC spheroid engrafted successfully in 88.8% of animals keeping intact their micro architecture in vivo, whereas no arrhythmogenic, embolic, or neurological events occurred in the treated groups for up to 5 weeks follow-up (Emmert et al., 2013b). Therefore, preclinical studies established the feasibility, safety, and beneficial effects of intra-myocardial injected MSC spheroids in infarcted myocardium. NEOVASCULARIZATION AND ISCHEMIA In conjunction with the beneficial trophic effects of MSC spheroids toward infarcted myocardium, their applicability has been also investigated for neovascularization in vivo. 
In a mouse hind limb ischemia model, intramuscular injection of 1.0 × 10⁷ cord-blood MSC spheroids significantly increased the number of microvessels and αSMA-positive vessels, resulting in decreased fibrosis in the ischemic region, and attenuated limb loss and necrosis. In comparison, the MSC spheroid group showed a limb salvage rate of 75%, whereas the MSC suspension group resulted in a limb salvage rate of only 12.5% (Bhang et al., 2012). Additionally, Lee et al. (2016) showed that intramuscularly injected adipose-derived MSC spheroids showed better proliferation than MSC suspension in the ischemic region, an effect that can be attributed to an increased expression of the proliferation marker PCNA. Therefore, MSC spheroids promote vascularization through secretion of angiogenic cytokines, preservation of ECM, and regulation of apoptotic signals. LIVER AND KIDNEY DISEASE The potential of MSC spheroids has also been investigated in liver regeneration and kidney injury models. For liver regeneration, two animal models have been tested: hepatectomy and CCl4-induced acute liver failure. In a pioneering study, Liu and Chang (2006) intraperitoneally injected 3 × 10⁷ BM-MSC or hepatocytes, in alginate-polylysine-alginate spheroid or suspension formats, to treat rats subjected to 90% hepatectomy. Up to day 14, in the BM-MSC spheroid, hepatocyte spheroid, and hepatocyte suspension groups, most rats survived (83-100%) and showed increased liver wet weight. Interestingly, these beneficial effects could be attributed to the increased expression in MSC spheroids of the hepatocyte markers cytokeratin 8, cytokeratin 18, albumin, and α-fetoprotein (Liu and Chang, 2006). In an improved approach, 3 × 10⁷ BM-MSC spheroids or MSC suspension were intrasplenically injected to treat rats subjected to 90% hepatectomy. On day 14, the survival rate in the MSC spheroid group was improved by almost 70% compared with the MSC suspension group via the secretion of hepatotrophic factors such as HGF and IL-6 into the liver. Of note, the authors reported that implanted MSC may transdifferentiate into hepatocyte-like cells in vivo and therefore may render the spleen an ectopic functional liver support (Liu and Chang, 2009, 2012). This hypothesis has to be further investigated as MSC differentiation toward an endodermal fate has not been widely established. In a CCl4-induced acute liver failure mouse model, 1 × 10⁶ UC-MSC spheroids or MSC suspension were infused via the tail vein and, at day 2, resulted in liver injury attenuation. Specifically, MSC spheroids could promote IL-6 and IFN-γ secretion but suppress TNF-α serum levels, and therefore significantly reduce tissue necrosis and increase liver regeneration. In a recent study, adipose-derived MSC spheroids have been used to treat an ischemia-reperfusion (I/R)-induced acute kidney injury rat model. Specifically, 2 × 10⁶ MSC spheroids or MSC suspension were directly injected into the kidney cortex, and renal function was investigated for a 14-day follow-up. Results indicated that MSC spheroids are more beneficial to the kidney by reduction of tissue damage, increased vascularization, and amelioration of renal function compared with MSC suspensions. In detail, the MSC spheroid group showed increased levels of VEGF, HGF, and TSG-6 cytokines, and decreased levels of creatinine and blood urea nitrogen in the serum (Xu et al., 2016).
Therefore, in both liver and kidney injury animal models, MSC spheroid paracrine actions result in improved therapeutic effects characterized by reduced tissue necrosis, increased tissue regeneration, and improved organ function. FUTURE CLINICAL PERSPECTIVES To date, only a limited number of comparative preclinical studies have been performed between MSC spheroids and MSC suspension after 2D culture, whereas no clinical trials exist to evaluate the efficacy of MSC spheroids in clinical settings. As a result, there are no specific criteria to define when MSC spheroids would be preferable over MSC suspension to treat various clinical indications. However, it has become increasingly clear that current conventional and extensive 2D MSC culturing methods, similar to the ones used in public and commercial stem cell biobanks, even though they can ensure the generation of clinically relevant cell numbers for in vivo applications, cannot guarantee the preservation of MSC qualitative characteristics and their related high functionality. To circumvent these limitations, the incorporation of 3D MSC culturing approach into cell-based therapy would significantly impact the field, as more reproducible clinical outcomes may be achieved without requiring extensive ex vivo MSC manipulation and MSC stimulatory regimes (reviewed in Kouroupis et al., 2018). Specifically, current data indicate that MSC spheroid cultures with or without the usage of biomaterials not only preserve MSC phenotypic and molecular profiles but also significantly reinforce MSC functionality related to their immunomodulatory, anti-fibrotic, angiogenic, and trophic properties. In addition, as initial MSC homing and survival are crucial factors affecting the therapeutic outcome, 3D spheroid formation closely recapitulates the in vivo MSC niche, protect MSC viability, and works as a "vehicle" for their effective homing to the affected tissues upon implantation in vivo. On this basis, the adaptation of high-throughput regulatory-compliant and reproducible methods for MSC spheroid production would allow their use in clinical settings and contribute to an improved MSC-based product for safer and more effective therapeutic applications. AUTHOR CONTRIBUTIONS Both authors have made substantial contributions to the drafting of the article or revising it critically and to the final approval of the version to be submitted. ACKNOWLEDGMENTS We are in gratitude to the Soffer Family Foundation and the DRI Foundation for their generous funding support.
2021-02-12T14:07:48.325Z
2021-02-12T00:00:00.000
{ "year": 2021, "sha1": "3422086d08ff0b79829d240ddb73ac4e2e6bbafa", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fbioe.2021.621748/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3422086d08ff0b79829d240ddb73ac4e2e6bbafa", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
18677415
pes2o/s2orc
v3-fos-license
Treatments for Biomedical Abnormalities Associated with Autism Spectrum Disorder Recent studies point to the effectiveness of novel treatments that address physiological abnormalities associated with autism spectrum disorder (ASD). This is significant because safe and effective treatments for ASD remain limited. These physiological abnormalities as well as studies addressing treatments of these abnormalities are reviewed in this article. Treatments commonly used to treat mitochondrial disease have been found to improve both core and associated ASD symptoms. Double-blind, placebo-controlled (DBPC) studies have investigated l-carnitine and a multivitamin containing B vitamins, antioxidants, vitamin E, and co-enzyme Q10 while non-blinded studies have investigated ubiquinol. Controlled and uncontrolled studies using folinic acid, a reduced form of folate, have reported marked improvements in core and associated ASD symptoms in some children with ASD and folate related pathway abnormities. Treatments that could address redox metabolism abnormalities include methylcobalamin with and without folinic acid in open-label studies and vitamin C and N-acetyl-l-cysteine in DBPC studies. These studies have reported improved core and associated ASD symptoms with these treatments. Lastly, both open-label and DBPC studies have reported improvements in core and associated ASD symptoms with tetrahydrobiopterin. Overall, these treatments were generally well-tolerated without significant adverse effects for most children, although we review the reported adverse effects in detail. This review provides evidence for potentially safe and effective treatments for core and associated symptoms of ASD that target underlying known physiological abnormalities associated with ASD. Further research is needed to define subgroups of children with ASD in which these treatments may be most effective as well as confirm their efficacy in DBPC, large-scale multicenter studies. BACKGROUND The autism spectrum disorders (ASD) are a group of behaviorally defined neurodevelopmental disorders with lifelong consequences. They are defined by impairments in communication and social interaction along with restrictive and repetitive behaviors (1). The definition of ASD has recently undergone revision. Previously, the Diagnostic Statistical Manual (DSM) Version IV Text Revision divided ASD into several diagnoses including autistic disorder, Asperger syndrome, and pervasive developmental disorder-not otherwise specified. The new revision of the DSM now does not differentiate between these ASD subtypes and considers communication and social impairments together in one symptom class (2). Complicating this change is the fact that over the past several decades, most research has used a framework from the former DSM versions. Autism spectrum disorder has been recently estimated to affect 1 out of 68 individuals in the United States (3) with four times more males than females being affected (4). Over the past two decades, the prevalence of the ASDs has grown dramatically, although the reasons for this increase are continually debated. Despite decades of research on ASD, identification of the causes of and treatments for ASD remain limited. The standard-of-care treatment for ASD is behavioral therapy that requires full-time engagement of a one-on-one therapist typically requiring many years of treatment, and recent reviews have pointed out that controlled studies on commonly used behavior therapies are generally lacking (5). 
The only medical treatments approved by the United States of America Food and Drug Administration for ASD are antipsychotic medications. However, these medications only treat a symptom associated with ASD, irritability, but not any core ASD symptom. In children, these medications can be associated with significant adverse effects, including detrimental changes in body weight as well as triglyceride, cholesterol, and blood glucose concentrations within a short time (6) and they also increase the risk of type 2 diabetes (7). In some studies, the percentage of children experiencing these side effects is quite high. For example, one recent study reported that 87% of ASD children had side effects with risperidone, including drowsiness, weight gain, and rhinorrhea (8). A great majority of ASD research has concentrated on genetic causes of ASD (9) despite the fact that inherited single gene and chromosomal defects are only found in the minority of cases (10). In fact, several recent studies that have conducted genome wide searches for common genetic defects across large samples of ASD children have only identified rare de novo mutations, thereby pointing to acquired mutations and/or mutations secondary to errors in DNA maintenance rather than inherited genetic syndromes (11,12). As research in the field of ASD continues, it is becoming clear that the etiology of most ASD cases involves complicated interactions between genetic predisposition and environmental exposures or triggers. Indeed, a recent study of dizygotic twins estimated that the environment contributes a greater percentage of the risk of developing autistic disorder as compared to genetic factors (13). Another study of over two million children reported that environmental risk factors accounted for approximately 50% of ASD risk (14). Recent reviews have outlined the many environmental factors that are associated with ASD and have described how polymorphisms in specific genes can combine with the environment to cause neurodevelopmental problems (15). Identifying the metabolic or physiological abnormalities associated with ASD is important, as treatments for such abnormalities may be possible. Thus, a better understanding of these abnormalities may allow for the development of novel treatments for children with ASD. Below, the evidence for metabolic abnormalities related to ASD that may be amenable to treatment is discussed along with the evidence of potential treatments for these disorders. Figure 1 provides a summary of the pathways and demonstrates which pathways are targeted by the better studied treatments. In addition, a section on the common adverse effects of these treatments follows the discussion of treatments.
FIGURE 1 | Pathways affected in autism spectrum disorder that are discussed in this article as well as the treatments discussed with their points of action. Pathways are outlined in blue while treatments are outlined in green. Oxidative stress is outlined in red and the red arrows demonstrate how it can negatively influence metabolic pathways. Certain pathways such as the glutathione and tetrahydrobiopterin pathways have an antioxidant effect and a reciprocal relationship with oxidative stress such that they can improve oxidative stress but at the same time oxidative stress has a direct detrimental effect on them. Mitochondrial dysfunction and oxidative stress have mutually negative effects on each other such that oxidative stress causes mitochondrial dysfunction while mitochondrial dysfunction worsens oxidative stress. Dihydrofolate reductase (DHFR) is colored in red since polymorphisms in this gene, which are commonly seen in individuals with autism, have a detrimental effect on the reduction of folic acid such that the entry of folic acid into the folate cycle is decreased. Folinic acid enters the folate cycle without requiring this enzyme. Similarly, the folate receptor alpha can be impaired in individuals with autism by autoantibodies and by mitochondrial dysfunction. In such cases, folinic acid can cross the blood-brain barrier by the reduced folate carrier. Methionine synthase (MS) connects the folate and methylation cycles and requires methylcobalamin as a cofactor.
REVIEW OF TREATABLE CONDITIONS AND THEIR POTENTIAL TREATMENTS MITOCHONDRIAL DYSFUNCTION Recent studies suggested that 30-50% of children with ASD possess biomarkers consistent with mitochondrial dysfunction (31,43) and that the prevalence of abnormal mitochondrial function in immune cells derived from children with ASD is exceedingly high (44,45). Mitochondrial dysfunction has been demonstrated in the postmortem ASD brain (23,30,46-49) and in animal models of ASD (50). Novel types of mitochondrial dysfunction have been described in children with ASD (28,51,52) and in cell lines derived from children with ASD (53,54). Several studies suggest that children with ASD and mitochondrial dysfunction have more severe behavioral and cognitive disabilities compared with children who have ASD but without mitochondrial dysfunction (55-57). Interestingly, a recent review of all of the known published cases of mitochondrial disease and ASD demonstrated that only about 25% had a known genetic mutation that could account for their mitochondrial disease (31). Thus, many treatments that are believed to improve mitochondrial function have been shown to be helpful for some children with ASD. However, none of these studies have specifically selected children with mitochondrial dysfunction or disease to study, so it is difficult to know if individuals with ASD and mitochondrial dysfunction would benefit the most from these treatments or whether these treatments are effective for a wider group of children with ASD. One study did demonstrate that the multivitamin used for treatment resulted in improvements in biomarkers of energy metabolism (as well as oxidative stress), suggesting that the effect of the multivitamin may have been at least partially related to improvements in mitochondrial function (65). Clearly, this is a fertile area for research but there remain several complications that could impede moving forward in a systematic way. For example, given the inconsistency in the prevalence estimates of mitochondrial disease and dysfunction across studies (ranging from about 5-80%), the notion that mitochondrial abnormalities are even associated with ASD is somewhat controversial. This may be, in part, due to the unclear distinction between mitochondrial disease and dysfunction. However, even the lower bound of the prevalence estimate of 5% is significant, as mitochondrial disease is only believed to affect <0.1% of individuals in the general population and, given the current high prevalence of ASD, a disorder that affects even 5% of individuals with ASD would add up to millions of individuals who have the potential to have a treatable metabolic abnormality.
Other complicating factors include the fact that there are many treatments for mitochondrial disease and these treatments have not been well-studied (76). Hopefully, the increased interest in treatments for mitochondrial disease will help improve our knowledge of how to best treat mitochondrial disease so that such information can be applied to children who have mitochondrial disease and dysfunction with ASD. Other recent approaches include the in vitro assessment of compounds that may improve mitochondrial function in individuals with ASD (53). FOLATE METABOLISM Several lines of evidence point to abnormalities in folate metabolism in ASD. Several genetic polymorphisms in key enzymes in the folate pathway have been associated with ASD. These abnormalities can cause decreased production of 5-methyltetrahydrofolate, impair the production of folate cycle metabolites and decrease folate transport across the blood-brain barrier and into neurons. Indeed, genetic polymorphisms in methylenetetrahydrofolate reductase (22,(77)(78)(79)(80)(81)(82)(83)(84)(85), dihydrofolate reductase (86) and the reduced folate carrier (22) have been associated with ASD. Perhaps the most significant abnormalities in folate metabolism associated with ASD are autoantibodies to the folate receptor alpha (FRα). Folate is transported across the blood-brain barrier by an energy-dependent receptor-mediated system that utilizes the FRα (87). Autoantibodies can bind to the FRα and greatly impair its function. These autoantibodies have been linked to cerebral folate deficiency (CFD). Many cases of CFD carry a diagnosis of ASD (88)(89)(90)(91)(92)(93)(94) and other individuals with CFD are diagnosed with Rett syndrome, a disorder closely related to ASD within the pervasive developmental disorder spectrum (95)(96)(97). Given that the FRα folate transport system is energy-dependent and consumes ATP, it is not surprising that a wide variety of mitochondrial diseases (91,94,(97)(98)(99)(100)(101)(102) and novel forms of mitochondrial dysfunction related to ASD (52) have been associated with CFD. Recently, Frye et al. (17) reported that 60% and 44% of 93 children with ASD were positive for the blocking and binding FRα autoantibody, respectively. This high rate of FRα autoantibody positivity was confirmed by Ramaekers et al. (103) who compared 75 ASD children to 30 non-autistic controls with developmental delay. The blocking FRα autoantibody was positive in 47% of children with ASD but in only 3% of the control children. Many children with ASD and CFD have marked improvements in clinical status when treated with folinic acid -a reduced form of folate that can cross the blood-brain barrier using the reduced folate carrier rather than the FRα transport system. Several case reports (89) and case series (90, 91) have described neurological, behavioral, and cognitive improvements in children with documented CFD and ASD. One case series of five children with CFD and low-functioning autism with neurological deficits found complete recovery from ASD symptoms with the use of folinic acid in one child and substantial improvements in communication in two other children (90). 
In another study of 23 children with low-functioning regressive ASD and CFD, two younger children demonstrated full recovery from ASD and neurological symptoms, three older children demonstrated improvements in neurological deficits but not in ASD symptoms, and the remainder demonstrated improvements in neurological symptoms and partial improvements in some ASD symptoms with folinic acid; the most prominent improvement was in communication (91). Recently, in a controlled open-label study, Frye et al. (17) demonstrated that ASD children who were positive for at least one of the FRα autoantibodies experienced significant improvements in verbal communication, receptive and expressive language, attention, and stereotypical behavior with high-dose (2 mg/kg/day in two divided doses; maximum 50 mg/day) folinic acid treatment with very few adverse effects reported. Thus, there are several lines of converging evidence suggesting that abnormalities in folate metabolism are associated with ASD. Evidence for treatment of these disorders is somewhat limited but it is growing. For example, treatment studies have mostly concentrated on the subset of children with ASD who also possess the FRα autoantibodies. These studies have only examined one form of reduced folate, folinic acid, and have only examined treatment response in limited studies. Thus, large DBPC studies would be very helpful for documenting efficacy of this potentially safe and effective treatment. In addition, the role of other abnormalities in the folate pathway besides FRα autoantibodies, such as genetic polymorphisms, in treatment response needs to be investigated. It might also be important to investigate the role of treatment with other forms of folate besides folinic acid, but it might also be wise to concentrate research on one particular form of folate for the time being so as to optimize the generalizability of research studies in order to have a more solid understanding of the role of folate metabolism in ASD. Given the ubiquitous role of folate in many metabolic pathways and the fact that it has a role in preventing ASD during the preconception and prenatal periods (104), this line of research has significant potential for being a novel treatment for many children with ASD. REDOX METABOLISM Several lines of evidence support the notion that some children with ASD have abnormal redox metabolism. Two case-control studies have reported that redox metabolism in children with ASD is abnormal compared to unaffected control children (22,105). This includes a significant decrease in reduced glutathione (GSH), the major intracellular antioxidant and mechanism for detoxification, as well as a significant increase in the oxidized disulfide form of glutathione (GSSG). The notion that abnormal glutathione metabolism could lead to oxidative damage is consistent with studies which demonstrate oxidative damage to proteins and DNA in peripheral blood mononuclear cells and postmortem brain from ASD individuals (23,30,106), particularly in cortical regions associated with speech, emotion, and social behavior (30,107). Treatments for oxidative stress have been shown to be of benefit for children with ASD. In children with ASD, studies have demonstrated that glutathione metabolism can be improved with subcutaneously injected methylcobalamin and oral folinic acid (69,105), a vitamin and mineral supplement that includes antioxidants, co-enzyme Q10, and B vitamins (65), and tetrahydrobiopterin (20).
Interestingly, recent DBPC studies have demonstrated that N -acetyl-l-cysteine, a supplement that provides a precursor to glutathione, was effective in improving symptoms and behaviors associated with ASD (72, 73). However, glutathione was not measured in these two studies. Several other treatments that have antioxidant properties (66), including carnosine (75), have also been reported to significantly improve ASD behaviors, suggesting that treatment of oxidative stress could be beneficial for children with ASD. Many antioxidants can also help improve mitochondrial function (31), suggesting that clinical improvements with antioxidants may occur through a reduction of oxidative stress and/or an improvement in mitochondrial function. These studies suggest that treatments that address oxidative stress may improve core and associated symptoms of ASD. Furthermore, these treatments are generally regarded as safe with a low prevalence of adverse effects. Unfortunately many studies that have looked at antioxidants and treatments that potentially support the redox pathway did not use biomarkers to measure redox metabolism status in the participants or the effect of treatment on redox pathways. Including biomarkers in future studies could provide important information regarding which patients may respond to treatments that address redox metabolism and can help identify the most effective treatments. Since there are many treatments used to address oxidative stress and redox metabolism abnormalities in clinical practice and in research studies, the most effective treatments need to be carefully studied in DBPC studies to document their efficacy and effectiveness. Overall, the treatments discussed above have shown some promising results and deserve further study. TETRAHYDROBIOPTERIN METABOLISM Tetrahydrobiopterin (BH 4 ) is a naturally occurring molecule that is an essential cofactor for several critical metabolic pathways, including those responsible for the production of monoamine neurotransmitters, the breakdown of phenylalanine, and the production of nitric oxide (19). BH 4 is readily oxidized by reactive species, leading it to be destroyed in the disorders where oxidative stress is prominent such as ASD (18). Abnormalities in several BH 4 related metabolic pathways or in the products of these pathways have been noted in some individuals with ASD, and the cerebrospinal fluid concentration of BH 4 has been reported to be depressed in some individuals with ASD (19). Clinical trials conducted over the past 25 years have reported encouraging results using sapropterin, a synthetic form of BH 4 , to treat children with ASD (19). Three controlled (109)(110)(111) and several open-label trials have documented improvements in communication, cognitive ability, adaptability, social abilities, and verbal expression with sapropterin treatment in ASD, especially in children younger than 5 years of age and in those who are relatively higher functioning at the beginning of the trial (19). Frye has shown that the ratio of serum citrulline-to-methionine is related to the BH 4 concentration in the cerebrospinal fluid, suggesting that abnormalities in both oxidative stress and nitric oxide metabolism may be related to central BH 4 deficiency (18). More recently, Frye et al. demonstrated, in an open-label study, that sapropterin treatment improves redox metabolism and fundamentally alters BH 4 metabolism in children with ASD. 
Interestingly, serum biomarkers of nitric oxide metabolism were found to predict response to sapropterin treatment in children with ASD (20), thereby suggesting that the therapeutic effect of BH4 supplementation may be specific to its effect on nitric oxide metabolism. The potential positive effects on nitric oxide metabolism by BH4 supplementation could be significant for several reasons. The literature supports an association between ASD and abnormalities in nitric oxide metabolism. Indeed, studies have documented alterations in nitric oxide synthase genes in children with ASD (112,113). In the context of low BH4 concentrations, nitric oxide synthase produces peroxynitrite, an unstable reactive nitrogen species that can result in oxidative cellular damage. Indeed, nitrotyrosine, a biomarker of reactive nitrogen species, has been shown to be increased in multiple tissues in children with ASD, including the brain (22,23,107,114,115). Thus, BH4 supplementation could help stabilize nitric oxide synthase as well as act as an antioxidant and improve monoamine neurotransmitter production. Further DBPC studies using biomarkers of metabolic pathways related to BH4 metabolism will be needed to determine which children with ASD will most benefit from formulations of BH4 supplementation like sapropterin. POTENTIAL ADVERSE EFFECTS Although many of the treatments discussed within this manuscript are considered safe and are generally well-tolerated, it is important to understand that these treatments are not without potential adverse effects. In general, these treatments are without serious adverse effects but some children may not tolerate all treatments well. Systematic and controlled studies are best at providing data on adverse effects, so the true adverse effects of the supplements discussed will only be based on the limited treatments that have been studied in such a fashion. It is also important to understand that because of the complicated nature of the effects of these treatments, they should only be used under the care of a medical professional with appropriate expertise and experience. Controlled studies for treatments that address mitochondrial disorders include l-carnitine and a multivitamin with various mitochondrial supplements. In one small DBPC study, there were no significant adverse events reported in the 16 children treated with l-carnitine (59) while a second small DBPC trial reported no differences between the adverse effects reported by the treatment and placebo groups; notably, more patients in the placebo group withdrew from the study because of adverse effects (58). Thus, there are no data to suggest that l-carnitine has any significant adverse effects. In the large DBPC multivitamin study, about equal numbers of children in the treatment and placebo groups withdrew from the study because of behavior or gastrointestinal issues (65). In another small DBPC study, the investigators noted that two children began to have nausea and emesis when they started receiving the treatment at nighttime on an empty stomach (64). This adverse effect resolved when the timing of the treatment was adjusted. Thus, with proper dosing of this multivitamin, it appears rather safe and well-tolerated. Controlled studies for folate pathway abnormalities only include folinic acid.
In a medium-sized, open-label controlled study, 44 children with ASD and the FRα autoantibody were treated with high-dose folinic acid (2 mg/kg/day in two divided doses; maximum 50 mg/day) and four children discontinued the treatment because of an adverse effect (17). Of the four children who discontinued the treatment, three children, all being concurrently treated with risperidone, demonstrated increased irritability soon after starting the high-dose folinic acid while the other child experienced increased insomnia and gastroesophageal reflux after 6 weeks of treatment. Since there was no placebo in this study, the significance of these adverse effects is difficult to determine. For example, it is not clear whether this was related to concurrent risperidone treatment or was related to a baseline high irritability resulting in the needed for risperidone. All other participants completed the trial without significant adverse effects. Due to the timing of the adverse events in the children on risperidone in this trial, to be safe, the authors suggested caution when using folinic acid in children already on antipsychotic medications. Clinical studies for treatments that could address redox metabolism include N -acetyl-l-cysteine, methylcobalamin, methylcobalamin combined with oral folinic acid and a multivitamin (as previous mentioned). One small open-label study that provided 25-30 µg/kg/day (1500 µg/day maximum) of methylcobalamin to 13 patients found no adverse effects (68) while a medium-sized, open-label trial that provided 75 µg/kg subcutaneously injected methylcobalamin given every 3 days along with twice daily oral low-dose (800 µg/day) folinic acid to 44 children noted some mild adverse effects (69,70). Four children discontinued the treatment, two because their parents were uncomfortable given injections and two because of hyperactivity and reduced sleep. The most common adverse effect in the participants that remained in the study was hyperactivity, which resolved with a decrease in the folinic acid to 400 µg/day. Lastly, two medium sized, DBPC studies examined N -acetyl-l-cysteine, one as a primary treatment and another as an add-on to risperidone. The trial that used N -acetyl-l-cysteine as a primary treatment noted no significant differences in adverse events between the treatment and placebo groups, although both groups demonstrated a high rate of gastrointestinal symptoms and one participant in the active treatment phase required termination due to increased agitation (72). In the add-on study, one patient in the active treatment group withdrew due to severe sedation (73). In this latter study, adverse effects were not compared statistically between groups, but most adverse effects were mild and had a low prevalence. Such adverse effects included constipation, increased appetite, fatigue, nervousness, and daytime drowsiness. Lastly, a small DPBC study using vitamin C did not report any adverse effects from the treatment (67). Thus, there are several relatively safe and well-tolerated treatments for addressing abnormal redox metabolism, but there does appear to be a low rate of adverse effects, reinforcing the notion that a medical professional should guide treatment. Three DBPC studies, one small (110), one medium (111), and one medium-to-large (109) sized, were conducted using sapropterin as a treatment for ASD. 
None of these studies have reported a higher prevalence of adverse effects in the treatment group as compared to the placebo group and none of these studies attributed any dropouts to the treatment. Thus, sapropterin appears to be a well-tolerated treatment. DISCUSSION One advantage of the treatments outlined above is that the physiological mechanisms that they address are known and biomarkers are available to identify children who may respond to these treatments. Preliminary studies suggest that there are a substantial number of ASD children with these metabolic abnormalities. For example, mitochondrial abnormalities may be seen in 5-80% of children with ASD (31, 43-45, 53, 54) and FRα autoantibodies may be found in 47% (103) to 75% (17) of children with ASD. Clearly, further studies will be required to clarify the percentage of these subgroups. Further large-scale, multicenter DBPC clinical trials are needed for these promising treatments in order to document their efficacy and define the subgroups that best respond to these treatments. As more treatable disorders are documented and as data accumulate to demonstrate the efficacy of treatments for these disorders, clinical algorithms to approach the work-up for a child with ASD need to be developed by a consensus of experts. Indeed, developing such guidelines will be the next step for applying many of these scientific findings. Clearly, many children with ASD may be able to benefit from such treatments, which are focused on improving dysfunctional physiology. Given the fact that no approved medical treatment exists which addresses the underlying pathophysiology or core symptoms of ASD, these treatments could make a substantial difference in the lives of children with ASD and their families. With the high prevalence of ASD, treatments that successfully treat even only a fraction of children affected with ASD would translate into substantial benefits for millions of individuals with ASD and their families. In summary, it appears that many of these treatments may provide benefit for a substantial proportion of children with ASD.
2016-06-17T21:40:06.966Z
2014-06-27T00:00:00.000
{ "year": 2014, "sha1": "94f5428264eb3920ac3694619caff13687dae05b", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fped.2014.00066/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7b6aca5208003ebb30ea3668e894b4754287f9d3", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
255641511
pes2o/s2orc
v3-fos-license
Effects of financial development and capital accumulation on labor productivity in sub-Saharan Africa: new insight from cross sectional autoregressive lag approach This study aims to shed light on the effects of financial development and accumulation of capital on the productivity of labor in the sub-Sahara African region within the period of 1990–2018. In this work, we used the (dynamic) common correlated effects estimator-mean group and additional techniques such as cross-section autoregressive distributed lag to calibrate the sample into the African subregion to ensure robustness. The findings reveal that financial progress in the region over time leads to an increase in productivity of labor and also the accumulation of capital. Furthermore, financial markets have a progressive impact on the productivity of labor within sub-Saharan African regions. We extend the very limited literature on the nexus between financial development and labor productivity by incorporating capital accumulation into our model which has not been previously studied. and savings. Nevertheless, no specific theoretical framework has been established to forecast the nature of employment in relation to financial institution (FI) and FD. Our research also contributes to the growing understanding of the diversity of the labor market and the importance of FD and FI toward a reduction in the rate of unemployment. In deliberation of the next segment, the development, employment, and unemployment levels are expected to function from diverse frequencies, and the net effect is likely to be uncertain. Although our study originally aims to conduct estimations beyond the 1990-2018 study period, it is impossible due to incomplete data for some of the variables in some countries. Some researchers have drawn alternative conclusions from some well-established findings, implying that market credit failures may play an important role in the collective concepts of dynamic operations involving labor and divergent investment flows. Our research is unique in that it uses conventional theory to investigate the relationships between FD, CAPTA, and labor productivity (Fontaine et al. 2020;Iheonu et al. 2020;Ssozi and Asongu 2016). Our study has two further interesting elements: first, a large sample of SSA countries is utilized, as estimations are performed for all 39 SSA countries and each subregion (South Africa, West Africa, East Africa, and Central Africa). Doing so ensures the robustness of our findings. Second, to obtain the empirical output, our study employs the (dynamic) common correlated effect estimator-mean group (CCEE-MG) through cross-section autoregressive distributed lag (CS-ARDL), second-generation unit root test, panel dynamic ordinary least square (PDOLS), and fully modified ordinary least square (FMOLS) estimation techniques. This method allows the heteroscedasticity and endogeneity problems to be solved, which are common issues associated with micro panel data. To the best of our knowledge, this method has not been utilized by previous studies in the context of SSA. Additionally, FI and other institutions are largely underdeveloped within the SSA region. The remainder of the paper is structured as follows: Sect. 2 gives an overview of the reviewed literature. Section 3 contains the data description and methodology. Section 4 presents and discusses the observed results. Section 5 provides the conclusion and policy inferences. 
Impacts of FD and FI on labor productivity Insignificant attention has been paid in recent literature to FD impression, labor productivity, and its impact on employment. Nonetheless, existing empirical studies have thoroughly examined the impending breaks mediating the nexus between FD and labor productivity. Ibrahim and Alagidedeb (2018), Atiase et al. (2019), and Chao et al. (2021) examined the influence of finance on job creation, which explains the preliminary level of percapita income, human capital in nations, and FD for 29 African nations within the SSA region over the 1980-2014 period by applying the verge estimation and sample splitting technique. Their results indicated that FD is positively and significantly connected with economic development. However, the findings and policy implications drawn from the conclusion revealed that an increase in the FD level is essential in the long run besides the general levels of human capital and income, which is extremely important. Other previous studies carried out on FI applied the variance decomposition VAR method and another causality test, which assumes that the existence of FI aids in promoting trade and commercial activities within the economy where linear correlation exists and activities are normally distributed. However, such an assumption is contrary to others because of the weakness in their financial system, whereby the reappearance of FI does not observe the normal distribution aspect. Therefore, a nonlinear connection tends to appear among FI (Hong et al. 2021;Li et al. 2018). Bernier and Plouffe (2019) studied the development spending in the financial sector and financial innovation research using a panel of 23 countries during the 1996-2014 period. The results validated a net positive correlation between gross capital formation (GCF) and financial innovation. According to Sarwar et al. (2020), the relationship between FD and human capital development is significant and positive in developing economies. Barucca et al. (2021) and Shahbaz et al. (2011) examined the relationship between FI, output, and the unindustrialized sector in Pakistan using the Cobb-Douglas production function, which incorporates FD as the central production factor, for the period from 1971 to 2011. To examine the long-run relationships among the variables, the ARDL bounds test technique for cointegration was used. Based on their study outcomes, the researchers suggested that the government must encourage output development, particularly in the agricultural sector, to increase the effectiveness in the financial sector. Influence of financial market (FM) and CAPTA on labor productivity Other studies have examined associations between FM and various productivity factors, such as capital formation, productivity, and investment where the outcomes have shown that an improvement in finance and capital availability leads to an increase in investment (Joyce et al. 2020;Arcand et al. 2015). Iheonu et al. (2020) explored the influence of financial sector development on domestic investment in several ECOWAS countries. The Granger noncausality test was performed to examine for causality in the presence of cross-sectional dependence (CD), and the augmented mean group technique was used to account for country-specific heterogeneity and CD. 
The findings revealed that (1) the impact of financial sector development on domestic investment varies depending on the measure of the financial sector development used; (2) domestic credit to the private sector has a positive but insignificant impact on domestic investment in ECOWAS countries, whereas banking intermediation efficiency (i.e., banks' ability to convert deposits into credits) and broad money supply have negative and significant impacts on domestic investment. Our study suggests that the measure of FD used as a policy tool to encourage domestic investment should be carefully considered. We also stress the need to use country-specific domestic investment strategies rather than broad-brush initiatives. When anticipating future domestic investment, domestic lending to the private sector should take precedence. Consequently, some recent studies on FM development with an emphasis on European countries propose that FM development can hinder inequality if further consideration is given to its development; hence, countries with developed FMs are considered to have better social equality than countries that have less developed financial systems (Baiardi et al. 2019). In their recent empirical studies, which propose that the effect of finance on growth is declining and ultimately negative, Kou et al. (2022) and Bukhari et al. (2020) suggested that emphasis must also be placed on the positive effects of financial complexity and growth within the FMs of developing economies. Based on this evidence, questions have been raised regarding whether financial freedom may obstruct rather than guarantee a viable increase in the gross domestic product through finance, the entrepreneurial process, FM, and innovation (Fonseca and Doornik 2022). Using a large sample of listed Chinese firms from 2007 to 2017, evidence shows that firms with high retail investor attention tend to have a low future stock price crash risk. Moreover, high-quality auditing can mitigate the impact of retail investor attention on the future crash risk of firms (Wen et al. 2019). A theoretical debate has been continuing regarding the relationship between FD and human capital, technology, and labor productivity. Thus, Bosworth and Collins (2003) explained total factor productivity (TFP) and educational attainment and applied an extended structure that incorporates human capital. That is, the role played by education in the Cobb-Douglas model is shown in the equation below:

Y_t = A_t K_t^α (h_t L_t)^(1−α),  (1)

where Y_t is the output, A_t is the TFP, K_t is physical capital, H_t = h_t L_t is human-capital-augmented labor, α is the share of K, h_t refers to educational attainment (human capital), and L_t denotes labor. Therefore, the following model, expressed in output per worker (labor productivity), is considered:

ln(Y_t/L_t) = ln A_t + α ln(K_t/L_t) + (1 − α) ln h_t.  (2)

Finally, for improved economic development and performance, CAPTA and labor productivity are considered prerequisites. Other basics include growth in FI development and FM development, which transforms into an improved labor force, and technical progress along with the monetary impact (Hong et al. 2021; Asongu and Acha-Anyi 2020). However, according to data sourced from the International Labour Organization (ILO) and the World Bank's World Development Indicators, the increase in labor productivity in Africa shows a similar pattern to the region's FD, CAPTA proxied by GCF, FI, and FM development index. This graph (Fig. 1) is presented to view the movements and trends among the variables under study.
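As a concrete illustration of how such a trend comparison can be assembled from the ILO and WDI series, the sketch below averages the variables across countries and indexes each series to its 1990 value. The file name, column labels, and normalization are placeholder assumptions for illustration, not the authors' actual data pipeline.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical long-format panel assembled beforehand from ILO (output per
# worker) and WDI (gross capital formation, financial development indices).
# Columns assumed: country, year, LNPROD, LNCAPTA, FD, FI, FM.
panel = pd.read_csv("ssa_panel_1990_2018.csv")

# Average each variable over the 39 SSA countries, year by year.
trend = panel.groupby("year")[["LNPROD", "LNCAPTA", "FD", "FI", "FM"]].mean()

# Index every series to its 1990 value (=100) so that co-movement is visible
# even though the variables are measured in very different units.
indexed = trend / trend.iloc[0] * 100

indexed.plot(title="SSA averages, 1990 = 100")
plt.ylabel("Index (1990 = 100)")
plt.tight_layout()
plt.show()
```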
As observed, FD and CAPTA (proxied by GCF) exhibit similar movements to FI and FM, indicating that the variables have cluster relationships, which are displayed in Fig. 1. Data description Data from 39 SSA nations, which comprised countries from different regions as indicated in Table 1, were extracted and analyzed for the period from 1990 to 2018. However, because of inadequate data for some countries concerning the required variables, the initial plans of obtaining observations and exploring all SSA countries were disregarded. Countries from various regions with the most available data for the required period under study were chosen. The panel method and model-based analysis were applied to resolve or adjust for heterogeneity changes and differences in various countries. Data were collected from different sources, which comprised data on labor productivity (LNPROD), proxied by output per worker sourced from the ILO, CAPTA (LNCAPTA), proxied by GCF obtained from the 2019 version of the World Development Indicators, and a comprehensive financial index. Descriptive statistics To analyze the characteristics of the variables and the existing relationships among FI development, CAPTA, and labor productivity, a preliminary analysis was performed (Fig. 2). The results are presented as follows. Table 2 presents a summary of the dependent and explanatory variables used in this study. It enables a cursory examination of the statistical properties of the variables. Table 2 also shows descriptive statistics, where the results revealed that for the sample of 39 countries across the period studied, the average overall FD of SSA countries is approximately 0.134 and the average value of PROD is 10,502. The overall FD varies across the sampled countries. The scatter graph presents a visualization of the relationships between labor productivity and the other variables under study, which denotes a positive connection between CAPTA, FD, and FM and labor productivity. The variables are correlated, in accordance with the findings of Nakamura et al. (2019); thus, an improvement in capital, finance, and technology encourages productivity. Table 3 presents the correlation outcomes. Correlation coefficients were applied to test for multicollinearity, and all variables were positively correlated. However, the results indicated a strong correlation between FD and its subcomponents, namely, FI and FM. Thus, FD, FI, and FM were included in the regression separately to control for the multicollinearity problem. Model specification Our model was constructed based on the adjustment and extension of the model adopted by Hirono (2021), which is written below:

X_it = α_i + β_i′ Y_it + ε_it,  (3)

where X_it is the output per worker proxied by LNPROD in SSA country i at time t, and Y_it refers to the independent variables (LNCAPTA, FD, FI, and FM of country i at time t). Two of the variables are expressed in logarithmic form. Methodology and results Before proceeding with our estimates, verifying the existence of long-run associations among all variables is necessary. In this regard, we first check for the existence of CD within the panel data, that is, to verify whether the cross-sectional units are independent of one another. One of the reasons for the presence of CD is the presence of unobserved common shocks across countries. Panel CD We employ the test proposed by Pesaran (2015) to detect the existence of CD among variables; a minimal numerical sketch of the statistic is given below.
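The sketch assumes a balanced panel of regression residuals and follows the textbook form of the Pesaran CD statistic; it is an illustration of the computation, not the routine used to produce Table 4.

```python
import numpy as np

def pesaran_cd(resid):
    """Pesaran CD statistic for a (T, N) array of residuals.

    Under the null of no cross-sectional dependence the statistic is
    asymptotically standard normal; a balanced panel is assumed here
    purely to keep the sketch short.
    """
    T, N = resid.shape
    rho = np.corrcoef(resid, rowvar=False)        # N x N pairwise correlations
    upper = rho[np.triu_indices(N, k=1)]          # all pairs with i < j
    return np.sqrt(2.0 * T / (N * (N - 1))) * upper.sum()

# Toy check: independent residuals should give a CD value close to zero.
rng = np.random.default_rng(0)
print(round(pesaran_cd(rng.standard_normal((29, 39))), 3))
```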
The correlation coefficient for each pairwise association between the series of country i and country j is estimated through this test. The higher the correlation coefficients, the stronger the CD among the residuals. If the null hypothesis is rejected, then the panel is cross-sectionally dependent or correlated. A simple panel model is considered in this study:
$$y_{it} = \beta_i + \alpha_i^{\prime} x_{it} + u_{it},$$
where $\alpha_i$ refers to the parameters to be estimated and $\beta_i$ represents the time-invariant individual nuisance parameters. In testing for the existence of CD, the CD statistic proposed by Pesaran (2015) is as follows:
$$\mathrm{CD} = \sqrt{\frac{2T}{N(N-1)}}\;\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\hat{\rho}_{ij},$$
where $\hat{\rho}_{ij}$ represents the sample estimate of the pairwise correlation of the residuals of countries $i$ and $j$. Table 4 presents the outcomes of the Pesaran (2015) test. The null hypothesis of no CD is rejected by all tests in Table 4. Thus, CD exists within the sample. Second-generation unit root tests Considering the existence of CD in our variables, we cannot proceed with the first-generation (conventional) unit root tests because they tend to spuriously reject the null hypothesis of nonstationarity in the presence of CD. To resolve this problem, we utilize second-generation unit root tests. The cross-sectionally augmented Dickey-Fuller (CADF) test proposed by Pesaran (2015), which examines a unit root in the presence of one common factor, is applied in this study. This test is advantageous, as factor estimation is no longer needed. The common factor can be proxied by the cross-section means of the lagged levels and the first differences of the variable. The test is based on the unit root hypothesis on the t-ratio of the ordinary least squares estimate of $\alpha_i$ in the following CADF regression:
$$\Delta x_{it} = a_i + \alpha_i x_{i,t-1} + c_i \bar{x}_{t-1} + d_i \Delta\bar{x}_t + e_{it},$$
where $\Delta\bar{x}_t$ represents the cross-section mean of the first differences of $x_{it}$ and the cross-section mean of the lagged values of $x_{it}$ is represented by $\bar{x}_{t-1}$ (Table 5). The outcomes of the second-generation panel unit root tests, as proposed by Pesaran (2015), are presented in Table 6. The existence of nonstationarity (a unit root) is tested for the five variables (LNPROD, LNCAPTA, FI, FD, and FM). The test is conducted at both the level and the first difference for each of the abovementioned variables. At level, that is, at I(0), the variables are nonstationary for the versions with and without trend. Nevertheless, the variables become stationary at first differences in the versions with and without trend. Thus, the variables are I(1) series. The results also include specific deterministic terms and are thus robust. Testing for cointegration We proceed by ascertaining the existence of a cointegrating association among the variables, after establishing that the variables are all I(1) series. We consider the error correction-based panel cointegration test developed by Westerlund (2007) and a second-generation panel cointegration test for unobserved factors. Each test allows for heterogeneity and CD. (In the tables, c, b, and a denote statistical significance at the 0.01, 0.05, and 0.10 levels, respectively; Z(t-bar) is the average of the individual t-ratios of the OLS estimates of $\alpha_i$.) Westerlund (2007) error correction model (ECM) panel cointegration test The Westerlund (2007) ECM ascertains whether cointegration is present using four panel cointegration test statistics (Ga, Gt, Pa, and Pt). The four test statistics are normally distributed. Two of the tests (Gt and Pt) are computed with the standard errors of the error-correction parameter estimated in the standard way, whereas the other two statistics (Ga and Pa) are based on Newey and West (1994) standard errors, adjusted for heteroskedasticity and autocorrelation.
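Before turning to the Westerlund test in the next paragraph, here is a minimal numpy/pandas sketch of the Pesaran CD statistic defined above, applied to a panel of residuals. The balanced-panel layout (rows = periods, columns = countries) and the simulated residuals are illustrative assumptions, not the authors' data or code.

```python
import numpy as np
import pandas as pd
from scipy import stats

def pesaran_cd(resid: pd.DataFrame):
    """Pesaran CD statistic for a balanced panel of residuals.

    resid: DataFrame with shape (T periods, N cross-section units).
    Under the null of no cross-sectional dependence, CD ~ N(0, 1).
    Returns the CD statistic and its two-sided p-value.
    """
    T, N = resid.shape
    corr = resid.corr().to_numpy()          # N x N pairwise correlations
    iu = np.triu_indices(N, k=1)            # keep only pairs with i < j
    rho_sum = corr[iu].sum()
    cd = np.sqrt(2.0 * T / (N * (N - 1))) * rho_sum
    pval = 2.0 * (1.0 - stats.norm.cdf(abs(cd)))
    return cd, pval

# Illustrative use with simulated residuals (replace with model residuals):
# 29 years x 39 countries, sharing a common shock so CD should be detected.
rng = np.random.default_rng(0)
common = rng.normal(size=(29, 1))
resid = pd.DataFrame(0.5 * common + rng.normal(size=(29, 39)))
cd, p = pesaran_cd(resid)
print(f"CD = {cd:.2f}, p-value = {p:.4f}")
```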
We utilize the Westerlund (2007) cointegration test for the following reasons: it has been developed to cope with cross-sectionally dependent data, and it allows for large heterogeneity in the short-run dynamics and in the long-run cointegration relationships. The error-correction regression used to test for cointegration in our study is represented below:
$$\Delta y_{it} = \delta_i^{\prime} d_t + \alpha_i\, y_{i,t-1} + \lambda_i^{\prime} x_{i,t-1} + \sum_{j=1}^{p_i} \alpha_{ij}\,\Delta y_{i,t-j} + \sum_{j=0}^{p_i} \gamma_{ij}\,\Delta x_{i,t-j} + \varepsilon_{i,t},$$
where $y_{it}$ is LNPROD, $x_{it}$ collects the regressors, $\lambda_i$ stands for the parameters of the error-correction term, $\alpha_i$ represents the estimate of the speed of error correction, $d_t$ contains the deterministic components, and $\varepsilon_{i,t}$ is the white-noise random disturbance term. Exploring the long-term equilibrium through the panel cointegration test is important (Table 7). The cointegration assessment from the Westerlund results provides strong evidence to reject the null hypothesis of no cointegration, because most group and panel statistics, with their respective robust p-values, are statistically significant. Based on these statistics and their p-values, Model 3 is fully significant, showing that deviations from equilibrium are corrected in the long run; hence, the presence of long-run equilibrium relationships among the variables is confirmed for the panels. Therefore, the variables being analyzed are characterized by long-term links, which must be modeled, concurring with the studies of Coffie et al. (2020) and Matsuoka et al. (2019). (Dynamic) CCEE-MG (CS-ARDL) The (dynamic) common correlated effects estimator-mean group, implemented through the cross-sectionally augmented ARDL (CS-ARDL) model, is also conducted to observe the association among LNPROD, FD, LNCAPTA, FI, and FM for SSA countries for comparative purposes. CS-ARDL is deemed an effective alternative to the generalized method of moments because it utilizes the cointegration form of the standard (ordinary) ARDL model (Pesaran, Shin, and Smith 1999). The main features of the (dynamic) CCEE-MG (CS-ARDL) are that it permits short-run coefficients (together with error variances, speeds of adjustment to long-run equilibrium values, and intercepts) to be heterogeneous across countries, whereas long-run slope coefficients are restricted to be homogeneous across countries. According to Blackburne and Frank (2007), CCEE-MG is useful when reasons exist to believe that equilibrium relationships among variables appear within areas. The ECM results from these properties, as illustrated in Eq. (7), in which divergence from equilibrium influences the short-run dynamics of the variables in the system:
$$\Delta X_{it} = \phi_i\left(X_{i,t-1} - \theta_i^{\prime} Y_{i,t-1}\right) + \sum_{j=1}^{p-1}\lambda_{ij}\,\Delta X_{i,t-j} + \sum_{j=0}^{q-1}\delta_{ij}^{\prime}\,\Delta Y_{i,t-j} + \varepsilon_{it}, \qquad (7)$$
where $X_{it}$ is labor productivity and $Y_{it}$ refers to the independent variables. The parameter $\phi_i$ is the error-correction speed-of-adjustment term. If $\phi_i = 0$, then no evidence of a long-run relationship exists; this parameter is therefore expected to be statistically significant. The long-run associations among the variables are captured by the vector $\theta_i$. Owing to the existence of CD, the CCEE-MG estimator is implemented through the CS-ARDL model, and the outcomes are presented in Table 8. The coefficients of the CS-ARDL reveal that FD, FI, and FM have positive significant impacts on the labor productivity of SSA countries in the long run. A positive significant impact of CAPTA is also observed, implying that a 1% increase in FD, FI, CAPTA, and FM leads to 2%, 2%, 5%, and 3% increases in labor productivity, respectively. Note that CAPTA has an enormous impact on the long-run improvement of labor productivity in SSA. Thus, CAPTA, FD, FI, and FM enhance the labor productivity of the SSA region in the long run.
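To make the mean-group error-correction logic of Eq. (7) concrete, below is a simplified Python sketch: an ECM is estimated country by country with OLS and the coefficients are averaged. This is only an illustration of the idea; it omits the cross-sectional averages that turn the regression into a CS-ARDL, and the panel file and column names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm

def mean_group_ecm(df: pd.DataFrame, y: str, x: str) -> pd.Series:
    """Country-by-country ECM, then mean-group averaging of coefficients.

    df: long panel with columns ['country', 'year', y, x].
    Per country we estimate
        dy_t = c + phi*y_{t-1} + theta*x_{t-1} + b*dx_t + e_t,
    where phi plays the role of the speed-of-adjustment term in Eq. (7).
    """
    coefs = []
    for _, g in df.sort_values("year").groupby("country"):
        d = pd.DataFrame({
            "dy": g[y].diff(),
            "y_lag": g[y].shift(1),
            "x_lag": g[x].shift(1),
            "dx": g[x].diff(),
        }).dropna()
        if len(d) < 10:           # skip countries with too few observations
            continue
        X = sm.add_constant(d[["y_lag", "x_lag", "dx"]])
        coefs.append(sm.OLS(d["dy"], X).fit().params)
    mg = pd.DataFrame(coefs).mean()                   # mean-group average
    mg["long_run_x"] = -mg["x_lag"] / mg["y_lag"]     # implied long-run effect
    return mg

# Example call on a hypothetical panel:
# panel = pd.read_csv("ssa_panel_1990_2018.csv")
# print(mean_group_ecm(panel, y="LNPROD", x="LNCAPTA"))
```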
PDOLS and FMOLS results Equation (8) is estimated using the PDOLS technique proposed by Pedroni (2001) after detecting the existence of long-run associations among the sampled variables. This method is also used because output per worker, proxied by labor productivity, can be endogenous. Furthermore, the exogeneity assumption is not required by PDOLS. Finally, it calculates the mean group estimator and considers heterogeneity across groups. The PDOLS estimator is derived by taking the average of the conventional time-series (DOLS) estimators. In our case, the regression is represented below:
$$X_{it} = \alpha_i + \beta_i Y_{it} + \sum_{j=-K_i}^{K_i} \delta_{ij}\,\Delta Y_{i,t+j} + \varepsilon_{it}, \qquad (8)$$
where $X_{it}$ refers to productivity, $Y_{it}$ represents the independent variables, and $\delta_{ij}$ represents the lead/lag coefficients. The group-mean estimator is then calculated by averaging the individual DOLS estimates:
$$\hat{\beta}_{\mathrm{PDOLS}} = \frac{1}{N}\sum_{i=1}^{N}\hat{\beta}_{i}.$$
FMOLS is also estimated using Eq. (9) below; this estimator is the panel (group-mean) form of the Phillips and Hansen fully modified OLS (FMOLS) estimator proposed by Pedroni (2001). This estimation method is selected because it is appropriate for estimation with endogenous regressors, and it is recommended when the series are integrated of the same order. FMOLS can also be powerful when a variable does not appear to be stationary. Hence, the panel FMOLS estimator is written as
$$\hat{\beta}_{\mathrm{FMOLS}} = \frac{1}{N}\sum_{i=1}^{N}\hat{\beta}_{\mathrm{FM},i}, \qquad (9)$$
where each country-specific estimate $\hat{\beta}_{\mathrm{FM},i}$ is computed from the transformed dependent variable $X_{it}^{+}$, which resolves the endogeneity problem, together with a correction term that accounts for serial correlation. Table 9 shows the estimations of Pedroni's (2001) FMOLS and DOLS, following the Stock and Watson arguments, after testing and confirming that the variables are connected in the long run. The FMOLS and DOLS model estimations yield identical results. However, according to the outcomes of FMOLS and PDOLS, all variables used in our study are positive and statistically significant except for FI, whose coefficients have negative values. Additionally, FI has a negative effect on labor productivity in the SSA region with a coefficient of 0.0002. Thus, a one-unit increase in FI development results in a decrease in labor productivity in SSA of 0.0002 units. This is a result of the developing nature of FI within the region and is in agreement with Dumitrache et al. (2021) and Bakas et al. (2020). The FD outcomes indicate that FD makes a positive contribution to labor productivity in the SSA region, with coefficients of 0.015 and 0.009. Therefore, FD boosts labor productivity by less than a unit in the region. Other research with an emphasis on FD has reported outcomes in line with the study conducted by Mohammed et al. (2019). Nevertheless, FM is also significant with a positive coefficient. Thus, a rise in FM by one unit results in an increase in labor productivity of 0.0012 and 0.0001 units. Therefore, given that FM development improves labor productivity in the long run, the hypothesized impact is presumed to hold consistently across long- and short-run horizons for SSA countries. CAPTA is positive and statistically significant, meaning that labor productivity is fostered by a rise in CAPTA in SSA countries, consistent with the findings of Bustos et al. (2020) and Ibrahim and Alagidede (2018). Considering that most variables have significant values, no collinearity problem exists among them. Moreover, given that the main estimator, FMOLS, compensates for serial correlation, no serial-correlation testing is performed for the models.
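Similarly, here is a hedged Python sketch of the group-mean DOLS regression in Eq. (8): each country's long-run coefficient is estimated by OLS with leads and lags of the differenced regressor, and the estimates are averaged across countries. The lead/lag window, file name, and column names are illustrative assumptions rather than the study's actual setup.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def group_mean_dols(df, y="LNPROD", x="LNCAPTA", k=1):
    """Per-country DOLS with k leads/lags of dx, then mean-group average."""
    betas = []
    for _, g in df.sort_values("year").groupby("country"):
        d = pd.DataFrame({"y": g[y].values, "x": g[x].values})
        d["dx"] = d["x"].diff()
        # Leads and lags of the differenced regressor absorb endogeneity
        # and serial-correlation feedback, as in the DOLS specification.
        for j in range(-k, k + 1):
            d[f"dx_{j}"] = d["dx"].shift(-j)
        d = d.dropna()
        if len(d) <= 2 * k + 3:   # not enough observations to estimate
            continue
        regressors = ["x"] + [f"dx_{j}" for j in range(-k, k + 1)]
        res = sm.OLS(d["y"], sm.add_constant(d[regressors])).fit()
        betas.append(res.params["x"])
    return float(np.mean(betas))  # group-mean long-run coefficient

# Example call on a hypothetical panel:
# panel = pd.read_csv("ssa_panel_1990_2018.csv")
# print(group_mean_dols(panel, y="LNPROD", x="LNCAPTA", k=1))
```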
Structural stability check To check the stability of model variables, Cumulative Sum of Recursive Residuals (CUSUM) and Cumulative Sum of Recursive Residuals Squares (CUSUMSQ) calculations are made. The CUSUM results indicate that the parameters remain constant throughout the research duration because the CUSUM numbers fall inside the threshold region of 5% (Fig. 3). Discussion In the early stage of the analysis, the Pesaran CD residual CD test rejects the null hypothesis of cross-sectional independence across all variables. This rejection implies the presence of cross-country connectedness between SSA nations among various study panels. Based on the economic, regional, and social experiences of the sampled SSA countries, the intersectoral dependency of these countries is unsurprisingly seen in their respective panels. This evidence is comparatively consistent with the studies of Coffie et al. (2020) for African states in terms of income levels and Mendez and Kataoka (2021) for South Asian countries together with Dumitrache et al. (2021) for industrialized countries but contrary to their findings. Therefore, the application of CADF stationarity tests reveals that all evaluated variables have homogeneous integration orders at first differences. Accordingly, the variables used are capable of producing prolific results as shown to be stationary. The findings of the stationarity tests confirm the results of Hong et al. (2021) regarding production, FD, and income in South Asian countries. However, the Westerlund ECM panel cointegration result reveals that the presence of long-run equilibrium relationships among variables is still confirmed for the panels from the economic perspective, which implies that the utilized CAPTA and financial indicators have elastic longterm effects on labor productivity. Therefore, this evidence supports the revelation of (2019) had a contrary view from their findings, which state the nonexistence of long-run relationships amid FD. Empirically considering the long-term simulation outcomes from the CCEE-MG-CS-ARDL approach and with the existence of CD and proliferation concerns, the CCEE-MG estimator through the CS-ARDL model is implemented, and the findings are indicated in Table 8. The effects on LNPROD and LNCAPTA concerning FD, FI, and FM are significant among most panels in the long run. The coefficients of CS-ARDL reveal that FD, FI, and FM have positive significant impacts on the labor productivity of SSA countries in the long run and still have positive significant impacts on CAPTA. Furthermore, a positive significant impact of LNCAPTA on LNPROD, according to the outcomes of FMOLS and PDOLS within the SSA region, concurs with the findings of Fonseca and Doornik (2022) for Brazil. When LNCAPTA increases, it enhances productivity, thereby boosting the capacity of any ventures be they governmental or nongovernmental. Additionally, FI has a negative effect on labor productivity in the SSA region. This finding entails that an increase in FI development results in a decrease in labor productivity in SSA. It is a result of the developing nature of FI within the region and is in agreement with Khraief et al. (2020) and Bakas et al. (2020). Last, FD outcomes indicate that FD makes a positive contribution to labor productivity in the SSA region. It further portrays how policymakers in SSA countries within this panel are saddled with the duty of promoting FD to improve labor productivity. This result is consistent with Baiardi et al. (2019) for African states. 
Meanwhile, the relationship between FD and labor productivity generally differs significantly from the report of Mohammed et al. (2019) for Turkey. Nevertheless, FM is also significant with a positive coefficient. Thus, a rise in FM by one unit results in an increase in labor productivity. Therefore, given that FM development improves labor productivity in the long run, the hypothetical impact is presumed to be consistently long-and short-run events for SSA countries. This result is in line with the findings of Ibrahim and Alagidede (2017) for SSA countries and Barucca et al. (2021) for the United Kingdom region. Conclusions and policy implications Our study uses effective and robust panel econometric approaches to model the impacts of CAPTA and FD on the validity of labor productivity within the SSA region in the presence of possible issues of heterogeneity and residual cross-sectional connectivity to avoid erroneous findings. Therefore, the key results drawn from the use of newly developed econometric techniques are in the following paragraphs. The estimates on LNPROD and LNCAPTA concerning FD, FI, and FM suggest significant effects among the panels. The CS-ARDL coefficient reveals that FD, FI, and FM positively impact the labor productivity of SSA countries in the long run. PDOLS and FMOLS among other techniques are also applied, the results of which indicate that FD, CAPTA, and FM have significant positive impacts on productivity in the SSA region; this result is in line with the studies of Mohammed et al. (2019) and Mendez and Kataoka (2021). Furthermore, our findings reveal that the development of the financial sector of the continent improves labor productivity. FI development also propels further labor productivity on the one hand, and the FM development index has a positive correlation to productivity in the continent on the other hand. This case can be a result of the underdevelopment of FM in Africa. We conclude that FD, LNCAPTA, and FM are positively related to labor productivity in Africa and its subregions. Therefore, policymakers should consider FD and LNCAPTA as indispensable for the enhancement of labor productivity in Africa and its subregions. In conclusion, financial administrators and Apex Bank supervisors in different SSA nations should promote different programs and policies, which can heighten the development of the financial sector toward achieving great productivity in the region.
Exemestane Attenuates Hepatic Fibrosis in Rats by Inhibiting Activation of Hepatic Stellate Cells and Promoting the Secretion of Interleukin 10 Exemestane (EXE) is an irreversible steroidal aromatase inhibitor mainly used as an adjuvant endocrine therapy for postmenopausal women suffering from breast cancer. Besides inhibiting aromatase activity, EXE has multiple biological functions, such as antiproliferation, anti-inflammatory, and antioxidant activities which are all involved in hepatic fibrosis. Therefore, we investigated the role of EXE during the progress of hepatic fibrosis. The effect of EXE on liver injury and fibrosis were assessed in two hepatic fibrosis rat models, which were induced by either carbon tetrachloride (CCl4) or bile duct ligation (BDL). The influence of EXE treatment on activation and proliferation of primary rat hepatic stellate cells (HSCs) was observed in vitro. The results showed that EXE attenuated the liver fibrosis by decreasing the collagen deposition and α-SMA expression in vivo and inhibited the activation and proliferation of primary rat HSCs in vitro. Additionally, EXE promoted the secretion of antifibrotic and anti-inflammatory cytokine IL-10 in vivo and in HSC-T6 culture media. In conclusion, our findings reveal a new function of EXE on hepatic fibrosis and prompted its latent application in liver fibrotic-related disease. Introduction Liver fibrosis is a wound-healing response to chronic injury. In this process, the activation and phenotype change of hepatic stellate cells (HSCs) are the key cellular events. HCSs change from vitamin A storing quiescent cells to proliferative and contractile myofibroblast-like cells producing extracellular matrix (ECM) [1]. Hepatic fibrosis tends to occur in men and postmenopausal women but rare in premenopausal women [2,3]. Numerous evidences reveal that estrogen is the underlying factor of this phenomenon [4][5][6]. Accumulating studies have demonstrated that estrogen has protective effects for hepatic fibrosis and cirrhosis by inhibiting the activation and proliferation of hepatic stellate cells [7][8][9]. But unfortunately, except for its beneficial aspect, estrogen can also promote the growth of some malignant tumors. For example, targeting estrogen is one of the current effective therapeutic strategies for breast cancer. Exemestane (EXE), an aromatase inhibitor, can decrease estrogen levels by blocking estrogen synthesis in the adipose tissue [10]. Presently, EXE is widely applied as an adjuvant endocrine therapy for estrogen-receptor-(ER-) positive postmenopausal women to prevent breast cancer and for advanced breast cancer after treatment failure with tamoxifen [11,12]. In addition, to inhibit aromatase activity, EXE also had strong antiproliferative effect and can increase the occurrence of autophagy [11,12]. The latest report revealed that EXE had anti-inflammatory and antioxidant activities, probably unrelated to aromatase inhibition [13]. Even more, Masri et al. suggested that EXE could induce estrogen receptor alpha activity [14]. Theoretically, EXE may have a promoting function for hepatic fibrosis through diminishing the estrogen levels. However, almost no patients who took EXE underwent hepatic fibrosis and cirrhosis. A recent study showed that aromatase inhibitor EXE decreased radiationinduced lung fibrosis, suggesting that it has a protective effect [15]. Until now, there is no study to investigate the exact role of EXE during the hepatic fibrosis process. 
In the field of experimental liver fibrosis research, the two most commonly used animal models are carbon tetrachloride (CCl 4 ) and bile duct ligation-(BDL-) induced hepatic fibrosis models. CCl 4 is the most widely used hepatotoxin in the study of liver fibrosis and cirrhosis in rodents. In many aspects, it mimics human chronic disease associated with toxic damage. Common bile duct ligation is well-known to cause cholestatic injury and periportal biliary fibrosis. To explore the role of EXE in liver fibrosis process, we applied EXE in CCl 4 and BDL-induced hepatic fibrosis rat models and investigated its effects on collagen deposition and fibrotic marker expression. We further revealed its underlying mechanisms by exploring its roles in HSC activation, proliferation, and in the secretion of inflammatory factors. EXE Suppresses CCl 4 -Induced Rat Hepatic Fibrosis and Attenuates Hepatic Injury. To directly investigate the role of EXE on hepatic fibrosis, CCl 4 -induced rat hepatic fibrosis model was established. Rats were treated with CCl 4 for 8 weeks with or without EXE treatment. After the rats were sacrificed, the liver fibrosis degree was assessed by Sirius Red staining. Quantification of Sirius Red-stained collagen areas in images of rat liver tissues clearly showed that EXEtreated rats had less collagen deposition than control rats (Figure 1(a)). To further evaluate the function of EXE for liver injury, we detected the serum ALT and AST levels, which are the serological markers for liver fibrosis and liver function. We found that rats treated with EXE had a notable reduction of serum ALT and AST levels compared to rats without EXE treatment (Figure 1(b)). These data indicate that EXE has a protective effect on hepatic fibrosis and liver injury. EXE Ameliorates BDL-Induced Hepatic Fibrogenesis and Hepatic Injury. To validate our primary results, EXE was administered in BDL-induced rat hepatic fibrosis model. Consistent with the results of CCl 4 -induced model, EXE remarkably ameliorated the progression of BDL-induced liver fibrosis, as illustrated in the Sirius Red-stained liver specimens (Figure 1(c)). The serum AST levels were significantly decreased in the EXE-treated group. However, the serum ALT levels of the EXE-treated group had no significant alteration compared to the control group (Figure 1(d)). Our finding is further convinced by BDL-induced rat hepatic fibrosis model. Thus, EXE can significantly inhibit hepatic fibrosis in vivo. EXE Inhibits the Expression of Profibrogenic Markers. Meanwhile, the mRNA expression levels of Acta2, MMP13, Coll1a1, and TIMP1 were examined, which are all classical profibrogenic markers during liver fibrosis process [16,17]. In CCl 4 -induced rat model, EXE significantly inhibited the expression levels of Acta2 and Coll1a1 and elevated the expression level of MMP13. Nevertheless, it had no obvious effect on TIMP1 expression (Figure 2(a)). Likewise, EXE significantly decreased the BDL-induced upregulation of Acta2, Coll1a1, and TIMP1, but had no obvious impact on MMP13 expression (Figure 2(b)). EXE Inhibits HSC Activation and Proliferation In Vivo. Hepatic fibrosis is accompanied with the activation of hepatic stellate cells (HSCs) [18]. The activated HSCs increase the fibrillar extracellular matrix production and decrease the degradation and remodeling. 
To further explore the mechanism of EXE suppressing hepatic fibrosis, we performed immunohistochemical staining for α-SMA, a marker of HSC activation, and PCNA, a marker of cell proliferation in the rat liver samples from both EXE-treated and control groups. The results showed that EXE significantly decreased the expression of α-SMA in the liver and the α-SMA-positive HSCs mainly located in the septa of fibrotic liver (Figures 3(a) and 3(b)). The images of PCNA immunohistochemical staining revealed that the number of the PCNA-positive HSCs and hepatocytes in the liver of the EXE-treated rats was significantly reduced than that in the control rats (Figures 3(c) and 3(d)). The reduction of PCNA-positive hepatocytes also suggested that the EXE could attenuate liver injury. HSC activation and proliferation are key and symbolic events during the process of hepatic fibrosis [19]. Thereby, the inhibitory of EXE for HSC activation and proliferation in vivo might be the probable mechanism of its mitigation impact on liver fibrosis. EXE Restrains rHSC Activation and Proliferation In Vitro. To further investigate whether the inhibitory effect of EXE on fibrogenesis was due to the inhibition of HSC activation and proliferation, we incubated freshly isolated HSCs (rHSCs) from rats with or without EXE. Then, we investigated the effect of EXE on rHSC activation and proliferation. There is a clear discrepancy in morphological appearance of rHSCs treated with 10 μM EXE for 4 days (Figure 4(a)); therefore, this concentration was performed for all treatments in vitro. rHSCs treated with 10 μM EXE expressed significantly less α-SMA fibers and showed an inactivation phenotype in morphology compared with controls ( Figure 4(b)). In accordance with the results obtained in vivo, EXE markedly decreased the proliferation of rHSCs by EdU staining (Figure 4(c)). Furthermore, the mRNA expression levels of Acta2, Coll1a1, TIMP1, and Myh11, which are classical markers for HSC activation, were significantly reduced in EXE-treated rHSCs (Figure 4(d)). 2.6. EXE Promotes the Secretion of Antifibrotic Cytokine IL-10. Inflammation has a close relationship with chronic liver injury and hepatic fibrosis [20]. Numerous immunoregulatory cytokines are produced during the process of hepatic fibrosis and HSC activation. Among these cytokines, IL-10 has been proven to have a striking effect against liver injury and fibrosis [21,22], and IL-6 is a key profibrotic factor [23]. To elucidate whether IL-10 and IL-6 are involved in the suppressing effect of EXE on hepatic fibrosis, we detected their serum levels in the control group and EXE-treated group. The results showed that the level of IL-10 was higher in the EXE group than in the control group ( Figure 5(a)). In contrast, the EXE group showed less IL-6 than that of the control group ( Figure 5(b)). Since IL-10 and IL-6 were partly secreted from activated HSCs in the impaired liver, we examined the concentration of IL-10 and IL-6 in HSC-T6 cells culture media before and after treatment with 10 μM EXE. Likewise, EXE was also contributed to the secretion of IL-10, but not to the secretion of IL-6 in vitro (Figures 5(c) and 5(d)). Taken together, these data indicate that the antifibrotic effects of EXE are at least partially due to its anti-inflammatory activity. (b) Serum ALT and AST levels in the EXE-treated group and control group (10 rats/group). Data were expressed as means ± SD. 
(c) Collagen deposition was analyzed using Sirius Red staining and quantified from six images using ImageJ software. Scale bar: 50 μm. (d) Serum ALT and AST levels in the EXE-treated group and control group (10 rats/group). Data were expressed as means ± SD. *P < 0.05, **P < 0.01. Hepatic levels of Acta2, Coll1a1, MMP13, and TIMP1 mRNA for BDL-induced fibrosis rats were determined using qPCR (10 rats/group). Data were expressed as means ± SD. *P < 0.05, **P < 0.01. Discussion This study showed that treatment with EXE significantly attenuates CCl 4 and BDL-induced hepatic fibrosis in male rats. EXE inhibited the activation and proliferation of HSCs in vitro and in vivo and promoted the secretion of the antifibrotic cytokine IL-10. Our results explain why the occurrence of hepatic fibrosis or cirrhosis is not elevated in breast cancer patients administered EXE, even though it suppresses estrogen, and reveal that EXE might be a potential drug for the treatment of hepatic fibrosis or cirrhosis. Hepatic fibrosis or cirrhosis is a common clinical situation that has various aetiologies and causes serious impairment for patients, but satisfactory therapies are still lacking. Sustained and exaggerated inflammation and hepatic stellate cell activation are the core processes during hepatic fibrosis initiation and progression [24]. EXE is a widely used drug for estrogen receptor-positive postmenopausal breast cancer patients. Our findings displayed its inhibitory effect in CCl 4 and BDL-induced hepatic fibrosis rat models by suppressing HSC activation and proliferation. Thus, further clinical analysis of EXE for hepatic fibrosis and cirrhosis prevention and therapy would be of great interest. The roles of estrogen in hepatic fibrosis and cirrhosis are still controversial, but the dominant evidence tends to support the idea that estrogen has protective effects against hepatic fibrosis [5]. As expected, EXE should promote hepatic fibrosis, as it is an aromatase inhibitor and can decrease estrogen levels. Surprisingly, our results showed that EXE significantly suppressed hepatic fibrosis in vivo, and in vitro our assays showed that EXE also inhibited HSC activation. This suggests that the antifibrotic effect of EXE is not related to aromatase inhibition, but is due to its inhibitory effects on HSC activation and proliferation. Liver inflammation is the hallmark of early-stage liver fibrosis, ultimately resulting in HSC activation and ECM deposition. Among the numerous immunoregulatory cytokines, IL-10 is a potent anti-inflammatory cytokine. It can repress proinflammatory responses and limit unnecessary tissue disruption caused by inflammation. By ELISA analysis, we found that the IL-10 level was significantly elevated in the serum of EXE-treated rats. Therefore, modulation of anti-inflammatory cytokines might be part of the mechanism underlying the antifibrotic effects of EXE. However, inflammatory factors are mainly secreted by local inflammatory cells in the liver, such as Kupffer cells and lymphocytes. Hence, further research on the effects of EXE on inflammatory cell infiltration and anti-inflammatory cytokine secretion is under investigation. In summary, our in vitro and in vivo findings demonstrated that treatment with EXE attenuates the process of hepatic fibrosis by inhibiting the activation of HSCs and upregulating the secretion of IL-10. To our knowledge, this study is the first to show the antifibrotic effects of EXE.
As a widely applied medicine for breast cancer, EXE has shown no significant side-effects on renal or liver functions [25]. Therefore, EXE might be useful as an antifibrotic agent in chronic liver diseases. There are many common processes (c) At day 2, EXE-treated and control HSCs were exposed to EdU and fixed 2 days later and stained with Azide488 to visualize the DNA incorporated EdU. The percentage of EdU-positive cells was determined from three independent experiments. Scale bar 50 μm. (d) mRNA levels of Acta2, Myh11, Coll1a1, and Timp1 in HSCs with or without EXE treatment were determined by qPCR (10 rats/group). Data were expressed as means ± SD. * P < 0 05, * * P < 0 01. existing in different organ fibrotic diseases, including interstitial cell activation and inflammatory factor dysregulation. Thereby, EXE might also have extensive potential application in other different organ fibrotic diseases, such as renal fibrosis, pulmonary fibrosis, scleroderma, and so on. Cell Culture and Reagents. HSC-T6 cells, an immortalized rat HSC cell line, were obtained from the cell bank of the Chinese Academy of Sciences (Shanghai, China) and cultured in Dulbecco's modified Eagle's medium (DMEM, Gibco, USA) supplemented with 10% fetal calf serum (HyClone, Australia), 100 U/ml penicillin, and 100 μg/ml streptomycin. EXE was obtained from Sigma (St. Louis, USA). Animal Experiments. The hepatic fibrosis models of male Sprague-Dawley (SD) rats were induced by CCl 4 and BDL, respectively. For CCl 4 -induced hepatic fibrosis model, 20 SD rats about 200 g were given intraperitoneal injections of 1 ml CCl 4 /kg body weight diluted 1 : 1 in olive oil twice weekly for 8 weeks. One week after the first injection of CCl 4 , SD rats were randomly divided into two groups: CCl 4 group and EXE group (EXE + CCl 4 ) (n = 10 in each group), and started to treat with EXE. For BDL liver fibrosis model, 20 male SD rats were also divided into two groups one week after the rats underwent BDL (n = 10 in each group) and started to receive EXE or vehicle treatment. EXE was suspended in normal saline solution and administered twice a week by intraperitoneal injection at 4 mg/kg body weight. Rats were sacrificed 48 hours after the last EXE injection. All rat livers were fixed in formaldehyde or immediately frozen in liquid nitrogen. Rats were manipulated and housed according to protocols approved by the East China Normal University Animal Care Commission. All animals received humane care according to the criteria outlined in the "Guide for the Care and Use of Laboratory Animals" prepared by the National Academy of Sciences and published by the National Institutes of Health. Isolation and Culturing of Rat HSCs. Male SD rats (250 to 300 g) were used to isolate and purify HSCs. Rat HSC isolation method was a modification of the previously described method [26]. Rat HSCs were isolated using a twostep collagenase/pronase E perfusion of rat livers, followed by 18% Nycodenz two-layer discontinuous density-gradient centrifugation. After isolation, cells were suspended in DMEM with 10% fetal calf serum, 100 U/ml penicillin, and 100 μg/ml streptomycin, plated on plastic dishes (Corning, USA), and cultured at 37°C in a humidified atmosphere with 5% CO 2 and 95% air. Culture medium was renewed 24 hours after plating. Measurement of Liver Enzymes and Cytokines. Rat blood was collected in Eppendorf tubes and standing in 4°C overnight. The serums were separated by centrifugation at 3000 r/min for 20 minutes and stored at −80°C. 
Serum ALT and AST activities were detected with ALT and AST kits (Shen Suo You Fu Co. Ltd., Shanghai, China) by an automated analyzer. To study the effects of EXE on antiinflammatory cytokines, such as IL-10 and IL-6, rat blood was collected from the rat tails at 0.5 h, 4 h, 12 h, 24 h, and 48 h after the last injection of CCl 4 . At each time point, there were IL-10 and IL-6 levels that were quantitated with ELISA kits (eBioscience, USA) according to manufacturer's instructions. For in vitro experiments, after 48 hours of treatment with EXE, the IL-10 and IL-6 levels in culture media of HSC-T6 cells were detected. 4.5. Quantitative RT-PCR. Total RNA from liver tissues and cells was extracted using TRIzol (Takara, China) and reverse-transcribed using the PrimeScript™ RT reagent kit (Takara, China) according to the manufacturers' instructions. Quantitative PCR was carried out on the ABI 7300 sequence detector (Applied Biosystems, Rotkreuz, Switzerland). The primer sequences used are summarized in Table 1. Gene expression values were calculated based on −△Ct method and normalized to expression of GAPDH. Results were calculated as 2 −△△Ct and express the x-fold increase of gene expression compared to control rats or cells. 4.6. Cell Proliferation Assay. Cell proliferation was investigated by measuring active DNA synthesis with the Cell-Light™ EdU Apollo®567 cell tracking kit (RiboBio, Guangzhou, China). Isolated primary HSCs were seeded in 48-well plates containing round coverslips at density of 5000 cells/well with the presence or absence of 10 μM EXE. After 48 hours, EdU labeling was initiated. Another 48 hours later (day 4), cells were formalin fixed and visualization of the EdU incorporation was obtained according to the manufacturer's instructions. 4.7. Sirius Red Staining and Immunohistochemistry. Liver specimens were fixed in 10% formalin. To detect collagen fibers, paraffin-embedded liver sections were stained in 0.1% Sirius Red F3BA in a saturated picric acid solution. Randomly selected five fields from each section were photographed and analyzed. Red staining areas were quantified using NIH ImageJ software (http://rsb.info.nih.gov/ij/) and expressed as a percentage of total analyzed areas. For immunohistochemical staining, all tissue samples were fixed in phosphate-buffered neutral formalin, embedded in paraffin, and then cut into 5 μm thick sections. Tissue sections were deparaffinized and rehydrated and then incubated with 0.3% hydrogen peroxide/phosphate-buffered saline for 30 minutes and blocked with 10% BSA. Slides were first incubated using the mouse-anti-α-smooth muscle actin (α-SMA) antibody (clone 1A4; Sigma-Aldrich, USA) and a rabbit-anti-proliferating cell nuclear antigen (PCNA) antibody (ab29, Abcam, USA), respectively, at 4°C overnight with optimal dilution, labeled by HRP second antibody (A-11059 and A-21245, Thermo Scientific, USA) at room temperature for 1 hour, incubated with DAB substrate liquid (Thermo Scientific), and counterstained with hematoxylin. All sections were observed and photographed with a microscope. α-SMA-positive staining areas were quantified using ImageJ software and presented as a percentage of total analyzed areas. PCNA-positive and negative cells were counted at ×400 magnification. All reactive cells were counted as positive regardless of the intensity of staining. For HSCs, PCNA-positive cells were counted and expressed. For hepatocytes, in each picture, all cells were counted and the percentage of positive hepatocytes was determined. 
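As a worked illustration of the 2^(−ΔΔCt) calculation described in the quantitative RT-PCR subsection above, the short Python sketch below normalizes a target gene to GAPDH and to the control group. The Ct values and gene labels are made-up placeholders, not data from this study.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method, normalized to a
    reference gene (e.g., GAPDH) and to the control group."""
    dct_treated = ct_target_treated - ct_ref_treated   # dCt, treated group
    dct_control = ct_target_control - ct_ref_control   # dCt, control group
    ddct = dct_treated - dct_control                    # ddCt
    return 2.0 ** (-ddct)                               # x-fold change

# Hypothetical Ct values for one target gene (e.g., Acta2) vs. GAPDH:
fc = fold_change_ddct(ct_target_treated=26.1, ct_ref_treated=18.0,
                      ct_target_control=24.3, ct_ref_control=18.1)
print(f"Fold change vs. control: {fc:.2f}")   # < 1 indicates downregulation
```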
Statistical Analysis. Data are expressed as means ± standard deviation of at least three independent experiments unless indicated otherwise. Groups are compared using a two-tailed Student t-test. P < 0.05 is considered statistically significant.
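A minimal Python example of the two-tailed Student t-test and the means ± SD summary described above, using SciPy; the group values are invented placeholders rather than measurements from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical serum ALT values (U/L) for control and EXE-treated rats.
control = np.array([182, 175, 190, 168, 201, 177, 185, 193, 171, 188])
exe = np.array([141, 150, 133, 158, 147, 139, 152, 144, 136, 149])

for name, grp in [("Control", control), ("EXE", exe)]:
    print(f"{name}: {grp.mean():.1f} +/- {grp.std(ddof=1):.1f} (mean +/- SD)")

# Two-tailed independent-samples Student t-test (equal variances assumed).
t_stat, p_value = stats.ttest_ind(control, exe, equal_var=True)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
print("Significant at P < 0.05" if p_value < 0.05 else "Not significant")
```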
Examining the Impact of Key Factors on COVID-19 Vaccination Coverage in India: A PLS-SEM Approach During the coronavirus disease 2019 (COVID-19) pandemic, numerous factors determined the performance of COVID-19 vaccination coverage. The purpose of this study is to examine the influence of factors such as government stewardship, planning and implementation, and community participation on COVID-19 vaccination coverage. This study applied partial least square structured equation modeling (PLS-SEM) by analyzing 187 responses from the stakeholders involved in vaccination programs in four select states of India. This study empirically validates a framework for improving vaccination coverage by confirming the significant impact of planning and implementation on vaccination coverage followed by government stewardship and community participation. Additionally, this study highlights the individual impact of each factor on vaccination coverage. Based on the findings, strategic recommendations were proposed that can be utilized for formulating policy-level actions to facilitate the vaccination program. Introduction Vaccination is a crucial public health intervention to protect the population from life-threatening diseases, including COVID-19 [1,2]. Despite the rate at which safe and effective vaccines were developed, vaccination coverage is still a matter of concern across the world [3]. In a developing country such as India, which amounts to one-seventh of the total global population of around 1.3 billion people, a robust vaccination coverage plan has helped to reduce the surge of disease and get the economy back on track [4]. As COVID-19 is a highly transmissible virus, one of the critical strategies to combat its ill effects is to develop population immunity that can be achieved through high vaccination coverage [5][6][7]. Various vaccines have been given Emergency Use Approval (EUA); however, the prime concern was concerning their implementation at the national, regional, and local levels [8]. India's nationwide COVID-19 vaccination program was launched on 16 January 2021. The Government of India concentrated all its efforts on ensuring logistic and financial resources available for the production, acquisition, and nationwide distribution of COVID-19 vaccines to control the COVID-19 pandemic [9]. India supported the research, development, and manufacturing of COVID-19 vaccines under the "Make-in-India" and "Make-for-World" Strategy, embarked on the use of cutting-edge technologies such as the COVID-19 vaccine intelligence network (CoWIN) for evaluating geographical coverage, tracking adverse events following immunization (AEFI) for vaccines, promoting inclusivity, and for providing a single reference point for citizens to follow their vaccination Several systematic interventions were also carried out in ensuring capacity building for carrying out this nationwide exercise. The existing supply chain for storage and transport of COVID-19 vaccines was leveraged and strengthened and effective monitoring of vaccine distribution and assured availability and efficient utilization of vaccines and syringes was ensured at all times. Additionally, the government ensured timely vaccination coverage in a planned manner, starting with front-line workers and a population >60 years in the first phase followed by the coverage of 45-60 years and >18 years' age group in the third phase. The significance of the critical factors that lead to high vaccination coverage started unfolding. 
India's free and voluntary nationwide COVID-19 vaccination exercise is also being carried out in a citizen-friendly approach through initiatives such as Har Ghar Dastak, Workplace COVID-19 vaccination center (CVC), School-based vaccination, vaccination of persons with no identity documents, Near-to-Home CVC, and Mobile Vaccination Teams. With 71% of CVCs located in rural areas and over 51% of vaccine doses administered to women, India's National COVID-19 Vaccination Program also ensured geographical and gender equity. India also laid out a well-organized communication strategy of providing correct information and customized guidelines on COVID-19 vaccination. It helped address vaccine hesitancy and promoted vaccine eagerness and COVID-19-appropriate behavior among the masses (https://pib.gov.in/PressReleasePage.aspx?PRID=1842157#:~:text=In%20a%20historic% 20achievement%2C%20India\T1\textquoterights,2%2C63%2C26%2C111%20sessions, accessed on 15 February 2023). Despite the decline in COVID-19 cases across the country, consistent efforts were ongoing to vaccinate all eligible citizens. This is exemplified by the fact that it took almost 9 months to reach the 1000 million mark and another 9 months to reach the 2000 million vaccination mark since the start of the vaccination drive on 16 January 2021, with the highest single-day vaccination record of 25 million doses achieved on 17 September 2021. On 15 July 2022, the Union Government launched a 75-day long 'COVID-19 Vaccination Amrit Mahotsav' to provide free precaution doses to all eligible adult populations at Government COVID-19 Vaccination Centers (CVCs). The Union Government, in close collaboration with State Governments/U.T administration has been working over a period of time to ensure that these efforts collectively culminated in the success of COVID-19 vaccination in India. The extant literature has examined the impact of numerous factors on vaccination coverage, including the significant impact of vaccine acceptance. However, most studies have focused on identifying the factors influencing vaccination coverage within a limited segment and context of the population. Therefore, a research gap has been identified in exploring and examining the factors impacting vaccination coverage in a comprehensive and broader context and population. In this study, the rationale behind this broad coverage is analyzed through the lens of the role of government stewardship (both central and state level), planning, and implementation for achieving coverage and active community participation in the COVID-19 vaccination program [10]. These factors have been critically analyzed for four states (Andhra Pradesh, Himachal Pradesh, Maharashtra, and Orissa) in India where there was significantly high vaccination coverage. The following subsections showcase the theoretical background of these identified factors and the proposed hypotheses for this study. Government Stewardship The extant literature evidences the significant impact of government stewardship as a form of government decision-making, support, and commitment to vaccination coverage [11,12]. The blueprint of the national-level strategy to combat the COVID-19 pandemic was shared with the states, and accordingly, the states formulated their state-specific plan to carry out vaccination coverage. 
Standard operating principles (SOPs) and guidelines drafted by the national and state governments such as transparent operations, real-time data collection, seamless communication with the health workers, vaccine storage, and han- dling helped in the optimum utilization of the resources by reducing wastage of the vaccine and, thereby, resulting in the vaccination coverage [13]. Vaccine procurement, allocation, and equitable distribution were also identified as contributing factors for vaccination coverage which involved government decision-making. Other contributing factors indicating government stewardship include collaboration with development partners, public-private health integration, financial support, and technological support. Planning and Implementation Planning and implementation have been identified as the critical contributing factor to vaccination coverage. The government in India prepared a responsive plan to ensure healthcare needs were provided by improving capacities [14]. Both the private and public healthcare sectors came together for the vaccination program to be successful. Budget planning, human resource management, training, immunization strategies, review, and monitoring were a few indicators of the process adopted by the government for planning and implementation [15]. The focus on these contributing factors by the government enabled policymakers and other private bodies to make suitable strategies revolving around the higher coverage of the population to be vaccinated [16]. Transportation, cold storage, and waste management also played a significant role in the seamless delivery of the COVID-19 vaccine [17]. Another important area in which the states worked proactively concerns the dispersal of information to create awareness around the COVID-19 vaccine [18]. At the state and district administrative level, communication strategies have played a pivotal role in the vaccination program, as it has led to phenomenal coverage and helped contain the pandemic, especially in hard-to-reach rural areas [19]. Community Participation Community participation has been regarded as a prominent factor influencing vaccination coverage in past studies. Strategies based on community participation have used a well-coordinated approach in tandem with the community mobilizers [16]. As the success of the vaccination program depends upon the number of vaccinated beneficiaries, a lot of attention is given to mobilization by ensuring clear communication about the aims and objectives of the vaccination program [20]. Community participation has also been studied in the context of community engagement, especially in tribal areas where a message from the community leader or an indigenous person creates more awareness among the people [21]. To improve community participation at the primary healthcare (PHC) level, accredited social health activist (ASHA) workers contributed tremendously to educating the population and resolving their doubts related to vaccine hesitancy [22]. Additionally, the nationwide network of health centers at the national, district, and local levels, along with institutional stakeholders, aided in the dispersion of strategies about vaccine administration, transportation, distribution, dispelling of misinformation, reaching marginalized populations-transgender, street vendors, elderlies, etc. The role of community mobilizers has also been identified as significant in improving community participation, thereby, resulting in enhanced vaccination coverage [23]. 
Theoretical Framework and Hypotheses Development To develop a theoretical foundation for this study, the following hypotheses were developed based on the theoretical background described in the aforementioned subsections. This research hypotheses' development demonstrates the relationships between the various constructs used in this study. Figure 1 illustrates the structural model designed to validate three research hypotheses evaluating the direct relationship between government stewardship, planning and implementation, and community participation with vaccination coverage. stewardship, planning and implementation, and community participation with vaccination coverage. Through this research, the intent is to highlight the learnings concerning the effectiveness of the vaccination program that was undertaken by the selected four states of the country. The relay of work can be benchmarked and customized by the practitioners and policymakers to formulate strategies and learned lessons can be tailored to other geographies in designing upcoming vaccination campaigns [24]. Therefore, the objectives of this study were two-fold as it aimed to (i) empirically examine the potential impact of government stewardship, planning and implementation, and community participation on vaccination coverage and (ii) to delineate strategic recommendations to be utilized by the vaccination program managers in the future in emerging and developing countries. The following subsections present the theoretical background of this study and the hypotheses' development for empirical validation. Development of Research Instrument The items in the questionnaire indicate four broad constructs described in the literature: government stewardship, planning and implementation, community participation, and vaccination coverage. Indicators of these parameters are displayed in Table 1. In the context of COVID-19 vaccination, the questionnaire items were developed and further simplified. Financial support H2. Planning and implementation positively influence vaccination coverage. H3. Community participation directly influences vaccination coverage. Through this research, the intent is to highlight the learnings concerning the effectiveness of the vaccination program that was undertaken by the selected four states of the country. The relay of work can be benchmarked and customized by the practitioners and policymakers to formulate strategies and learned lessons can be tailored to other geographies in designing upcoming vaccination campaigns [24]. Therefore, the objectives of this study were two-fold as it aimed to (i) empirically examine the potential impact of government stewardship, planning and implementation, and community participation on vaccination coverage and (ii) to delineate strategic recommendations to be utilized by the vaccination program managers in the future in emerging and developing countries. The following subsections present the theoretical background of this study and the hypotheses' development for empirical validation. Development of Research Instrument The items in the questionnaire indicate four broad constructs described in the literature: government stewardship, planning and implementation, community participation, and vaccination coverage. Indicators of these parameters are displayed in Table 1. In the context of COVID-19 vaccination, the questionnaire items were developed and further simplified. 
The survey questionnaire included twelve, twenty, seven, and three questionnaire items about government stewardship, planning and implementation, community participation, and vaccination coverage, respectively. An instance of a statement from the questionnaire characterizing administration under government stewardship is "State/National administration encouraged and facilitated streamlined work processes (such as SOPs or guidelines or strategy decisions) to leverage resource". The questionnaire items are based on the dimensions given in Table 1. Financial support 5 Vaccines procurement and allocation management 6 Technology support 7 Public-private health sector integration 8 Planning and Implementation Health infrastructure 9 Budget planning 10 Human resource management 11 Vaccine and ancillary supplies demand forecasting 12 Vaccine supply chain, logistics, and storage 13 Social and Behavior Change Communication (SBCC) strategies 14 Immunization campaign strategies 15 Review and monitoring 16 Interdepartment collaboration 17 Community Participation Mobilization initiatives 18 Awareness generation-demand generation and vaccine hesitancy 19 Community mobilizers Respondents made acceptable selections from 1 (strongly disagree) to 5 (strongly agree) for each item after reviewing the local procedures implemented during the COVID-19 vaccination program. The Likert scale, which ranges from strongly disagree (1) to strongly agree (5), was used to provide a simplified response since there are forty-two questionnaire items. To evaluate the questionnaire's face validity, a pretest was also conducted on it. Clarity, readability, understandability, and response format accuracy were all evaluated in the pretest. Data collection was performed online. Statistical Analysis Many studies employ empirical methodology in health and policy-related research [25,26]. PLS-SEM has gained prominence in various fields from contributing in its capacity to estimate route coefficients, model latent variables under non-normality conditions, and analyze data with small to medium sample sizes [27,28]. The partial least square structured equation modeling (PLS-SEM) approach was used to analyze the proposed research model. Smart PLS 4.0, a well-known tool for PLS-SEM analysis, was utilized in this study. Alternatives to SEM include Partial Least Squares Structural Equation Modeling and Covariance-based SEM (CB-SEM). The objectives and utilization purposes of each technique are diverse; however, the two procedures are complementary [29,30]. In a public health context, the PLS-SEM technique is more suitable than the CB-SEM technique for determining correlations between important driving factors. The PLS technique investigated the causal links between constructs using the software package Smart-PLS 4.0. Attributing to the exploratory nature of the investigation, the PLS technique was employed [31]. As validated and suggested by Henseler et al. (2009), a two-step technique for data analysis was employed [32]. First, the measurement model was analyzed, and then the structural relationships between latent constructs were investigated. Before assessing the model's structural relationship, the two-step procedure is designed to verify the measurements' reliability and validity. 
The data collected was analyzed using exploratory factor analysis (EFA) to find significant items related to the respective constructs (government stewardship, planning and implementation, and community participation), followed by confirmatory factor analysis (CFA) and structural model validation [33]. The EFA procedure was executed using the IBM SPSS 26 software package. The varimax rotation was used to optimize factor loading to improve factorability [34]. As a first step toward the factorization process, EFA, also known as the factor reduction technique [35] was carried out to extract a factor structure that conveys conceptual meaning to the overall concept of the study. A smaller subset of the overall sample was considered. With the guiding principle, the initial sample of 110 responses was considered for conducting EFA [36]. The process of reducing dimensions in EFA is iterative. CFA was performed to validate the observed factors during the EFA quantitatively. The conceptualization of the CFA measuring model is based on EFA output. In CFA, the same factor structure was employed, but a broader sample size of 187 responses was used to validate the factors. The CFA procedure was conducted using version 26 of the SPSS-AMOS program. In addition to factorization, this study also included model testing that demonstrated the impact of government stewardship, planning and implementation, and community participation on vaccination coverage. To test the hypothesized relationships incorporated in the proposed model, Smart PLS 4.0 software was used. Sampling Technique According to standard approach for calculating sample size for studies based on the PLS-SEM technique, the size of a particular structure in the model must have a minimum of 10 times the number of structural routes [37]. Furthermore, a strong association between sample size and statistical power was documented [38]. The study suggested that 169 respondents be the minimal number needed to analyze a model made up of five exogenous variables with 80% statistical power and a 5% level of significance [37]. In the current investigation, it we made sure that these requirements were met. State Selection The states were selected based on quantitative and qualitative aspects in order to obtain a mixed representation across different geographic regions of India. The states in India have been grouped under four regions: northern, southern, western, and eastern. The quantitative parameters considered for selecting states were the percentage of partially vaccinated or fully vaccinated population and the ratio of fully vaccinated to partially vaccinated population. The analysis of vaccination status was carried out with the secondary data retrieved from the CoWIN portal (an Indian government web portal for COVID-19 vaccination registration, owned and operated by India's Ministry of Health and Family Welfare). A few of the qualitative aspects, for instance, how the state has implemented changes in its policies to improve the uptake of the vaccines or the administrative support provided to resolve the challenges of vaccination campaigns or the variation in the speed of vaccinations were also included for the analysis. The longitudinal vaccination coverage data were obtained from secondary sources including government records since the campaign's inception, thus, facilitating the finalization of four states. 
The selected states were included in the study for the following reasons: Andhra Pradesh (southern region) for its highest immunization percentages with 61.9% for the fully vaccinated population, Himachal Pradesh (northern region) for the highest ratio of full and partial immunization (0.93) while ranking second-highest for fully vaccinated with 85% and 91% for the partially vaccinated population, Maharashtra (western region) for its vaccination reach that covered approximately 77% of its population as partially vaccinated and 57% as fully vaccinated, with a ratio of 0.74, and Odisha (eastern region) for its vaccination reach that covered approximately 77% of its population as partially vaccinated and 62% as fully vaccinated, with a ratio of 0.81. Data Collection The data were collected from 187 respondents from four states of India, namely, Andhra Pradesh, Himachal Pradesh, Maharashtra, and Odisha, directly involved in the COVID-19 vaccination campaign holding different levels of positions within the healthcare system. To collect data from respondents, a survey was conducted which lasted for four months (May 2022 to August 2022). Other qualitative characteristics, such as how the state amended the policies to improve the uptake of vaccinations, the administrative support offered to address the obstacles of the immunization campaign, and the variation in the immunization rate, are also included in the analysis. Results The respondents for the study were from different levels of human resources for inclusivity: namely, healthcare workers, support staff, and consulting partners. Table 2 shows the demographic profile of the respondents. The data was collected from four states of India, in which the responses from Andhra Pradesh account for 16.57%, Himachal Pradesh-33.15%, Maharashtra-23.52%, and Odisha-26.73%. Multiple designations that were similar in terms of roles and responsibilities were grouped under a broader designation term. The broad designation categories used for classifying the respondent's designation are mentioned in Table 2. For example, the designation 'immunization officer' included both state and district immunization officers. Similarly, the designations 'medical officer', 'chief health officer', 'community health officer', 'district health officer', 'block medical officer', and 'state medical officers' were grouped under the umbrella term of health officers. From the total set of valid responses received, the most responses were from health officers, which accounts for 37.96%. The second largest number of responses were collected from frontline workers, which was 19.78%. With sufficient data available for factorization, a smaller set of data was considered for EFA. Following the sequential and stepwise approach for empirical study, the factors extracted were validated through CFA. Extending the study from factorization, the proposed model was empirically validated to test the hypothesized relationships. Results of Factor Analysis In factor extraction, the factor accounting for maximum common variance is eliminated. To ensure the data sufficiency for the EFA, Kaiser-Meyer-Olkin (KMO) was observed. It was noted that the KMO reported in the analysis was 0.837, which is regarded as meritorious [39]. The process of EFA was conducted with principal component analysis as the extraction method and varimax rotation as the rotation method. 
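A minimal sketch of this EFA workflow (KMO check, principal component extraction, varimax rotation, and the retention rules described in the next paragraph) is given here. It assumes the third-party Python package factor_analyzer; the file name is a placeholder and the call signatures are written from memory, so they should be verified against the package documentation.

```python
# Sketch of the EFA workflow described above (KMO check, principal component
# extraction, varimax rotation). Assumes the third-party `factor_analyzer`
# package; "efa_subsample.csv" is a placeholder for the 110-case item matrix.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

responses = pd.read_csv("efa_subsample.csv")  # 110 EFA cases x 42 Likert items

kmo_per_item, kmo_overall = calculate_kmo(responses)
print("KMO overall:", round(kmo_overall, 3))   # 0.837 was reported in the study

fa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)

# Retention rules used in the study: primary loading >= 0.5 and no cross-loading.
abs_load = loadings.abs()
keep = (abs_load.max(axis=1) >= 0.5) & ((abs_load >= 0.5).sum(axis=1) == 1)
print(loadings[keep].round(3))
```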
To analyze convergent validity, two criteria were followed for retaining items: (1) no cross-loading, and (2) a factor loading greater than 0.5. In other words, an item was deleted if it loaded at 0.5 or above on two or more factors, or if it failed to reach 0.5 on any factor [37]. A cut-off of 0.5 or above was used because it ensures practical significance for sample sizes of 150 and above before proceeding to confirmatory factor analysis [31]. Cross-loading items were removed iteratively to improve the reliability parameters and obtain a clean factor structure. In this iterative process, nine items were removed, resulting in four factors with an eigenvalue greater than one (refer to Table 3). (Table 3 lists the retained items with their rotated loadings; among them, for example, the item on opportunities for people to participate in small, participatory group meetings focused on vaccines (vaccination schedule, benefits, and risks) loaded at 0.679, the item on frequent review of the cost of the COVID-19 vaccine through the supply chain network to service delivery points loaded at 0.657, and further items covered proactive budget planning and funds allocation for different scenarios before the launch of immunization, adequate and regular forecasting of vaccines and ancillary supplies from past immunization data, and the perceived impact of state administration and of the planning aspect on COVID-19 immunization coverage performance in the state.) This study followed the steps proposed by Hair et al. (2021) to evaluate the obtained measurement model in IBM SPSS AMOS 26. First, Cronbach's alpha (α) and composite reliability (CR) were used to assess internal consistency reliability [37]. The values of α and CR ranged between 0.792 and 0.918, above the accepted threshold of 0.7 for all factors, indicating a satisfactory level of internal consistency reliability. Secondly, outer loadings and average variance extracted (AVE) were analyzed to gauge convergent validity [40]. All outer loading values were equal to or greater than 0.7, and the AVE values were higher than 0.5; accordingly, the convergent validity of the factors was ensured. The outer loadings, α, CR, and AVE values are presented in Table 4. The measurement model used for CFA fitted the data well: the analysis reported a Comparative Fit Index (CFI) of 0.962, a Standardized Root Mean Squared Residual (SRMR) of 0.064, and a Root Mean Square Error of Approximation (RMSEA) of 0.037 (Table 5). Lastly, discriminant validity was assessed through the heterotrait-monotrait ratio (HTMT), as given in Table 6; the HTMT requirements were met [41], so the discriminant validity of the factors in the proposed model was strongly supported. Results of PLS-SEM The model proposed in the study comprises three major hypotheses describing the impact of government stewardship, planning and implementation, and community participation on vaccination coverage. The collected data were used to test these hypotheses, and the model was empirically validated with SmartPLS 4.0 software.
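The internal-consistency and convergent-validity statistics quoted above follow standard formulas; the sketch below computes composite reliability and AVE from a set of standardized outer loadings, and Cronbach's alpha from raw item scores. The loading values are placeholders, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array for one construct."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings):
    """Standard CR formula from standardized outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1 - lam ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Placeholder loadings for one construct; the study reports all loadings >= 0.7,
# so CR should exceed 0.7 and AVE should exceed 0.5, as below.
lam = [0.72, 0.78, 0.81, 0.75]
print("CR:", round(composite_reliability(lam), 3), "AVE:", round(ave(lam), 3))
```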
The model mapped and validated in SmartPLS 4.0 is shown in Figure 2, and the results obtained from the structural model are presented in Table 7. All factors significantly influenced COVID-19 vaccination coverage; Table 7 shows that the model is validated and the proposed hypotheses are fully supported. The model also reported an SRMR value of 0.064, indicating good model fit, and the R-squared value for vaccination coverage was 0.294. All three relationships have β values between 0.2 and 0.3, but the factors 'government stewardship' and 'community participation' have a greater impact on vaccination coverage than 'planning and implementation'. The reported β values for H1 and H3 are 0.294 and 0.279, respectively, so the impact of government stewardship is the highest, followed by community participation and then planning and implementation. The β value for the relationship between planning and implementation and vaccination coverage is 0.214, the lowest among the three proposed hypotheses. Discussion The success of mass vaccination programs has been among the major concerns of healthcare practitioners and researchers. Over the last three years, with the emergence of the COVID-19 pandemic, healthcare researchers, practitioners, and governments have paid increasing attention to the effective allocation, utilization, and equitable distribution of vaccines. India's COVID-19 vaccination drive has been described as the fastest in the world. However, the success of the drive was determined by the intensity of planning and implementation of a series of activities, including vaccine allocation, distribution, financial support, technological advancements, collaborations with private players, budget planning, strategizing the role of frontline workers, mobilizing the community, and removing vaccine hesitancy at the national level. In this regard, the current study examined the factors and underlying indicators that influenced vaccination coverage in India. Government stewardship, planning and implementation, and community participation were identified as the three prominent contributing factors to vaccination coverage.
Previous studies have identified the practices of planning and implementing the decisions of the government in mobilizing the community to accept vaccines. This study is incremental in identifying the indicators of vaccination coverage with a micro-lens view and examining their impact on vaccination coverage by formulating a structural framework that can be used in future vaccination programs. This study identified the customized vaccination campaign strategies, streamlining the strategic decisions in the form of SOPs and guidelines, and ensuring availability of trained human resources before vaccination programs started as the prominent indicators of government stewardship. The government encouraged and facilitated streamlined work processes to leverage resource opportunities. Additionally, each state was provided with the flexibility to customize the SOPs and guidelines for vaccination in terms of their regions and population segments. The government also showed their due concern and diligence in bringing in the technological advancements and acquiring the technical staff to implement hassle-free technical operations for vaccination programs. The results identified the integration of the public-private health infrastructure, capacity building of healthcare human resources, proactive budget planning, adequate training to the frontline healthcare workers, and conducting the capacity building sessions before the vaccination program started as the significant contributing factors of planning and implementation. The entire public and private health infrastructure was integrated for testing, tracing, and treatment of COVID-19 patients. Proactive budget planning and funds allocation was performed for different case scenarios even before the launch of the vaccination program. Furthermore, the cold chain storage capacity, to maintain COVID-19 vaccine effectiveness, was planned according to the geographical/topographic/security aspects. The opportunities were planned for people to participate in small group meetings, to be conducted in a participatory way, and focused on the topic of vaccines (vaccination schedule, benefits, and risks) before the vaccination program started in India. This study revealed the significant indicators of community participation. The study identified that the willingness of the community to get vaccinated improved when proper information was provided to them regarding the free distribution of vaccines in their nearby centers. Active involvement of ANM, AWW, ASHA workers, vaccination team members, cold chain handlers, local influencers, prominent religious leaders, village panchayats, forest department, education department, institutions, certain people from NGOs, etc. was noted for community mobilization. The required information was made available and accessible in the public domain from multiple communication channels to dispel misinformation and improve vaccine trust and acceptance, thereby improving vaccination coverage. To curtail the infection rate and transmission of COVID-19, government agencies responded through vaccination programs [42]. Unlike other routine immunization programs, COVID-19 immunization requires swift and immediate actions [43]. The past experiences of India in conducting mass vaccination campaigns provided a sufficient foundation, and therefore, the health workers were able to contribute in a notable way that led to exemplary planning and implementation of the COVID-19 vaccination program [44]. 
These findings align with the studied observations. The study identified the pivotal role of government stewardship on immunization coverage as exhibited in Figure 2, representing the empirical relationships. The prominent indicators under government stewardship that lead to the direct impact on vaccination coverage were the integration of the public-private health sector infrastructure, frequent interaction (dialogue, meetings, presence) between the public and private healthcare sectors, handling logistics and transportation, addressing workforce shortage in many regions with challenging terrains as well as to build trust among the general population regarding the vaccine's efficacy, safety, and affordability. This aspect of social sustainability can be achieved through tailor-made strategies for the local population that provides reassurance to them regarding vaccine safety [45]. Future vaccination initiatives may also consider accounting for using private business partners for logistics. The public-private collaboration is specifically suggested for the difficult-to-reach areas. In the event of an emergency, such as COVID-19, in recent years, the public-private partnership model can also assist in resolving workforce difficulties in other geographical regions. Creating new funding sources and studying private-sector financial methods, such as blended finance and impact investments, are equally crucial. Public funds and corporate social responsibility (CSR) can also be used in rural areas to construct vaccination programs and other primary medical care facilities. Therefore, the need to strengthen public-private partnerships has been identified to improve public healthcare systems and attract private investment in the sector operating in geographically challenging areas. This study revealed the significant impact of community participation on vaccination coverage. The awareness among the people and mobilization initiatives were found to have a significant impact on vaccination coverage. While exploring the events for the impact of vaccine education on its coverage, as proclaimed by experts, the urban and semi-urban areas were found to have less vaccine hesitancy after communicating the information through various social media channels. These informative messages and videos dispelling vaccine hesitancy and creating confidence in populations were possible due to the digital infrastructure in urban areas. However, several rural areas in India, due to their difficult geographic terrains, remain digitally divided; hence, it became very challenging for the vaccination awareness team to improve community participation in rural and difficult-toreach areas. Therefore, the expert opinions suggested creating and verifying community awareness through digital interventions. For this purpose, a dedicated digital infrastructure must be planned and implemented. A hybrid technical system shall be implemented which can provide the facilities of online and offline portals to address the digital divide. Due consideration has to be provided by framing national policies to improve the digital infrastructure in digitally divvied areas. The local, state, and national governments will be required to work closely for removing the digital divide and addressing inequities in future healthcare programs. 
Based upon the empirical findings of this study and SWOT analysis, this study has used the recommendation matrix (Table 8) to help readers to summarize how countries or policy makers can improve performance if similar challenges for vaccination coverage might be faced in the future. Strengths, Limitations, and Future Research Agenda The merit of the online survey approach was that it allowed to quickly acquire information regarding diverse stakeholders' perceptions of government stewardship, planning and implementation, and community participation from four states in India representing India comprehensively across the distinct geographies with common quantitative and qualitative characteristics related to the partially or fully vaccinated population. This study provided vaccination program managers and policymakers with the tools to examine the key variables affecting vaccination coverage. The introduction of the PLS-SEM methodology in the survey makes the findings robust, relaxes assumptions on normal distribution, and provides the ability to estimate more complex models using smaller studies, while also recommending various avenues for future research. This study provides fundamental information regarding government stewardship, planning and implementation, and community participation toward key outcomes and stakeholders involved in the COVID-19 vaccination process. Other aspects, such as innovations and flexibility of the procurement and delivery system of vaccinations must be further explored in future qualitative studies with the help of grounded theory. Future studies can also identify other indicators of public health outcomes and measure the impact of government stewardship, planning and implementation, and community participation on public health outcomes at large and extending to a wider range of geographies in India. An unequal distribution of participants from various states was limited due to the online survey form and participant self-selection. To comprehend the perspectives, thoughts, and opinions of a diverse set of respondents, online surveys should be followed by or supplemented with other research designs, such as in-depth qualitative investigations. Future research should employ more stringent sampling techniques to ensure an equal representation across various respondent profiles as represented in a nationally conducted household survey study [46]. Conclusions The present study's premise was based upon the backdrop of two research objectives. Firstly, this study aimed to empirically examine the potential impact of government stewardship, planning and implementation, and community participation on vaccination coverage. For this purpose, the data were collected from four different states including the northern, eastern, western, and southern regions of India. The study employed structural equation modeling to analyze the hypothesized relationships. The results confirmed the significant impact of government stewardship on vaccination coverage, followed by community participation and planning and implementation. Secondly, the study aimed to delineate strategic recommendations to be utilized by vaccination program managers in the future in emerging and developing countries. Various strategic recommendations have been provided based on the findings of the study that shall be useful for program managers, policymakers, and public health officials in future vaccination programs. 
The identified factors and their relevant relationship with vaccination coverage will not only help the program managers of India but also serve as a successful operational model for vaccination programs in other parts of the world.
2023-04-22T15:26:21.423Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "a089fcf520c0f81df66dc1d7cb6f827543d6f20a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-393X/11/4/868/pdf?version=1681969385", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bf9db8059589816322c07d01a573b7cdb45c3808", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
144012340
pes2o/s2orc
v3-fos-license
Factors Affecting Cooperatives’ Performance In Relation To Strategic Planning and Members’ Participation The cooperative society plays significant roles towards the economic development in Malaysia. It was first established in 1922 to protect the welfare of rural people and has expanded its establishment until today. With the aim to help its members, it is said that cooperative needs to accelerate its performance. However, there are issues regarding its weaknessess, especially in the management, financial and members’ participation. Therefore, this study aims to identify the factors influencing its performance through strategic planning and members’ participation. Questionnaires constructed have been distributed among cooperatives board members in Kota Setar District, Malaysia. The findings provide guidance for the cooperative to improve its shortcomings performance towards realizing the National Cooperative Policy 2011-2020. © 2012 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of JIBES University Jakarta Indonesia. Introduction The cooperative society was first established in Malaysia since 1922 to protect the welfare of rural people and to avoid them from the exploitation. The establishment is not only to improve the wellbeing of its members but also to eradicate the poverty and act as the distribution tools of national wealth. With the aim to help its members, it is said that cooperative needs to accelerate its performance in order to transform the nation to be the high income nation by the year 2020. Cooperatives in Malaysia are regulated under the Commission of Co-operative Malaysia, also known as Suruhanjaya Koperasi Malaysia under the Ministry of Domestic Trade, Co-operatives and Consumerism. Generally, cooperative is one of the entities owned and controlled by the same person using the services. Cooperatives can be defined as "a society registered under the Co-operative Societies Act 1993 with the objective is to promote economic interest among its members in accordance with cooperatives principles" (Suruhanjaya Koperasi Malaysia, 2009). Co-operative Societies Act 1993 stated on the procedures of the registration, rights, privileges and other matters related to the cooperatives. There are 8,300 associations in this country as compared to only 4,000 cooperatives in the year 2008 (Suruhanjaya Koperasi Malaysia, 2011). One of the reasons of this increment is because of the government effort and support to implement programs to help the growth of cooperative movement in this country. Cooperatives in Malaysia are categorised according to the nine categories based on their functions namely banking, credit / financial, agriculture, housing, industry, consumer, construction, transportation and also services (Suruhanjaya Koperasi Malaysia, 2011). In terms of movement, number of cooperative positively increase every year with approximately 11.58 percent increase in a year, membership increase by 2.83 percent, share capital improve by 5.7 percent, assets increase by 15.71 percent and revenue also increase approximately by 15.68 percent a year within five years (2006)(2007)(2008)(2009)(2010). Nevertheless, the movement of the cooperative society in our country is still considered less develop as compared with other countries though the cooperative society plays a significant role towards the economic development in Malaysia. This happens due to lack of active participation among the cooperative societies in doing the business. 
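Treating the growth figures quoted above as compound annual rates over the 2006-2010 window is an assumption (the paper only states "within five years"), but under that reading a short calculation shows the implied cumulative change:

```python
# Illustrative only: compound the annual growth rates quoted above over the
# 2006-2010 window (four year-on-year steps) to see the cumulative change.
annual_growth = {
    "number of cooperatives": 0.1158,
    "membership": 0.0283,
    "share capital": 0.057,
    "assets": 0.1571,
    "revenue": 0.1568,
}
for name, rate in annual_growth.items():
    print(f"{name}: about {((1 + rate) ** 4 - 1) * 100:.0f}% cumulative growth")
```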
In effort to help the cooperatives, government had first introduced the National Cooperative Policy 2002-2010 with the focus to assist the cooperatives to actively participate in the country's development. Therefore, study aims to examine the factors influencing cooperatives performance through strategic planning and members' participation. This study focuses on cooperatives in Alor Setar, Kedah as it is one of the state where there is a high number of registered cooperatives. Literature Review There are several contributing factors that lead to cooperatives' performance. Strategic planning, members' participation, human capital, structural and relational capital are among the identified factors by past literatures. Past and present literatures agree that strategic planning is one of the factors contribute towards firms' performance. Strategic planning is a process of carrying out the firms' mission, vision, objectives and goals of the organisation. Every board of directors must understand the strategic planning they have in their organisation to ensure that their business runs, moves toward achieving their objectives. A survey conducted among the 200 general managers and board presidents of agricultural cooperatives in Minnesota and Wisconsin reported that all Board agreed that cooperative should had well defined missions, objectives and goals but disagreed that their cooperative had a well developed and written strategic planning (University of Wisconsin Centre for Cooperatives, 2000). Past study had found a positive significant influence of the strategic planning on cooperatives' performances. This is supported by a tentative framework developed in study conducted that having a long term plan for cooperative will influence the performance of cooperatives in Malaysia (Sushila, Nurizah, Mohd Shahron, Rafedah and Farahaini, 2009). Moreover, a study conducted among the 250 board of directors of cooperatives in Malaysia also reveals that cooperative that has a strategic plan for at least 3 years significantly contribute towards the success of cooperatives (Sushila, et. al., 2010). On the other hand, Falshaw, Glaister and Tatoglu (2005) in their study of 113 UK companies found that there is no relationship between strategic planning and company performance. The importance of strategic planning in cooperatives cannot be denied as Pathak and Kumar (2008) in a study conducted in Fiji on factors contributing towards cooperatives' successful performance demonstrated the main reason that cooperatives were unsuccessful in Fiji was due to inadequate planning. Therefore, these studies incorporate strategic planning as one of the important factors to determine cooperatives performance. Besides having a good strategic planning, the goals and objectives of the organization can be achieved if there is a contribution from its members. Participation is defined as the involvement or state of participating of the members in the activities in the organization. Member participation in cooperatives activities especially in the cooperative governance is very important for the long run survival of a cooperative. According to report produced by United States Department of Agriculture (2011), active member participation would help the management in carrying out their responsibilities since the members' involvement would maintain the direction of the cooperatives towards enhancing the cooperatives' performance. 
A study conducted among cooperatives in Malaysia identified two main elements that reflect members' participation: first, participation in the policy-making process through attendance at the annual general meeting and, second, patronage of the cooperative products and services offered by their association. Amini and Ramezani (2008), investigating the success factors of poultry growers' cooperatives in Iran, identified active member participation in the administration of cooperatives as a key factor influencing successful performance. On the other hand, evidence from French cooperatives points out that the productivity effect of participation is typically small, around 5% of total cooperative output. In addition, Dato Seri Najib Tun Razak, in his message in the National Cooperative Policy (2011-2020), agrees that members' active participation and loyalty will determine the success of cooperative societies, as members are also the consumers, employees and leaders of the cooperatives. Members' participation was therefore incorporated as a variable in this study. Cooperatives' performance was measured based on profit and sales growth, after-tax return on assets and after-tax return on sales, except for school cooperatives, whose performance was measured by profit and sales growth only. This measurement approach is supported by the research report on measuring cooperatives' performance produced by the United States Department of Agriculture (USDA, 2006). Method This research used a quantitative method to examine the factors affecting cooperatives' performance in relation to their strategic planning and members' participation. A survey using self-administered questionnaires was developed and distributed to 50 cooperative board members. The data collected from the questionnaires were analyzed using the Statistical Package for the Social Sciences (SPSS) version 16.0. Descriptive statistics were used to describe the types of cooperatives in Malaysia and the demographic variables, while Pearson's correlation coefficient was used to determine whether there is a significant relationship between the independent variables and the dependent variable. The hypotheses were developed to test whether there is a relationship between strategic planning and members' participation and the performance of the cooperatives. Results and discussion In order to identify the relationship of strategic planning and members' participation with cooperatives' performance, the findings first outline the types of cooperatives and the characteristics of cooperative board members based on frequency analysis, followed by a discussion of the factors affecting cooperatives' performance in relation to strategic planning and members' participation. Table 4.1 shows the types of cooperatives in Kedah, Malaysia (50 cooperatives in total). Most of the cooperatives (24 cooperatives, 48%) are services cooperatives, followed by 9 credit cooperatives. In Kedah, the majority of cooperatives are services cooperatives because they run businesses and provide services such as spas, mini markets, and taxi and bus services.
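The frequency analysis of cooperative types reported in Table 4.1 can be sketched as below; only the Services (24) and Credit (9) counts are taken from the text, and the remaining categories and counts are invented placeholders that merely bring the total to 50.

```python
import pandas as pd

# Hypothetical coding of the 50 sampled cooperatives by function, mirroring the
# descriptive analysis in Table 4.1 (counts other than Services and Credit are
# placeholders for illustration only).
coops = pd.Series(
    ["Services"] * 24 + ["Credit"] * 9 + ["Consumer"] * 7
    + ["Agriculture"] * 5 + ["Housing"] * 5,
    name="type",
)
summary = coops.value_counts().to_frame("count")
summary["percent"] = (100 * summary["count"] / len(coops)).round(1)
print(summary)
```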
In addition, school cooperatives are among the cooperatives identified in this study; their main function is to provide services to the students and staff who are their members. Twenty cooperatives (40%) in Kedah were established from 1970 onwards, while another 15 cooperatives were established from 1990 onwards. In terms of the characteristics of the board members, the study had a total of 50 respondents, of whom the majority are male (31 respondents, 64.6%) and the rest female (17 board members, 35.4%). The majority of board members, 36 respondents (74.7%), are older than 36 years, as most of them are pensioners and experienced teachers. Based on observation, the researchers found that cooperative board members in Kedah, Malaysia, are largely retired people, because younger generations were not interested in serving on the board or even in joining as ordinary members. Most of the respondents are married (48 respondents, 96%) and the majority are Malay (49 board members, 98%). Only one board member is Chinese, because cooperatives in Malaysia consist mainly of Malay members with only a few Chinese members, which might be due to a lack of awareness among them about joining cooperatives. In terms of educational level, 19 board members hold a degree as their highest qualification; they are mainly school teachers in the Kedah area. The analysis also showed that 42 of the board members have attended the course known as ML100, which is compulsory for all board members and is mostly conducted by the Malaysian Cooperatives Commission. The remaining 8 board members did not attend the course because they received no information from 'Maktab Kerjasama Malaysia (MKM)' or were busy with their work. Table 4.2: The relationship between strategic planning and cooperatives' performance. The correlation results indicate a weak positive relationship between strategic planning and cooperatives' performance measured by profit growth, with a Pearson correlation value of r = 0.253. Strategic planning is important for cooperatives in ensuring the continuity of their business; every cooperative should have at least a short-term plan, and top management must be responsible for the cooperative's strategic planning. This is supported by a study among 250 board members of cooperatives in Malaysia, which revealed that having a strategic plan covering at least 3 years contributes significantly to the success of cooperatives (Sushila, et al., 2010). The data support the previous study; however, strategic planning is not the main factor that directly affects overall performance, because most of the cooperatives in Kedah did not properly develop their strategic planning, some of them prepared only short-term rather than long-term plans, and most school cooperatives did not have their own strategic planning at all. On the other hand, Falshaw, Glaister and Tatoglu (2005), in their study of 113 UK companies, found no relationship between strategic planning and company performance.
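As an illustrative check of how correlations of this size are usually assessed, the sketch below converts the reported Pearson coefficients (r = 0.253 above, and r = 0.236 in the next section) into two-sided p-values under the assumption that all 50 board-member responses entered each correlation; the paper does not report p-values or the exact n used in each test, so these figures are indicative only.

```python
from math import sqrt
from scipy import stats

def pearson_r_pvalue(r, n):
    """Two-sided p-value for a Pearson correlation r computed from n paired observations."""
    t = r * sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

# Reported correlations; n = 50 is assumed (the full board-member sample).
for label, r in [("strategic planning vs. profit growth", 0.253),
                 ("members' participation vs. profit growth", 0.236)]:
    print(f"{label}: r = {r}, p = {pearson_r_pvalue(r, 50):.3f}")
```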
Table 4.3: The relationship between members' participation and cooperatives' performance. The correlation results indicate a weak positive relationship between members' participation and cooperatives' performance measured by profit growth, with a Pearson correlation value of r = 0.236. A study conducted among cooperatives in Malaysia identified two main elements that reflect members' participation: one is participation in the policy-making process through attendance at the annual general meeting, and the other is patronage of the products and services offered by their cooperative. The present study shows that, even though member participation is important for the cooperative movement and board members agree that members' opinions expressed at the annual meeting may contribute towards performance, participation remains low, as some members view the cooperative as less important than their other business interests. They attend the annual meeting but are not actively involved in the administration of the cooperatives, which results in the weak relationship between the variables. Conclusion and Recommendations Overall, this study shows that cooperatives' strategic planning and members' participation are factors that contribute to their overall success and performance. The results confirmed the hypotheses developed, showing that there is a relationship between the variables involved. Nevertheless, these two factors cannot be considered the major factors affecting the cooperatives' performance, as the results indicated only weak positive relationships. Even though most of the cooperatives developed strategic plans to guide their business, it is evident that strategic planning does not significantly affect the direction of these cooperatives. Several recommendations are made to guide the cooperatives in further improving their performance. First, cooperatives may develop strategic plans that strengthen their activities. In addition, it is strongly suggested that these cooperatives define their own mission and vision and focus on long-term planning that reflects them. In terms of members, cooperatives need to communicate with them regularly regarding updated information and activities, and to increase their involvement in decision making, so that this third economic sector can support the government's effort to become a high-income country. This study revealed that it is important for cooperatives to have adequate planning and to encourage members' participation in their administration. When their performance improves, it can indirectly boost the economy of the country as well as promote job creation as a strategy for reducing poverty. Since cooperatives are established to help their members, it is vital to ensure their success. These results may therefore also benefit other countries seeking to improve cooperative performance by focusing on these factors, which can help determine their success and indirectly improve the standard of living of their societies.
However, given the limitations of this study, such as its limited scope, the method used, and the variables involved, it is suggested that future research employ more variables in determining the factors influencing cooperatives' performance, for example aspects of corporate governance, financial support, and marketing strategies, and that a combination of quantitative and qualitative methods be used.
2019-05-04T13:06:46.519Z
2012-12-03T00:00:00.000
{ "year": 2012, "sha1": "5382894e21bf880ba40685132b3b3c395c4cdc5f", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.sbspro.2012.11.098", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "766c6d53afcb5ff8e10654ef5411db72a30b4f42", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
2563063
pes2o/s2orc
v3-fos-license
The UBA-UIM Domains of the USP25 Regulate the Enzyme Ubiquitination State and Modulate Substrate Recognition USP25m is the muscle isoform of the deubiquitinating (DUB) enzyme USP25. Similarly to most DUBs, data on USP25 regulation and substrate recognition is scarce. In silico analysis predicted three ubiquitin binding domains (UBDs) at the N-terminus: one ubiquitin-associated domain (UBA) and two ubiquitin-interacting motifs (UIMs), whereas no clear structural homology at the extended C-terminal region outside the catalytic domains were detected. In order to asses the contribution of the UBDs and the C-terminus to the regulation of USP25m catalytic activity, ubiquitination state and substrate interaction, serial and combinatorial deletions were generated. Our results showed that USP25m catalytic activity did not strictly depend on the UBDs, but required a coiled-coil stretch between amino acids 679 to 769. USP25 oligomerized but this interaction did not require either the UBDs or the C-terminus. Besides, USP25 was monoubiquitinated and able to autodeubiquitinate in a possible loop of autoregulation. UBDs favored the monoubiquitination of USP25m at the preferential site lysine 99 (K99). This residue had been previously shown to be a target for SUMO and this modification inhibited USP25 activity. We showed that mutation of K99 clearly diminished USP25-dependent rescue of the specific substrate MyBPC1 from proteasome degradation, thereby supporting a new mechanistic model, in which USP25m is regulated through alternative conjugation of ubiquitin (activating) or SUMO (inhibiting) to the same lysine residue (K99), which may promote the interaction with distinct intramolecular regulatory domains. Introduction Ubiquitin (Ub) modifies protein architecture when covalently attached to its substrates. Besides being the main tag for sending misfolded proteins to the proteasome, Ub also plays a relevant role in protein-protein interaction and modulation of catalytic activity or protein fate [1][2][3]. The intrincate Ub-signalling networks require a tight regulation of both conjugation and deconjugation processes, and the final fate of the modified protein depends on several factors, including the ubiquitin chain length and the configuration of Ub-Ub linkages within the poly-Ub chain [4,5]. In particular, monoubiquitination is not related to proteasome targeting but to modification of enzymatic activity and subcellular localization [6,7]. On the other hand, ubiquitin-like molecules (Ubls), such as SUMO, are also covalently bound to their substrates, and thus are conjugated, deconjugated and recognized by specific enzymes and their targets [8,9]. Although many studies have investigated the activation of Ub and its transfer to substrates [10], the biochemical mechanisms downstream of ubiquitination are not completely understood. It is known that the subsequent events are mediated by ubiquitin receptors, which interact with monoubiquitin and/or polyubiquitin chains through small (20-150 amino acids) Ub-binding domains (UBDs) [11,12]. At least fifteen classes of UBDs have been annotated [13] and this profusion of motifs has launched the study of Ub signalling by: i) providing clues on the roles and modes of action of ubiquitinated substrates, and ii) showing that UBDcontaining proteins interact either with Ub or with a ubiquitinated protein. UBD-Ub interactions are usually weak and generate a dynamic protein network that is rapidly assembled and disassembled, thus hindering their study. 
Moreover, UBDs can modulate the activity of the host protein, as intramolecular interactions between a UBD and a Ub moiety covalently attached to another region of the same protein lead to structural changes that alter the enzymatic activity [11,12]. UBDs are found not only in proteins that interact with ubiquitinated substrates, but also in ubiquitinating or deubiquitinating enzymes. The deubiquitinating enzymes (DUBs) hydrolyze the Ub moieties conjugated to substrates and thus, process newly synthesized Ub, recycle Ub, or edit polyUb chains [14,15]. Ubiquitination, like phosphorylation, is reversible [16] and, therefore, DUBs can affect the stability and fate of Ub-conjugated proteins, and also allow a tight control of Ub-induced switches. It is assumed that the presence of UBDs in DUBs favor the specific recognition of the ubiquitin modifications, whereas the N-and Cterminal long extensions flanking the DUB-conserved catalytic core may be involved in substrate recognition irrespective of their ubiquitination state. Data on the substrate specificity and physiological function of most DUBs, including USP25, are still scanty. USP25 encodes three different protein isoforms produced by alternative splicing: two of them are expressed ubiquitously, while the longest (USP25m) is restricted to muscle tissues [17] and is upregulated during myogenesis. Among several sarcomeric substrates, USP25m was reported to specifically interact and rescue MyBPC1 (Myosin Binding Protein C1) from proteasome degradation, thereby raising its cellular half-life [18]. We aimed to identify structural domains relevant for USP25m regulation. By in silico analysis we identified three potential UBD signatures in the N-terminal region of USP25m. Here, we characterized USP25 by assessing the contribution of these UBDs, as well as the long C-terminal region of USP25, to the catalytic activity. Our results showed that USP25m was monoubiquitinated in cultured cells, and that the UBDs modulated this modification. The preferential site for monoubiquitination is lysine 99 (K99), a residue that has been recently reported to be also the target of sumoylation [19,20]. According to our results, mutation of the K99 residue diminishes the rescue of the specific substrate MyBPC1 from proteasome degradation. In view of these results and those of other authors [19], we propose a novel mechanistic model for USP25m regulation in which the same lysine residue can be either ubiquitinated or sumoylated, and these mutually exclusive modifications have opposite effects on the enzyme activity. This regulatory model bridges the Ub and SUMO pathways and may be extrapolated to other ubiquitin-specific proteases. Results Mutation of the Cys178 catalytic residue abrogates USP25m deubiquitinating activity USP25m sequence (1125 aa) alignments revealed five highly conserved distinct motifs (I to V), embedded in two domains (USP1 and USP2) characteristic of the ubiquitin-specific protease family (UBPs, USPs in humans) [17]. The conserved catalytic triad (Cys, Asp and His), in which Cys-178 was the presumed key residue for DUB activity, was located in the motifs I, II and IV, respectively ( Figure 1A). Evidence of Cys-178 direct role in USP25m DUB activity was obtained by site-directed mutagenesis to Ser (C178S mutant). A deubiquitinating activity assay for USP25m was used to verify this hypothesis. USP25m and Ub-b-Galactosidase cotransformation in BL21 cells rapidly induced the proteolysis of the fusion between Ub and b-Gal ( Figure 1B). 
This proteolytic activity was not observed with the C178S mutant, thus showing that Cys-178 is essential for the deubiquitinating activity of USP25m.
Figure 1. USP25 domain dissection and their contribution to the catalytic activity. A. Sequence homologies revealed five highly conserved USP motifs (I to V) in two domains (USP1 and USP2) that catalogue USP25m as a deubiquitinating enzyme. Cys-178 is the putative active site of the enzyme, since it is conserved in all analyzed members of the family. B. Deubiquitinating activity assays in E. coli cells co-transformed with the recombinant substrate Ub-b-galactosidase and either wild type (WT) USP25m or the C178S mutant confirmed that Cys-178 is the active site of USP25m. b-gal immunodetection shows a lower band using WT USP25m, indicating hydrolysis of Ub from b-gal, while the mutated form is catalytically inactive and displays the band of the uncleaved Ub-b-gal fusion. Note that the endogenous b-galactosidase is of lower molecular weight. doi:10.1371/journal.pone.0005571.g001
The deubiquitinating activity of USP25m depends on the presence of a long coiled-coil stretch, but does not require the N-terminus Ubiquitin Binding Domains In silico homology searches across several motif databases revealed three Ubiquitin Binding Domains at the N-terminus of USP25m, one UBA and two UIM signatures (Figure 2A). These domains are known to interact with ubiquitinated proteins, although they seem not to be required for catalytic activity. To assess whether the UBA and UIM domains contribute to USP25m deubiquitinating activity, we co-expressed GST epitope-tagged deletion mutants of USP25m, which lacked one or several of the UBDs (Figure 2B), with the recombinant substrate Ub-b-Gal in E. coli.
Figure 2 (legend, continued). The constructs bearing serial deletions of USP25m at the C-terminus are also shown (E679X, E769X, Q863X, E1020X). C. Deubiquitinating activity assays indicated that UBDs were not required to cleave off ubiquitin (left upper panel). The mutant USP25mE679X was unable to hydrolyze Ub from the Ub-b-gal substrate, indicating that the region between amino acids 679 and 769 was required for enzymatic activity (right upper panel). The empty GST vector and the full-length USP25m were respectively used as negative and positive controls. The expression level of each USP25m mutant was comparable (lower panels). doi:10.1371/journal.pone.0005571.g002
Under these conditions, the deubiquitinating activity assay clearly showed that deletion of the UIM1, UIM2 and UBA domains, alone or in combination, neither abolished nor diminished the USP25m DUB activity compared to the wild-type enzyme (Figure 2C, left panel). USP enzymes are usually proteins of high molecular weight, which extend at the N- and/or the C-terminus of the USP catalytic domains. These extensions have been proposed to be involved in substrate recognition, regulation of the catalytic activity or subcellular localization. USP25 extends more than 450 amino acids at the C-terminus, including the muscle-specific peptide (introduced by the alternatively spliced exons 19a and 19b, see Figure S1). We had previously shown that this tissue-specific peptide (70 amino acids) was required for recognition and rescue from proteasome degradation of sarcomeric substrates [18], but apart from this experimental evidence, the function of this long C-terminus remained unassigned. We decided to perform serial deletions by introducing STOP codons by site-directed mutagenesis at positions E679X, E769X, Q863X and E1020X.
As in silico searches did not find any functional motif or obvious homology in this region, the positions for the STOP codons were chosen by avoiding to impair secondary structures such as alpha helices or coiled-coils ( Figure 2B). In contrast with the results obtained with the UBD mutants, the analysis of the serial truncated proteins at the C-terminus of the USP25m protein clearly showed that mutant E679X was incapable of cleaving off the ubiquitin moiety of the Ub-b-gal protein, whereas mutants E769X, Q863X and E1020X still retained the enzymatic activity ( Fig. 2C right panel). Thus, even though the catalytic USP domains relevant for DUBs were present in E679X ( Figure S1), the deletion of 90 amino acids between E679 and E769 completely abrogated the deubiquitinating activity of USP25. It is worth noting that in silico predictions showed a long coiled-coil domain in this region. As UBDs have also been involved in shifts in subcellular localization, we asssessed whether the wild-type USP25m and UBD-deleted constructs, either in their catalytically active or inactive forms, showed different localizations. No change in the distribution pattern was observed in any condition, indicating that the UBA and UIM domains were not required for targeting USP25 to its localization ( Figure S2). We also monitored Ub distribution on the same cells and ruled out a possible effect on the accumulation of ubiquitinated proteins ( Figure S2), as described for other USPs [21]. Nor did the USP25 C-terminal truncated mutants show any shift in their subcellular localization, as they all remained cytosolic in transient transfections on cultured cells (data not shown). USP25m forms complexes by dimerization/ oligomerization The dynamic nature of the Ub-pathway requires the formation of complexes in which enzymes and cofactors are transiently recruited, not only E2 and E3 ligases but also DUBs [22][23][24]. We explored whether USP25m was able to dimerize/oligomerize. To this end, we used two tags, c-Myc and GFP, fused to the wild-type USP25m protein and each of the deletion USP25m mutants, respectively. Co-immunoprecipitation assays showed the interaction between the cMyc-and GFP-tagged USP25 proteins, indicating that USP25 formed homodimeric or oligomeric complexes in vivo ( Figure 3A). The catalytically inactive enzyme, as well as all the UBD deletion mutants, also dimerized (or oligomerized) ( Figure 3A and 3B). Similar results were obtained when assaying the C-terminal mutants ( Figure 3C). A double mutant USP25mD153-E679X (in which the first 153 amino acids have been deleted, and the protein is truncated at amino acid 679) could also oligomerize ( Figure 3C). Therefore, neither the UBDs nor the C-terminus of USP25m were required for this interaction. Taken together, these results suggested that the region between amino acids 153 to 679, which contained the USP domains and was not deleted in any construct, was relevant for dimerization/ oligomerization. Native gel electrophoresis followed by western blot immunodetection confirmed that USP25 was included in high molecular weight complexes (.250 kDa, data not shown). As non-denaturing conditions were used to detect protein complexes, the dimerization (oligomerization) of USP25 could either be direct or require some other substrate/partners. 
USP25m was ubiquitinated and autodeubiquitinated, and the target residue for ubiquitination in vivo is K99 Many E3 ligases and some DUBs undergo post-translational modifications, such as ubiquitination or sumoylation, which modulate the recognition of their substrates [25]. It was of particular interest to determine whether USP25m was modified by ubiquitin, given the deubiquitinating activity of the enzyme and the fact that both mono- and poly-ubiquitination have been widely reported to regulate enzyme function. We investigated the USP25m ubiquitination status in HEK293T cells transiently cotransfected with His(6x)-Ubiquitin and Myc-tagged USP25m constructs. Immunodetection of USP25m showed an additional band of higher molecular weight (Figure 4A), accounting for around 25% of the total USP25m. This band was much weaker in lysates of cells that did not over-express the Ub construct, indicating that only a fraction of USP25m was ubiquitinated in vivo under our experimental conditions. Unexpectedly, the expression of the catalytically inactive form of USP25m produced a much stronger high molecular-weight band (Figure 4A, lanes 3-4), which amounted to 60% of total USP25m when co-transfected with the Ub construct (Figure 4A, histogram). The fact that the proportion of modified enzyme was increased in the catalytically inactive C178S mutant strongly indicated that the wild-type enzyme is able to autodeubiquitinate. Taken together, these results indicate that USP25m is mono-ubiquitinated in vivo, and that the enzyme may revert this modification by autodeubiquitination. To examine the possible involvement of UIMs in USP25m ubiquitination [26,27], we assayed the ubiquitination state of the UBD deletion constructs. Deletion of any of the UIM and UBA domains, or of their combinations, at least partially prevented USP25m mono-ubiquitination (Figure 4B, upper panel). The mono-ubiquitinated bands were more apparent if the catalytically inactive forms of the deletion constructs were used (Figure 4B, lower panel). In all deletions and constructs the proportion of modified protein was clearly lower than that of the full-length USP25m. To further study USP25m ubiquitination, we performed a Ni2+ pull-down assay in cells co-expressing the different USP25 mutants together with His-tagged Ubiquitin. We recovered ubiquitinated USP25m proteins in all UBD deletion mutants (Figure 4C). We also tested the C-terminus-deleted USP25m forms; all of them showed intense high molecular-weight bands indicative of mono- and possibly multi- or poly-ubiquitination. Noticeably, when testing the mutant that lacked the coiled-coil region (E679X), most of the protein was Ub-modified. This result supported the proposed autodeubiquitination activity, as this mutant was catalytically inactive. To discern whether this multiple band pattern was caused by poly-ubiquitination tagging the deleted USP25m proteins for proteasome degradation, we performed an assay of protein stability with the proteasome inhibitor MG132 (Figure 4D).
Figure 3. A. Co-immunoprecipitation assays after co-expressing two differently tagged forms (cMyc- or GFP-) of either the wild-type USP25m or the C178S mutant showed that USP25 dimerized in vivo (upper panel). The catalytic Cys was not required for interaction. The empty GFP vector was used as a negative control. B. The same co-immunoprecipitation experiment co-expressing the cMyc-USP25m with each of the UBD deletion mutants fused to GFP showed that none of the UBDs was critical for dimerization or formation of the complex. Last lanes of the panels correspond to the co-immunoprecipitation of the two mutants bearing the deletion of the 3 UBD domains (D19-141, inclusive). Single transfection with the GFP-USP25m construct was used as a negative control. C. The same assays using the constructs with serial deletions of the C-terminal region of the enzyme showed that the C-terminus was not required for dimerization of USP25m. The last lane of the panels at the left corresponds to the cotransfection with two differently tagged E679X mutants. Single transfection with the GFP-USP25m construct was used as a negative control (first lane). The separated panels at the right correspond to the co-immunoprecipitation of the double mutant USP25m bearing the deletion of the first 153 amino acids and truncated at residue 679 (USP25mD153-E679X) with the wild-type USP25m, and their positive control. doi:10.1371/journal.pone.0005571.g003
Full-length USP25m as well as the UBD deletion mutants remained stable after 16 hours of treatment, supporting mono- and multi-Ub modification. In contrast, the protein levels of the C-terminal deletion mutants were clearly increased when the proteasome was inhibited, indicating poly-ubiquitination (Figure 4D); therefore, the most C-terminal region is required for USP25 stabilization. To identify the lysine residue involved in the mono-ubiquitination, we co-transfected cells with the mutant USP25mC178S together with His(6x)Ub, enriched the lysate in USP25m forms by immunoprecipitation with an anti-cMyc antibody, and analysed the obtained bands by LC-ESI-QTOF mass spectrometry. One Ub-modified peptide appeared consistently, indicating that K99 was the most likely acceptor site (Table 1). This lysine is located at the beginning of UIM1 and, most interestingly, had been previously reported to be the main acceptor for USP25 sumoylation, suggesting a dual regulatory role for this residue. Given that deletion of the UIMs, although clearly diminishing USP25 ubiquitination, did not completely preclude it, other less preferential sites might become alternative acceptor sites for ubiquitination. Taken together, our results strongly suggest that: i) USP25m was ubiquitinated and underwent autodeubiquitination, ii) the UIM1, UIM2 and UBA domains promoted, but were not strictly required for, monoubiquitination, iii) the C-terminal region is relevant for protein stability and, when deleted, USP25m is polyubiquitinated and targeted for proteasome degradation, and iv) the preferential target lysine for ubiquitination is K99. Given that USP25 was also reported to be a target for SUMO [19], we assayed other potential USP25 post-translational modifications. The in vitro assay showed that indeed SUMO-1 and SUMO-2 were conjugated to USP25m. In addition, our results in cultured cells revealed that USP25m was phosphorylated (on Tyr and Ser/Thr residues) and acetylated, and that these modifications were independent of USP25m catalytic activity, as the wild-type protein and the inactive mutant were similarly modified (Figure S3). Further work is needed to assess whether these modifications affect the catalytic activity of USP25. UBDs modulate USP25m substrate recognition Although the targets of most DUBs are unknown, USP25 is a DUB that specifically recognizes and binds its substrates under physiological conditions. We previously reported that the muscle-specific isoform of USP25 interacted with MyBPC1, and that the DUB activity of USP25m rescued this substrate from proteasome degradation. This recognition was highly specific and depended on the peptide encoded by the muscle-specific exons 19a and 19b, as the ubiquitous USP25 isoform was unable to rescue this substrate [18].
Given the reported relevance of UBDs in the regulation of protein folding and modular domain interactions, we were prompted to test the effect of the absence of UBA and/or UIM domains of USP25m in the rescue of MyBPC1 from proteasome degradation. As a positive control, the expression of the wild-type USP25m rescued MyBPC1 to the levels attained with the MG132 proteasome inhibitor ( Figure 5A). Interestingly, all the UBD mutants recognized and rescued MyBPC1 from proteasome degradation, although with varying efficiency (compare lane 1 with lanes 5 to 10 in Figure 5A). The activity of the mutants in rescuing MyBPC1 was compared, considering the rescue by the wild-type USP25m as the reference ( Figure 5B). Single deletion of the UIM2 did not significantly affect the recognition and rescue of MyBPC1, whereas the deletion of UIM1 (2-fold) or the UBA domains (3-fold) considerably increased the levels of MyBPC1. The double deletion of the UIM1UIM2 decreased the rescue of the substrate. Interestingly, the deletion of the three UBD domains, increased the rescue of MyBPC1 up to 8-fold, indicating that UBDs are not strictly required for MyBPC1 recognition and rescue, but rather they are involved in the enzyme catalytic regulation and/or access to the substrate, probably as a response to cellular requirements. Of note, this deletion not only included the UBDs, but also the SIM domain (SUMO-interacting motif) and the preferential ubiquitin/SUMO target, the residue K99. Mutation of K99 inhibits USP25m activity As aforementioned, previous reports showed that sumoylation of USP25 occurred at K99, and this modification inhibited USP25m deubiquitinating activity on tetraubiquitin chains [19]. We have also showed in this work that this residue was also the preferential target for ubiquitination. Given that the two modifications are mutually exclusive, we hypothesized that ubiquitination in K99 would cause the opposite effect, activating USP25. Mutation of K99 to arginine would eliminate the preferential sites for both, sumoylation and ubiquitination, and we could then explore the effect of this mutation in USP25m activity directly on MyBPC1, a physiological substrate. As controls, we used both the wild-type enzyme and the catalytically inactive mutant C178S. As expected, the USP25mC178S was not able to rescue MyBPC1 in a time-course experiment, whereas the expression of the wild-type USP25m raised the half-life of MyBPC1, as its levels were steadily maintained through time when protein synthesis was inhibited ( Figure 5C and [18]). Remarkably, USP25mK99R was not able to rescue MyBCP1 from proteasome degradation, as the MyBPC1 levels steadily declined. After 16 h treatment with cicloheximide, the MyBPC1 levels were already decreased to 50%, and after 24 h, the levels of MyBPC1 were much lower than those obtained by the wild-type enzyme, although higher than those obtained by the catalytically inactive mutant (statistical significance p,0.05, Mann-Whitney test) ( Figure 5D). If the K99 mutation merely prevented sumoylation, and sumoylation inhibited USP25, then we should have expected an increase of DUB activity for the USP25mK99R mutant. However, the fact that the K99R mutant was less effective in rescuing its substrate indicated that the alternative modification of this lysine, namely ubiquitination, resulted in USP25m activation. Discussion As DUBs are the least known members of the UPS, we studied the physiological function of USP25 by domain dissection. 
We particularly focused on the three predicted UBDs, as these motifs are usually clustered in the same protein and confer subtle differences in the interaction with ubiquitinated substrates. By generating serial and combinatorial deletions, we assessed USP25 protease activity on a recombinant substrate and showed that all UBD deletion mutants were catalytically active. We concluded that these domains were not strictly required for ubiquitin recognition or for the deubiquitinating activity.
Figure 4. USP25m is ubiquitinated and autodeubiquitinated. A. Immunodetection of cell lysates expressing Myc-tagged USP25m showed one additional high-molecular-weight band. This band was stronger when co-expressing His(6x)-Ub, suggesting that it corresponded to mono-ubiquitinated USP25m. The high-molecular-weight bands were stronger when co-expressing His(6x)-Ub and the catalytically inactive mutant USP25mC178S. The lower histogram shows the percentage of non-modified versus mono-Ub-conjugated USP25m. B. The same experiment was performed co-expressing His(6x)-Ub with all the UBD USP25m deletion mutants, in combination or not with the C178S mutation. Again, the ubiquitinated band was much more visible in the C178S version of the mutants. C. Ni2+ pull-down assays to purify His(6x)-Ub-conjugated proteins confirmed that USP25m was ubiquitinated. All the mutant constructs were tested, confirming that mono-ubiquitination (and multi- or poly-ubiquitination) did not depend on the UBDs, nor on the presence of the C-terminus. The ratio output/input is 4. (Output samples were eluted at pH 4.5, which could account for the slight variation in the apparent protein molecular weight compared to inputs.) D. Protein stability of the USP25m full-length and mutant constructs. Cells were grown in standard conditions (−) or treated with MG132 (+). Immunodetection of α-tubulin was used as a loading control. doi:10.1371/journal.pone.0005571.g004
Figure 5. UBDs modulate substrate recognition by USP25m and K99 is the key regulatory residue. A. MyBPC1 is differentially rescued from proteasome degradation depending on the presence of the distinct UBDs. Transfection of MyBPC1 with the empty GFP vector was used as the negative control, and addition of MG132 was used as a positive control. B. Relative quantification of the MyBPC1 rescue by different USP25m mutants. α-tubulin was used for normalization of protein concentration (data not shown) and USP25m expression levels were used to normalize for transfection efficiency. The rescue achieved by the wild-type USP25m was considered as the reference (value of one). At least three different replicates were used for quantification. Asterisks indicate statistical significance (p < 0.05, Mann-Whitney test). C. The catalytically inactive C178S and the K99R mutants behaved similarly and were unable to rescue MyBPC1 from proteasome degradation in a time-course experiment when new protein synthesis was inhibited. The rescue achieved by expression of the wild-type USP25m was used as a control. D. The MyBPC1 levels (normalized by α-tubulin expression) were quantified and expressed relative to those observed at time 0 h (30 h post-transfection, before cycloheximide treatment), which were considered 100%. The values correspond to a minimum of three different replicates in several independent experiments. Asterisks indicate statistical significance (p < 0.05, Mann-Whitney test). CHX – cycloheximide. doi:10.1371/journal.pone.0005571.g005
Increasing evidence supports that ubiquitin-pathway enzymes (E2-E3 ligases and, more recently, DUBs) form cooperative complexes [22][23][24]. Our results indicate that USP25 was able to dimerize/oligomerize. Although most cysteine proteases have not been reported to require oligomerization for catalysis, crystallographic data showed homodimerization for another USP, USP8 [28], providing further grounds for USP25 dimerization. This interaction could occur before or upon substrate binding, and thus provide a means of regulation. In this context, a plausible explanation for the formation of dimers would be USP25 intermolecular autodeubiquitination (see the model below). In addition, dimers/oligomers could facilitate the progressive deubiquitination of a multi- or poly-ubiquitinated substrate, or alternatively, alter the interfaces displayed for substrate recognition. One of the reported functions of UBA and UIM sequences is the promotion of ubiquitination of the protein in which they are embedded, thus facilitating autoregulation [29]. In the ubiquitin pathway enzymes, feedback self-regulation loops become more complex, as E3 ligases promote their autoubiquitination and DUBs their autodeubiquitination, under certain physiological stimuli [30,31]. Evidence for mono- or multi-ubiquitination of USP25m was gathered as a faint high-molecular-weight band (around 8 kDa larger than that of USP25m) after co-expression of USP25m and ubiquitin. Notably, this band was significantly enriched in lysates of the catalytically inactive USP25mC178S, further suggesting both that it corresponded to mono-ubiquitinated forms and that USP25 catalyzed its own deubiquitination. Mass spectrometry of enriched USP25mC178S samples indicated that USP25 was conjugated to ubiquitin, and that K99 (located in UIM1) was the main target residue. Deletion of the UBDs reduced considerably, but did not abrogate, ubiquitination of USP25m, as in all cases ubiquitinated forms were recovered. Thus, the USP25 UBDs, in particular UIM1, enhanced the ubiquitination state of the protein by either providing the preferred lysine residue, directly recruiting E2 or E3 ligases, or both. In the deletion mutants, ubiquitination might take place on alternative lysines with less efficiency. In fact, the use of preferential and alternative lysine residues for mono-ubiquitin conjugation had been previously reported [32]. Concerning the ubiquitination state and fate of the wild-type protein and the UBD mutants, we surmised that it corresponded mainly to mono- and multi-ubiquitinated forms, not related to protein degradation, as they were stable through time under our conditions. In contrast, the modification of the C-terminal mutants was compatible with poly-ubiquitination, as their protein levels were increased when the proteasome was inhibited, pointing to the relevance of the last 106 amino acids in USP25m stability.
Polyubiquitination did not appear to be related to the catalytic activity of USP25m as: i) truncated mutants E1020X, Q863X and E769X were enzymatically active but degraded by the proteasome, and ii) of the two catalytically inactive E679X and C178S, the former was polyubiquitinated and degraded, whereas the latter was monoubiquitinated and this modification was not related to degradation. Therefore, autodeubiquitination does not seem to be required for USP25m stability. Finally, we assessed the contribution of the UBD deletion mutants to the recognition of the USP25m specific substrate MyBPC1, considering that the requirements for the interaction with a specific physiological substrate might be different from those of a synthetic polyubiquitin substrate. None of the UBDs was critical for enzyme-substrate interaction, as all the mutants rescued the substrate from proteasome degradation. However, the effects were distinct depending on the domains deleted or preserved. The analysis of the contribution of the single and combined domains suggest that the UBA domain negatively modulated the USP25 function mainly by interaction with the UIM1 domain. The effect of the two UIM domains on the substrate rescue appeared to fit an additive/synergical mode of action. Deletion of the three UBDs would effectively remove all these regulatory domains, including those involved in SUMO modification and the target K99. Given that the overexpression of this UBD-deleted USP25 construct caused increased rescue of MyBPC1, we interpreted that the lack of these regulatory domains allowed USP25m free (non-regulated) access to its substrate. UBDs then would mostly contribute to the enzyme regulation in response to cellular requirements rather than to strict substrate recognition. Model for USP25m regulation: the dual role of K99 Ubiquitin and SUMO pathways may engage in cross-talk, determining opposite fates or functions of a particular substrate, and even compete for the same residues [33]. This seems to be the case for USP25 regulation. Our results together with those of other authors [19] support a combined model for the regulation of USP25 enzymatic activity and substrate recognition based on the dual role of K99 as a target for both SUMO and ubiquitin. In response to cellular requirements, USP25 would undergo several modifications, in particular, monoubiquitination at K99 or sumoylation at the same residue ( Figure 6A). SUMO conjugation at K99 (and also at the secondary site K141) depends on the interaction with the proximal SIM domain, and results in inhibition of the USP25 protease activity on polyubiquitinated chains in vitro [19,20]. Therefore, in physiological conditions sumoylation would impair the rescue of substrates from proteasome degradation by USP25. On the contrary, ubiquitination of K99 would result in enzyme activation by either preventing sumoylation or by allowing new interactions. The modulation of the active enzyme would depend then on the interplay between the UBA and the ubiquitinated K99 in UIM1, either intra-o or inter-molecularly ( Figure 6B, for the sake of simplicity the model only shows intramolecular recognition). Further regulation of the enzyme activity would rely on autodeubiquitination (either intra-or inter-molecularly in a dimer/complex), which would make this lysine residue available for alternative modifications, thus allowing the shift between the enzymatic activity states ( Figure 6C). 
According to this model, regulation and integration of cell signals would be exerted through the N-terminus of USP25, where the SIM, UBDs and the preferential sites for SUMO and ubiquitin conjugation are clustered. In this context, the deletion of the three UBDs would remove all the regulating domains of the enzyme and permit free access to the substrate, which would explain the higher rescue obtained for this mutant. Indeed, we have previously showed that the recognition of the specific substrate MyBPC1, was dependent not on UBDs but on the peptide encoded by exons 19a19b [18]. Deubiquitinating enzymes have to integrate cellular signals and promote dynamic interactions with their substrates, similarly to what occurs with E2-E3 ligases. Modification of a single target residue in USP25 by SUMO (inhibiting) or ubiquitin (activating), combined with the cluster of SIM and UBDs domains in the same molecule, provides new insights and open new avenues for the study of DUB regulation concerning substrate recognition and catalytic activity. To illustrate this statement, the USP25 closest homolog (sharing 52% of amino acid identities) is USP28 [34], a DUB involved in MYC stability as response to DNA damage [35]. USP28 displays UBA and UIM1 domains in the same location as USP25, including a conserved lysine 96 that may play a similar role in the USP28 regulation as K99 in USP25. Further work will show whether this enzyme similarity extends from structural domains to regulatory mechanisms. In silico identification of USP25 structural domains The USP25m protein sequence was analyzed using the InterPro (http://www.ebi.ac.uk/InterProScan/) and Pfam (http://www. sanger.ac.uk/ Software/Pfam/search.shtml) databases to search for functional domains. Both tools retrieved a UBA domain and two UIM domains at the N-terminus of USP25m. Constructs for expression of serial and combinatorial deletions as well as generation of USP25m point mutants Mutants USP25mC178S and USP25mK99R were generated by site directed mutagenesis to serine using the QuickChange Site-Directed Mutagenesis Kit (Stratagene). Expression constructs with the full-length USP25m cloned in pGEX-4-T1 (GE Healthcare), pcDNA3 (Invitrogen) and pEGFP-C2 (Clontech), were used to generate by PCR the UBD deletion mutants of USP25m (DUBA, DUIM1, DUIM2, DUBAUIM1, DUBAUIM1UIM2 and DUI-M1UIM2). The Accuprime TaqDNA Polymerase High Fidelity (Invitrogen) was used to avoid possible mutations. Serial deletions of the C-terminal USP25 region were generated by introducing a STOP codon by site-directed mutagenesis in positions E679X, E769X, Q863X and E1020X, and cloned into pGEX-4-T1 (GE Healthcare), pcDNA3 (Invitrogen) and pEGFP-C2 (Clontech). Integrity of the clones was verified by sequencing. Ubiquitin-specific protease activity assay The ubiquitin-specific protease activity of USP25m and of all the mutant constructs was analyzed as described elsewhere [17]. Briefly, the corresponding cDNAs cloned in-frame in pGEX-4-T1 Amp R downstream the glutathione-S-transferase (GST) gene, and the plasmid pACY184 Cm r expressing Ub-Met-b-gal (a kind gift from Dr. M. Hoschtrasser) were co-transformed in E. coli XL1blue. Clones resistant to both Amp and Cm were grown and induced for 3 hours with isopropyl-b-thiogalactopyranoside (final concentration 1 mM). Total protein extracts were analyzed by western blot using anti-b-galactosidase mouse monoclonal antibody (dilution 1:1000, Sigma-Aldrich) and anti-GST monoclonal (dilution 1:1000, Santa Cruz Biotechnology). 
Co-immunoprecipitation assays HEK293T cells were seeded on 100 mm tissue culture dishes (2610 5 cells/dish). After 16 h, cells were transiently co-transfected with cMyc-USP25m and GFP-USP25m, either full-length or the deletion mutants at the N-and C-terminus, using Lipofectamine 2000 (Invitrogen). Cells were collected 42 h postransfection, resuspended in lysis buffer (0.5% Nonidet P-40, 50 mM TrisHCl pH 7.5, 1 mM EDTA, 150 mM NaCl and protease inhibitor cocktail (Roche) and lysed by sonication. Protein extracts were recovered after removal of cellular debris by centrifugation, incubated at 4uC with 2 mg of anti-cMyc mAb (Santa Cruz Biotechnology) during 4 hours with end-over-end mixing. The protein-antibody complexes were removed with 1 hour incubation at 4uC with protein G-Sepharose beads (Amersham GE-Healthcare). After washing, bound proteins were eluted from the beads by boiling 5 min with protein loading buffer, loaded onto 8% SDS-PAGE gels and analysed by Western Bloting using anti-GFP pAb (1:1000, Santa Cruz Biotechnology), anti-cMyc mAb (1:1000, Santa Cruz Biotechnology). For the Ni 2+ pull-down assay, cell lysates obtained as described above, were incubated with 80 ml of His-Select Nickel Affinity Gel (Sigma-Aldrich) during 3 h at room temperature. After 3 washes with the following buffer at pH 6.3 (50 mM sodium-phosphate buffer pH 6.0, 8 M urea, 300 mM NaCl), samples were eluted by boiling 5 minutes in 100 ml of protein-loading buffer (60 mM TrisHCl pH 6.8, 10% glycerol, 2% SDS, 0.1% bromophenol blue and 10% b-mercaptoethanol) and loaded onto 8% SDS-PAGE gels. After blotting, the proteins were detected by Western as stated above. For further assessment of ubiquitination, cell lysates were incubated at 4uC with 2 mg of anti-cMyc mAb (Santa Cruz Biotechnology) during 4 hours with end-over-end mixing. The protein-antibody complexes were removed by one hour incubation at 4uC with protein G-Sepharose beads (Amersham GE-Healthcare). After thorough washing, bound proteins were eluted from the beads by boiling 5 min with protein loading buffer and loaded onto 8% SDS-PAGE gels. Bands were excised after Coomassie-Blue R250 staining and trypsinized. Tryptic peptides were analyzed in MALDI-TOF/TOF (4700 Proteomics Analyzer, Applied Biosystems) and/or in LC-ESI-QTOF (Q-TOF Global, Micromass-Waters) mass spectrometers and submitted using a MASCOT database search engine against non-redundant NCBi or SwissProt databases. MyBPC1 rescue assays and protein stability of USP25m constructs HEK293T cells were seeded on 24-well plates (2610 5 cells/ well). After 12 hours, cells were transiently co-transfected with constructs expressing HA-MyBPC1 and GFP-USP25m (fulllength, or the corresponding deletion mutants), using Lipofectamine 2000 (Invitrogen). When stated, the proteasome inhibitor MG132 (10 mM, Sigma) was added to the medium during the last 16 hours of culture and collected 48 hours postransfection. Inhibition of new protein synthesis was achieved by adding cycloheximide (CHX, 150 mmol/ml, Sigma) to the medium 30 h postransfection and cells were collected immediately or after 4, 16 or 24 hour treatment. Cells were washed with PBS and recovered with 250 ml of protein loading buffer. Samples were loaded onto 8% SDS-PAGE gels and analyzed by western blotting using anti-HA monoclonal antibody (1:1000, Santa Cruz Biotechnology) and anti-GFP polyclonal antibody (1:1000, Santa Cruz Biotechnology) to assess the expression levels of MyBPC1 and USP25m, respectively. 
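As a concrete illustration of the densitometric analysis used for the rescue and stability experiments (the quantification itself, with Quantity One, is described in the next paragraph), the sketch below normalizes MyBPC1 band intensities to α-tubulin, expresses them relative to the 0 h time point, and compares constructs with a Mann-Whitney test, as in Figure 5. All numerical values and variable names are invented placeholders, not data from the study.

# Minimal sketch of the densitometric quantification described above; toy numbers only.
from scipy.stats import mannwhitneyu

def relative_levels(mybpc1, tubulin):
    """Normalize MyBPC1 to tubulin and express each time point relative to t = 0 h (100%)."""
    norm = [m / t for m, t in zip(mybpc1, tubulin)]
    return [100.0 * n / norm[0] for n in norm]

# hypothetical tubulin-normalized replicate values at 24 h of cycloheximide
wt_24h   = [95.0, 102.0, 88.0]   # wild-type USP25m co-expression
k99r_24h = [52.0, 61.0, 47.0]    # USP25mK99R co-expression

stat, p = mannwhitneyu(wt_24h, k99r_24h, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
print(relative_levels([1.00, 0.95, 0.90, 0.88], [1.00, 1.01, 0.99, 1.00]))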
Films were scanned and quantified using Quanti-tyOne software (Bio-Rad). Figure S1 In silico predictions of functional domains and secondary structure of USP25m. Localization of the predicted Ubiquitin Binding Domains (one UBA and two UIMs), the catalytic deubiquitinating domains (USP), the peptides encoded by the muscle-specific alternatively spliced exons (19 a and 19b), and several potential sumoylation sites and phosphorylation sites. In silico searches used the InterPro (http://www.ebi.ac.uk/Inter-ProScan) and Pfam (http://www.sanger.ac.uk/Software/Pfam/ search.shtml) databases. The red star indicates the position of the catalytic cysteine mutated on the inactive mutant. Blue arrowheads indicate the C-terminal truncation mutants. The lysines that can be conjugated to either SUMO (K99 and K141) or ubiquitin (K99) are highlighted. The SUMO Interacting Motif (SIM), which partially overlaps the first UIM is also indicated (from Meulmeester et al., 2008). Found at: doi:10.1371/journal.pone.0005571.s001 (6.61 MB TIF) Figure S2 UBDs do not alter USP25m subcellular localization. USP25m localization was monitored by immunohistochemistry using a polyclonal antibody against USP25. Localization of full length USP25m and deletion mutants is predominantly cytosolic, with certain accumulation in the perinuclear region. Transfection of full length USP25m, or the deletion mutants, does not affect distribution of Ub, as assessed by immunodetection with an anti Ub antibody. Found at: doi:10.1371/journal.pone.0005571.s002 (2.19 MB DOC) Figure S3 USP25 is sumoylated, phosphorylated and acetylated. A. USP25m is sumoylated. USP25m and all the UBD deletion mutants display an extra higher molecular weight band (asterisk) after in vitro sumoylation assays with SUMO-1 (middle lanes) and SUMO-2 (right lanes). In the case of USP25m lacking both UBA and UIM1, the band corresponding to SUMO-USP25m is weaker (two asterisks). Note that the absence of all three UBDs rendered similar levels of USP25m sumoylation to that of the full-length protein. B. USP25m is phosphorylated. Myc-tagged USP25m and USP25mC178S were immunoprecipitated with Myc antibodies and detected in Western blots with pan-anti-Phospho-Ser and pan-anti-phospho-Tyr. Bands appearing at the size corresponding to USP25m indicate that USP25m is phosphorylated both in serine(s) and threonine(s) (1st and 2nd panel, middle lane). This band also appears when expressing USP25mC178S, indicating that USP25m phosphorylation occurs irrespectively of its catalytic activity (1st and 2nd panel, right lane). Membranes were stripped and detected with a Myc antibody to confirm that the band corresponded to USP25m (3rd panel). Immunoprecipitation inputs were assessed with antibodies against phosphorylated AKT and Myc as phosphorylation and transfection controls respectively (4th and 5th panels). C. USP25m is acetylated. Myctagged USP25m and USP25mC178S were immunoprecipitated with Myc antibodies and detected in Western blots with pan-antiacetylated-Lys. Bands appearing at the USP25m size indicate it is acetylated, both WT and C178S (upper panel). The same membrane was stripped and detected with anti-Myc to confirm the identity of the bands (2nd panel). Immunoprecipitation inputs were assessed with antibodies against acetylated p53, Myc and a-Tubulin as acetylation, transfection and loading controls, respectively (3rd, 4th and 5th panels). Found at: doi:10.1371/journal.pone.0005571.s003 (1.01 MB DOC)
2016-05-15T15:06:21.575Z
2009-05-15T00:00:00.000
{ "year": 2009, "sha1": "7a010d4a79d14012d1fc9243d5411bf4180238cc", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0005571&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a010d4a79d14012d1fc9243d5411bf4180238cc", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
210860189
pes2o/s2orc
v3-fos-license
Patterns of cervical cancer brachytherapy in India: results of an online survey supported by the Indian Brachytherapy Society Purpose Cervical cancer is the most common gynecological cancer in India. Uniform protocol-based treatment is important for achieving optimal outcomes. We undertook a survey to investigate patterns of care with special regard to patterns of care in cervical cancer brachytherapy in India. Material and methods A 17-question online survey was sent to radiation oncologists across India. Respondents were required to have a minimum of 1-year experience. One response per center was accepted and deemed as representative. Results Out of 116 centers, 59 responses were generated. Two-thirds (66.1%) were from academic centers and the majority (96.6%) used high-dose-rate (HDR) brachytherapy. The centers treated an average of 255 patients per year (median 161 patients, IQR 76-355). The majority were locally advanced cancers (FIGO 2009 stage II-IV 87.5%). External beam radiotherapy (EBRT) schedules were fairly consistent, administering doses of 45-50 Gy over 5 weeks. Brachytherapy was performed towards EBRT completion by 37/59 (62%) and 43/59 (74.3%) centers used a schedule of 7 Gy × 4 fractions (HDR). Brachytherapy was commonly performed under anesthesia (spinal/general: 44% each) with ultrasound (USG) guidance (29%). Computed tomography (CT) imaging (65%) and orthogonal X-rays (35%) represented the most common imaging for planning, while point A prescription (66%) or GEC-ESTRO based parameters (35%) with manual/geometric methods represented the most common methodology for dose volume prescription and optimization. Overall treatment time (OTT) reported was within 49-56 days in 50%. Complex implants (IC + IS) were performed for more than 30% of cases by 3 centers. Conclusions Our survey suggested a fairly uniform treatment paradigm for cervical cancer brachytherapy, with a progressive shift from 2D to 3D image-based parameters for planning, with persistence of point A based prescription. Further efforts are needed to augment and ease this transition. in terms of scheduling, protocol, imaging used, dose volume prescription and usage of and aspirations regarding future implementation of image-guided brachytherapy (IGBT). The results of the survey form the basis of this report. The survey was conducted under the aegis of the Indian Brachytherapy Society (IBS), a non-profit all-India organization which provides the primary impetus towards improving brachytherapy practice and knowledge in the country [4] and has recently published comprehensive guidelines related to the management of carcinoma with emphasis on ICBT [5]. Material and methods The survey was exempt from Institutional Review Board submission. A 17-question online survey was sent via electronic mail to radiation oncologists involved in the treatment of gynecological malignancies at 116 centers in India. The questionnaire included the nature of practice (government vs. private, academic vs. non-academic), caseload of cervical cancers handled every year, distribution of cases, timing of ICBT relative to EBRT, dose fractionation schedules used, usage and type of anesthesia, type of applicators used and type of implants performed, imaging used intra-operatively during insertion and for planning, method of dose prescription and dose constraints used. 
The responders were also asked regarding the overall treatment times typically achieved during treatment and also as to their desire for implementing 3-dimensional image-based brachytherapy in the near future (see Table 1). Respondents were required to have at least 1-year post-residency experience to be eligible. One response per center was deemed as representative of clinical practice at the center. In the case of multiple responses, the responder with the greater experience was chosen. Data were collected from the online survey responses and analyzed using SPSS Version 22 (IBM Corp. Armonk, NY, USA). Descriptive statistics were used to analyze the responses in terms of frequencies and percentages. For looking at factors determining the use of IGBT the study team first selected certain factors deemed relevant in terms of prognostic significance (experience, type of setup, caseload and percentage of advanced cases seen in clinical practice). All values were dichotomized at the median except experience, which was divided into 5-year intervals to facilitate analysis (experience less than five years and experience less than ten years). Associations with categorical variables (such as experience, type of setup) were tested by the chi square test, while those with continuous variables (such as caseload, percentage of advanced cases seen) were tested by means of the t-test. Results Between January and June 2017, 116 respondents were contacted over e-mail and postal mail. Fifty-nine respondents replied to the survey mail and contributed data for the analysis. The overall response rate was 51% (59/116). Fifty-nine percent (35/59) of respondents were from academic centers (including government medical colleges, private medical colleges with radiotherapy departments with residency courses, and regional and state cancer centers funded by the government), and the remaining 34% (24/59) were from private oncology centers. Eightyeight percent of respondents (51/59) had more than five years of experience in treating gynecological cancers, and 53% of respondents (30/59) reported more than ten years of experience. The average number of cases of cervical cancer treated every year was 255 (median: 161 cases, IQR 76-355). The pattern of case presentation was typically dominated by advanced stages (FIGO 2009 stage IB2-IVA), which constituted 87% of the cases seen by respondents in their clinical practice. All respondents used a combination of EBRT and ICBT to treat cervical cancer. EBRT practice was fairly constant, with all centers administering doses ≥ 45 Gy at 1.8-2 Gy per fraction. Sixty-two percent of centers (37/59) performed ICBT after conclusion of EBRT. The usage of midline blocks (MLB) was reported by 20% of centers (12/60). The most common dose fractionation schedule used was 7 Gy per fraction, once weekly for a total of 3-4 applications (74%, 43/59 centers). High-dose-rate (HDR) brachytherapy was the predominant mode of administration of ICBT, with 97% (57/59) of centers reporting its usage. The majority of centers were able to complete treatment within 49-56 days (44%, 26/59 centers), with 10% of centers (6/59 centers) exceeding 56 days. Ultrasound (USG) guidance was used for optimizing applicator insertion and placement by 29% of respondents (16/59 centers). Computed tomography (CT) scan was used to check the correctness of applicator placement by 46% of respondents (27/59 centers). 
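For readers who prefer a scripted version of the factor analysis described in the Material and methods above (dichotomizing factors at the median and testing associations with IGBT use by chi-square or t-test), a minimal sketch follows. SPSS Version 22 was used in the actual study; the small table, column names and values below are purely illustrative.

# Minimal sketch of the statistical comparisons; toy data, not the survey responses.
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind

df = pd.DataFrame({
    "uses_igbt": [1, 1, 0, 1, 0, 0, 1, 0],               # 1 = some form of IGBT reported
    "academic":  [1, 1, 0, 1, 0, 1, 1, 0],               # type of setup (academic vs. private)
    "caseload":  [350, 210, 80, 400, 60, 150, 500, 90],  # cervical cancer patients per year
})

# categorical factor: chi-square on the 2 x 2 contingency table
table = pd.crosstab(df["academic"], df["uses_igbt"])
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"academic setup vs. IGBT use: chi2 = {chi2:.2f}, p = {p_chi:.3f}")

# continuous factor: compare caseload between IGBT users and non-users
users = df[df.uses_igbt == 1]["caseload"]
nonusers = df[df.uses_igbt == 0]["caseload"]
t, p_t = ttest_ind(users, nonusers)
print(f"caseload vs. IGBT use: t = {t:.2f}, p = {p_t:.3f}")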
One center reported the use of magnetic resonance imaging (MRI) for intra-operative optimization of applicator placement. Intra-cavitary brachytherapy was predominantly performed using tandem and ovoid (80%, 47/59 centers) and tandem ring (24%, 14/59 centers) applicators. Complex applications (intracavitary with interstitial or interstitial alone, IC + IS/IS only) were being performed by 49% of centers (29/59) in < 10% of all cases, by 15% (9/59) in 10-30% of all cases, and by 5% (3/59) in more than 30% of cases. Some form of imaging was performed for planning by 97% (57/59 centers) of respondents, with CT scan (65%, 38/59 centers) and orthogonal X-rays (36%, 21/59 centers) being the most common modalities. MRI-based IGBT was performed by 9% (5/59 centers). Point A based reporting remained the most commonly used method for dose volume reporting, with 66% (39/59 centers) using it. Contemporary dose volume reporting, as mandated by the GEC-ESTRO guidelines [6,7], was performed by 36% (21/59 centers) of respondents. Seven percent (4/59 centers) of respondents reported using the 60 Gy reference isodose volume as stipulated by ICRU-38 [8]. However, details of target volumes and dose constraints used for organs at risk were poorly reported and no analysis could be done for them. [Table 1, excerpt of the survey questions: the stage grouping of cervical cancers seen at the institute (IA-IB2, IIA-IIB, IIIA-IIIB, IVA-IVB); the commonly used/institutional dose fractionation schedules for external beam radiotherapy (EBRT) and intracavitary brachytherapy (ICBT); and the dose objectives aimed for in day-to-day ICBT planning (all doses requested in Gy, to at most one decimal).] Plan optimization was performed by a variety of methods, with manual optimization of dwell weights and times being the most common (52%, 30/59 centers), followed by geometrical optimization (44%, 26/59 centers). Use of graphical optimization (24%, 14/59 centers) and inverse optimization (12%, 7/59 centers) remains lower. Overall, 64% of respondents (38/59 centers) reported using some form of IGBT based on cross-sectional imaging. Respondents not using IGBT were asked whether there were plans to implement IGBT in the near future, and the tentative time frames for doing so. There were 24/59 responses: 7% (4/59 centers) replied that they had no plans for implementing IGBT at any time, 17% (10/59 centers) replied that they had plans to introduce IGBT in their clinical practice within the next 1-5 years, one respondent reported plans to implement IGBT within 1 year, and 15% (9/59 centers) reported plans to commence after more than 5 years. A comparison with other surveys is presented in Table 2 (in Table 2, asterisks indicate that percentages may add up to more than 100; HDR – high-dose-rate, LDR – low-dose-rate, PDR – pulsed-dose-rate).
Discussion
Radiation therapy, including brachytherapy, forms an indispensable part of the curative management of locally advanced cervical cancer (LACC) [9]. Even in FIGO stage IIIB-IVA, local control outside the setting of a clinical trial can be as high as 85% [10], which is unprecedented in locally advanced squamous carcinomas originating at other sites.
However, there are several challenges in achieving this, especially in LMICs, including India, in terms of resources, logistics and expertise. For example, there is a shortage of radiation therapy units, skilled human resources [11], and EBRT and brachytherapy units required for timely ICBT and execution of optimal standard treatment protocols [12,13]. We undertook this survey with the aim of understanding the gaps in the radiotherapeutic management of cervical cancers and capturing details related to radiation therapy for cervical cancer in India. An additional important objective was to understand the acceptance and uptake of newer technologies in brachytherapy in the light of emerging evidence on improved outcomes with image-guided brachytherapy in routine clinical practice [14]. Our survey attained a response rate of 51%, which was higher than a recent survey conducted in an LMIC setting [15], but lower than those in non-LMIC settings [16,17]. The majority of our respondents were from academic institutes, with a single response from each institution rather than from individual physicians. Also, all the respondents in our study were radiation oncologists experienced in treating a relatively large number of cervical cancer patients annually (median 161 patients). This is an important and somewhat reassuring finding, as treatment at academic and high-volume centers has been shown to be associated with better outcomes in cervical cancer [18,19,20]. All the respondents reported a near-uniform EBRT treatment protocol of 45-50 Gy followed by ICBT schedules most commonly prescribing 7 Gy × 3-4 fractions, mirroring earlier reported practice in India [21]. Also, the overwhelming majority (97%) reported using HDR brachytherapy, which is reflective of changing global practice [22], with > 90% completing the treatment within 56 days and 10% exceeding the overall treatment time (OTT). An exceeded OTT is highly likely to impact patient outcomes directly, and probably arises out of a large unmet need for more EBRT and brachytherapy units, which has been found to be substantial and accordingly deserves attention [13]. Approximately two thirds (62%) perform ICBT after completion of EBRT, which is in line with contemporary global practice [23]. The applicator insertion procedure is usually done under some form of anesthesia. Brachytherapy application is a relatively painful procedure involving pelvic examination, negotiating the utero-cervical canal, cervical canal dilatation and placement of applicators in the upper vagina followed by vaginal packing. To mitigate the resultant pain and discomfort, the brachytherapy procedure is usually done under anesthesia and analgesic cover. Our survey results suggest that the majority of centers perform the brachytherapy procedure under some form of anesthesia, which is encouraging and will likely translate into appropriate placement. Asymptomatic perforation is a known entity during ICBT, which may not be picked up on routine imaging. As many as 13.7% of insertions, including 8.7% of insertions where the treating radiation oncologist was confident regarding correct tandem placement, were found to harbor perforations on CT imaging [24]. To minimize uterine perforation rates, the use of real-time ultrasonography during the procedure is attractive and has been implemented in clinical practice [25].
In our survey, 29% of respondents used USG guidance and 46% of respondents performed CT imaging for optimizing applicator insertion and placement, suggesting some QA for optimizing BT insertions. Advanced BT applications including IC + IS were performed in approximately 30% of cases by only 3 centers (5%) in a setting where locally advanced cervical cancer is seen in more than 2/3 in routine clinical practice. This may be attributed to lack of: (i) advanced BT applicators and (ii) availability of expertise and skills. Almost all of the respondents (97%) reported use of imaging for brachytherapy planning. Also, the rates of usage of orthogonal X-rays (36%) are lower than previously reported from India [12] and data from North America published 5-8 years ago [17], while CT imaging was used by two thirds, which is relatively few as compared to other series [26]. The use of MR imaging has improved from 2% to 8% since 2007 but is substantially lower than the western reports [26]. A comparative table between the current survey and similar surveys conducted in the West illustrates these differences in greater detail ( Table 2). Despite the relatively good availability and uptake of cross sectional imaging, especially CT, point A based dose prescription remained the most common form of prescription. The ICRU-38 60 Gy reference isodose volume reporting was negligible, reflecting lower acceptability, similar to published literature [27]. Use of GEC-ESTRO volume-based parameters was performed by 36% of respondents, similar to other reported series [26]. This suggests that point A prescription is still a preferred method, with approximately one third of respondents using it during the transition from 2D to 3D based parameters. Although MRI-based dose volume parameters are the gold standard, the major hurdle in the Indian setting is lack of MRI in radiation oncology departments and limited access to MRI in the radiology clinic due to competing indications and long queues. This is also reflected in our survey, with extremely poor reporting of target and organ at risk (OAR) related dose volume parameters and limited optimization utilization even for routine ICBT applications. To mitigate this, there have been attempts for alternatives. A recent publication by Mahantshetty et al. showed that using intraoperative trans-rectal ultrasonography (TRUS) and peri-operative CT combined with image information from MRI at diagnosis would lead to target and OAR delineation which was just as robust as gold-standard MRI-based image-based brachytherapy (IBBT) [28]. Such attractive alternatives would be more beneficial and will be better utilized in Indian and LMIC settings. Finally, in terms of implementation plans in the near future, approximately one third of the respondents wish to introduce some form of image-based brachytherapy within the next 5 years. The results of a survey conducted 4 years ago amongst participants of an IBS conference identified training as the main hurdle towards practicing brachytherapy and showed resolve towards changing practice patterns after the conference [15]. This seems to be reflected in the increased uptake of imaging, especially cross sectional imaging in ICBT planning in the results of our survey. The increased resolve of stakeholders in conjunction with the ongoing efforts of various national (AROI, IBS) and international organizations (ESTRO) for dissemination and implementation of IGBT in cervical cancers seems to be reaping rewards. 
Also, various strategies to motivate economically viable solutions are being reported for increasing viability for potential stakeholders, e.g. a health economic model for MRI-based IGABT approach [29] and alternative imaging protocols including ultrasonography/CT hybrid combinations [28]. We believe these efforts will be highly fruitful in due course. Our survey report has some limitations including relatively low participation rates, inherent bias in terms of responses from possibly motivated institutions/centers, and no detailed questions on the quality indicators of EBRT techniques and chemotherapy. Despite these limitations, the current publication represents patterns of care in cervical cancer as practiced among experienced clinicians with a high caseload in an LMIC setting and is likely to offer useful insights into improving outcomes in cervical cancer in many other similar settings. Conclusions Our survey results suggest a fairly uniform pattern of radiotherapy treatment for cervical cancer, with EBRT doses of 45-50 Gy and 3-4 fractions of high dose rate ICBT of 7 Gy each with reasonable overall treatment times. There is an increasing trend for use of cross sectional imaging, particularly CT imaging for BT planning with point A prescription/reporting only, while target concept and volume based prescription is still in transition. Finally, the intentions to implement 3D IGBT for cervical can-cer are increasing. A follow-up survey after a few years and comparison with the results of the current survey would be useful to evaluate transition/changing practices in brachytherapy for cervical cancers.
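As a rough illustration of what the schedules summarized in the Conclusions deliver on a common dose scale, the following sketch applies the standard linear-quadratic EQD2 conversion to the most commonly reported regimen (45-50 Gy EBRT plus 7 Gy × 4 HDR ICBT). This calculation is not part of the survey: it assumes a tumour α/β of 10 Gy, a simple point A type summation and no correction for overall treatment time.

# Illustrative EQD2 calculation: EQD2 = n*d * (d + a/b) / (2 + a/b), a/b = 10 Gy assumed.
def eqd2(n_fractions, dose_per_fraction, alpha_beta=10.0):
    total = n_fractions * dose_per_fraction
    return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

ebrt = eqd2(25, 1.8)   # 45 Gy in 25 fractions of 1.8 Gy
icbt = eqd2(4, 7.0)    # 7 Gy x 4 HDR insertions (the most common reported schedule)
print(f"EBRT 45 Gy / 25 fx -> EQD2 = {ebrt:5.1f} Gy")
print(f"ICBT 7 Gy x 4      -> EQD2 = {icbt:5.1f} Gy")
print(f"combined (tumour, a/b = 10 Gy) ~ {ebrt + icbt:5.1f} Gy EQD2")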
2020-01-16T09:08:29.596Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "7cc844113f442c8d29653798b9db1bd27d2fc718", "oa_license": "CCBYNCSA", "oa_url": "https://www.termedia.pl/Journal/-54/pdf-38860-10?filename=Patterns%20of%20cervical.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ba40c9571f4ba4ade9b8730ade328ec8b4ab8130", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16393137
pes2o/s2orc
v3-fos-license
On two-color QCD with baryon chemical potential We study SU(2) color QCD with even number of quark flavors. First, using QCD inequalities we show that at finite baryon chemical potential mu, condensation must occur in the channel with scalar diquark quantum numbers. This breaks the U(1) symmetry generated by baryon charge (baryon superconductivity). Then we derive the effective Lagrangian describing low lying meson and baryon excitations using extended local chiral symmetry of the theory. This enables us to determine the leading term in the dependence of the masses on mu exactly. Introduction QCD at finite baryon number density has been intensely studied recently [1,2,3,4,5]. Knowing the behavior of QCD in this regime will enable us to understand the physics of heavy ion collisions, neutron stars and supernova explosions. First principle calculations using methods of lattice field theory have presented an insurmountable theoretical challenge to date due to the absence of techniques to deal numerically with complex measure path integrals. The two-color QCD model is an exceptional case where conventional methods work due to positivity of the Euclidean path integral measure [6]. The two-color QCD model is also exceptional from the point of view of BCS-type diquark condensation phenomenon which received attention recently [3]. In 3-color QCD such a condensate is not gauge invariant and it leads to the phenomenon of color superconductivity. In 2-color QCD the diquark condensate is a well-defined gauge invariant observable. Before we learn how to deal with the three-color QCD it would be very helpful to get as much insight as possible from the apparently easier (both conceptually and technically) case of two-color QCD. Numerical calculations in SU (2) QCD are now being actively pursued [7]. In this letter we develop analytical methods which enable us to study two-color QCD at finite baryon number density, and in particular, to determine the spectrum of excitations. We shall work in the Euclidean formulation of the theory. The Lagrangian is given by: and we shall omit the flavor indices f in the following. In this letter we shall consider the case of massless quarks, m q = 0. The analysis of the more general massive case will be presented elsewhere. We also illustrate our methods using the simplest case of N f = 2 quark flavors. The results can be easily extended to arbitrary even N f . It is known that the 2-color N f = 2 theory with massless quarks at µ = 0 possesses SU(4) global flavor symmetry which is broken spontaneously to Sp(4) [8]. As a result the spectrum contains 5 Goldstone bosons. At finite µ the symmetry of the theory is reduced to the usual SU(2)×SU(2)×U(1). We shall show, using QCD inequalities and, independently, the exact effective Lagrangian, that this symmetry is spontaneously broken down to SU(2)×SU (2), creating a single Goldstone boson corresponding to spontaneous breaking of baryon number symmetry. The other 4 Goldstones acquire a common mass which is proportional to µ for small µ ≪ Λ QCD . 1 We find that the coefficient of proportionality can be determined exactly, and is equal to 2. QCD inequalities In Euclidean QCD, having a positive measure, one can majorate all correlators with the correlator π(x)π(0) , where π =ūγ 5 d is the pion field [9]. Therefore, one can prove that 1 Such a linear dependence on µ can also be seen in the simple effective sigma-model of Rapp et al. [3]. 
Note that this linear dependence of Goldstone masses on µ contrasts with the usual dependence on another symmetry breaking parameter, the quark mass: m π ∼ √ m q . 0 − is the lightest meson with I = 1. As a consequence, one obtains an important restriction on the pattern of the symmetry breaking: it has to be driven by a condensate ψ ψ (not ψ γ 5 ψ , for example, which would give 0 + Goldstones). Let us sketch the argument. Consider the Dirac operator in QCD: D = γ · (∂ + A) + µγ 0 + m q . When µ = 0 this operator obeys (matrix A is antihermitian in Euclidean formulation, while the γ-matrices are hermitian): Now consider the correlator of a generic meson: M =ψΓψ: where we did the obvious integration over the ψ's andψ's and left the integration over the A's (it is important, as is true for I = 1, that there is no disconnected piece). S ≡ D −1 . When Γ = γ 5 we can use (2) to rewrite the expression in brackets as: which is manifestly positive (the dagger in this formula only transposes the color and Dirac indices, the coordinate indices x, 0) we transposed explicitly). Moreover, for any Γ (such that Γ 2 = 1) we can write, using the Schwartz inequality: If the measure is positive this inequality should survive the averaging, and we get the desired inequality for the correlators, and therefore for the meson masses. 2 For µ = 0 we lose the positivity and we lose the inequalities in SU (3). But, in SU(2) QCD we also have a positive measure! Can we, perhaps, derive some inequalities for the meson masses and consequently make some conclusions about the symmetry breaking pattern? The relation (2) holds in either SU (3) or SU (2). It also fails in both theories at µ = 0. But there is another relation, which holds in SU(2), due to its pseudo-reality, for arbitrary µ: where C = iγ 0 γ 2 (C 2 = 1, Cγ µ C = −γ * µ ) all γ-matrices are hermitian, and T 2 is a generator of the SU(2) color (the second Pauli matrix, T 2 T a T 2 = −T * a ). It is a consequence of this relation that the measure is positive, in fact. If we construct now the correlator of the diquark M ψψ = ψ T CT 2 γ 5 ψ (this is 0 + , I = 0, i.e., antisymmetric in flavor), we have: Now, as before, one can show that the correlator of ψ T CT 2 γ 5 ψ meson majorates a correlator of any other meson ψ T CT 2 γ 5 Γψ. In particular, we see that it is 0 + , not 0 − , which is the lightest. Therefore, if there is condensation it has to be that of ψ T CT 2 γ 5 ψ, not violating parity, in particular. One can also majorate the correlator of any meson of the typeψΓψ. In other words, the 0 + diquark must be the lightest meson in this case. This excludes the possibility of conventional condensation of ψ ψ which otherwise would lead to 3 massless pions (unless the inequality is saturated, which is the case at µ = 0). Symmetries, breaking and Goldstone counting We shall construct the effective Lagrangian describing light excitations in 2-color QCD at finite µ in the next section. Here we shall analyze the global symmetries of our theory -a necessary ingredient of this construction. We start from the known case of µ = 0 and recall the fact that the global symmetry of the theory is SU(2N f ) rather than the usual SU(N f )×SU(N f )×U(1) [8]. This can be seen explicitly by using left and right chiral Weyl components of the Dirac spinor ψ = (q L , q R ): where , and σ k are usual Pauli matrices. 
The fact that (1) has higher flavor symmetry is based on the property of the 2-color Dirac operator, which in turn is based on the (pseudoreality) property of the generators of SU(2) (Pauli matrices): We introduce: We then substitute (11) into (8) and use the property (10) of Pauli matrices for both T a of color and σ k of Euclid, together with the anticommutativity ofq,q † (we need to transpose) to arrive at: which now has a manifest SU(2N f ) "flavor" symmetry. The Ψ denotes a Weyl spinor which has 2N f "flavor" components. E.g., for N f = 2: where 1, 2 are the original flavor indices. The total global symmetry of the action is SU(2N f )×U(1) A . Note that the baryon symmetry, under which B(q) = +1 and B(q) = −1 is a subgroup of this SU(2N f ). Theq are, therefore, conjugate quarks (since they have opposite baryon charge to normal quarks q) in the terminology of [2]. Under axial U(1) A q andq have the same charge (because . This symmetry is broken by the anomaly, however, so the actual symmetry of the quantum field theory is SU(2N f ). Now, let us write down various useful quark bilinears in terms of q,q and determine their transformation properties under this SU(2N f ). The matrix in (14) is The matrices σ 2 and T 2 carry SU(2) spin and SU (2) color indices respectively and no SU(2N f ) indices. They are just antisymmetric ǫ-symbols for their indices. We see that the chiral condensate is not a singlet under SU(2N f ). Since it is an antisymmetric product of two fundamental SU(2N f ) spinors Ψ, it transforms as an antisymmetric tensor of rank 2. The dimension of this representation is N f (2N f − 1). We shall continue our discussion using N f = 2 case as an example. For N f = 2 (14) transforms as a 6-plet. The (14) gives us one component of this 6-plet (sigma). The remaining 5 are: 3 pions, scalar diquark and anti-diquark. What does the chemical potential do? We see that this term is not a singlet under SU(4). Since 4 × 4 = 1 + 15 it is a component of a 15-plet (adjoint representation (2N f ) 2 − 1). It is easy to understand the meaning of +1 and −1 in the matrix in (15) -these are just baryon charges of quarks and conjugate quarks. What is the remaining subgroup of SU (4), under which (15) is invariant? From the block-diagonal structure of (15) it is clear that SU(2) L ×SU(2) R rotations preserve it, since these rotate the first two components of Ψ, or the last two, separately. The U(1) B , which can be thought as generated by the block τ 3 generator (the charges are (+, +, −, −)), also preserves (15). All other generators are broken by (15). To summarize, we start with an SU(4) symmetry; then we add a term proportional to µ which transforms as a component of a 15-plet of this SU(4) which breaks this symmetry explicitly down to SU(2) L ×SU(2) R × U(1) B . Now let us do the Goldstone counting. At µ = 0 we have SU(4) global symmetry. The non-zero expectation value of the quark bilinear (14) which develops spontaneously breaks it down to Sp(4). This produces 5 Goldstone bosons (15 generators minus 10). On the other hand, when µ = 0 the symmetry of the theory is SU(2) L ×SU(2) R ×U(1) B . As we concluded in the previous section this symmetry should break down to SU(2) L ×SU(2) R by the non-zero expectation value of scalar diquark. Therefore at µ = 0 the theory has only one Goldstone. What happened to the other 4? As is easy to guess, and as we shall see explicitly, they form a representation (2,2) of the manifest SU(2) L ×SU(2) R group, and acquire the same mass. 
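The counting described in the preceding paragraphs can be collected in one line of arithmetic; the display below is only a restatement of the text for N_f = 2 (group dimensions and the breaking patterns already stated), not an independent derivation.

% Goldstone counting for N_f = 2 (dimensions only)
\begin{align*}
\mu = 0:\quad & SU(4)\to Sp(4), & \#\,\text{Goldstones} &= \dim SU(4)-\dim Sp(4)=15-10=5,\\
\mu \neq 0:\quad & SU(2)_L\times SU(2)_R\times U(1)_B \to SU(2)_L\times SU(2)_R, & \#\,\text{Goldstones} &= 1 .
\end{align*}
% The remaining 5 - 1 = 4 states form a (2,2) of SU(2)_L x SU(2)_R and become
% pseudo-Goldstones; the result derived below is m_{pG} = 2\mu for small \mu.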
This mass should vanish at µ = 0. In the next section we shall calculate the dependence of the mass of the 4-plet of these pseudo-Goldstones, m pG as a function of µ for small µ. Global symmetry In this section we construct the effective Lagrangian for the low energy degrees of freedom, which in our theory with spontaneous symmetry breaking are the Goldstone bosons [10]. The basic steps we follow are: (i) identify the symmetries of the underlying (microscopic) theory; (ii) identify degrees of freedom of the effective (macroscopic) theory; (iii) ensure that the effective theory is invariant under the symmetries of the microscopic theory.The microscopic theory at µ = 0 has a global SU(4) symmetry. In the effective theory, which we want to construct, the degrees of freedom are given by the fluctuations of the condensate of Σ: which is a Lorentz and color singlet but flavor SU(4) 6-plet. Fluctuations of the orientation of Σ give us our 5 Goldstones. Under the action of U ∈SU (4): and thus Σ → UΣU T . The low-energy effective Lagrangian invariant under the flavor SU (4) can be written as a non-linear sigma model [10,8]: The matrix Σ in the effective Lagrangian is a unitary antisymmetric matrix (which has exactly 5 independent real parameters). The degrees of freedom are the rotations of Σ generated by U as in (18). The transformations U which leave Σ invariant form the Sp (4) group. The nontrivial degrees of freedom of the Lagrangian (19) -the Goldstones -live in the coset SU(4)/Sp(4). In the microscopic theory, the term: breaks the SU(4) symmetry explicitly. However, we can save this symmetry by transforming also the source coupled to the breaking term. Rewriting (20) as where B µ is an SU(4) matrix. The value of B µ fixed by (20) is: If under the transformation (17) the matrix B µ also transforms as: the microscopic Lagrangian will be invariant. Thus the effective Lagrangian must be also invariant under such an extended transformation. For example, this requirement rules out the term linear in B (and therefore in µ) in the effective Lagrangian. This is because B transforms as (23) under SU(4), and one cannot construct a non-trivial invariant out of Σ and only one power of B. The lowest order nontrivial term which we can write is: This term will produce the mass for the Goldstone bosons linear in µ. Local symmetry We see that the symmetry considerations help us find the form of the symmetry breaking term in the effective Lagrangian, and thus determine the dependence of the m pG on µ. However, it does not tell us what the coefficient of proportionality in m pG ∼ µ is, since it does not specify the coupling of (25). This coefficient can be determined if we notice that the global symmetry (17), (23) can in fact be promoted to a local symmetry in the microscopic theory [10]. This will require the transformation of B: In order to ensure that the effective Lagrangian is also invariant under this local symmetry we have to replace the derivatives in (19) by the long covariant derivatives: The signs here are important and are fixed by the local symmetry. The Lagrangian must have the form: Expanding the long derivatives we find: where we used the property Σ T = −Σ to simplify the second term. Now we shall analyze the last term in (29). There are two main effects of this term. First, the minimum of it with respect to all possible orientations of Σ obtained by rotations (18) determines the direction of the condensation. 
Second, the curvature matrix around this minimum gives us the masses for the (pseudo)-Goldstones. We see that the local symmetry relates the mass term to the kinetic term in the Lagrangian. This means that the coefficient of proportionality in the equation m pG = const · µ, which is just a dimensionless number, is fixed by the local chiral symmetry! In particular, f π does not enter at all into this relation. In the remainder of this note we shall set f π = 1 to simplify the formulas. Vacuum alignment Using the fact that B † µ = B µ , we can see that the last term in (29) is seminegative definite: where In order to find the vacuum alignment of Σ we must minimize L 3 . Let us try first the alignment corresponding to the usual chiral condensate: Using B given by (22) we find that A = 0 and therefore L 3 = 0 which is the absolute maximum of L 3 , not the minimum which we seek. Therefore the standard vacuum alignment (with no baryon charge in the condensate) is unstable. One can see that the minimum can be achieved for a Σ = Σ 0 such that: A solution to (33) is given by: This minimum is not unique, there is a U(1) degeneracy, corresponding to the rotation with the generator given by B 0 (22). This gives the Goldstone corresponding to the spontaneous breaking of the baryon charge symmetry. Any other rotation will raise the value of the effective potential. Mass spectrum Now we shall consider the curvature of the potential in more detail, to determine m pG . We can rewrite (30) as: The dependence on Σ sits in the second term. In order to find the mass matrix for the (pseudo)-Goldstones we should expand Σ in small fluctuations around the vacuum value Σ 0 (34). These small fluctuations are given in terms of the transformation (18) with U close to unity: We shall write U as an exponent of the generators of the SU(4). But first let us, following the formalism and notations of Peskin [11], separate the generators into those which do not change Σ 0 -T i , and those that do -X a . The transformations U generated by T i : form an Sp(4) subgroup of SU (4). It follows from (37) that these generators obey: The remaining 5 generators X a can be shown, using the block representation of Peskin, to obey: The corresponding fields π a defined as: and by (36) are the dynamical degrees of freedom of the Lagrangian (29). We shall write here, to provide an example, the explicit form of the generators T and X in our case of the SU(4) flavor group. There are 10 generators T a : And here are the 5 generators X a : We have normalized the generators as: where Tr1 = 4. Let us make the following observations. First, B should be one of the generators of SU(4). From (33) and (39) we conclude that it has to belong to the set of broken generators X a . Our explicit example confirms this, indeed X 5 = B. Second, the remaining generators (a = 1, 2, 3, 4 in our example) anticommute with B. For a transformation U generated by X which commutes with B we can write for the last term in (35): which is a constant. So there is no mass for the corresponding boson. We conclude that π 5 is a true Goldstone. For the remaining 4 generators X we can write, using the fact that they anticommute with B: where we have used properties of the X generators (39), (33) and B 2 = 1. Now using (40), and expanding to quadratic order in the fields we find: where a = 1 − 4. Now expanding the kinetic term we find, using (36) and (39): (47) where a = 1 − 5. Desired normalization of the kinetic term can be trivially achieved by rescaling the fields π a . 
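The displayed equations referred to as (46)-(48) were lost in extraction; a hedged reconstruction of the quadratic Lagrangian they describe, based only on the surrounding text (the four generators X^a anticommute with B, B^2 = 1, and f_pi is set to 1) and on the value m_pG = 2µ quoted below for general N_f, is:

```latex
% Hedged reconstruction, not a verbatim copy of the original equations.
% pi_5 is generated by B and stays massless; pi_1..pi_4 anticommute with B:
\mathcal{L}_{\rm quad} \;\simeq\;
   \tfrac12 \sum_{a=1}^{5} (\partial_\nu \pi_a)^2
 \;-\; \tfrac12\,(2\mu)^2 \sum_{a=1}^{4} \pi_a^{\,2} \;+\;\dots
```

where the ellipsis stands for the term linear in pi_5 discussed in the next subsection, so that m of pi_5 vanishes and m_pG = 2µ for the 4-plet.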
Taking together (46), (47) and (29) we find: This is our result. Linear term What is the significance of the second term in (29): Let us expand it for the generators X using (43): Only the generator X 5 gives a nonvanishing contribution to the linear term in π, and the fact that TrX 5 X 5 B = 0 ensures that there are no terms quadratic in π 5 . This linear term means that B µ (the baryon charge current) is a source of the Goldstone field π 5 (similar to the axial current in QCD being the pion source). Generic even N f Most of the derivation goes through mutatis mutandis in the general case. Here we shall summarize the results. The global symmetry at µ = 0 is SU(2N f ). This symmetry is broken spontaneously: The number of the Goldstones in this case is: At nonzero µ the symmetry is broken down to: The condensate, being an antisymmetric rank 2 tensor (cf. (34), breaks it now in the following way: Comparing (52) and (55) we find that there are N 2 f pseudo-Goldstone bosons. Their masses are given by (48): m pG = 2µ for small µ. In terms of group representations, we have the following picture. First, µ = 0. The fermions transform as a fundamental 2N f -plet under SU(2N f ). The fermion condensate transforms as an antisymmetric tensor of rank 2. The dimension of this representation is N f (2N f − 1). After the breaking to Sp(2N f ) the Goldstones fall into an irreducible representation of Sp(2N f ) given by the antisymmetric tensor of rank 2 with the condition that trace of that tensor times the matrix Σ 0 is zero. The dimension of this representation is N f (2N f − 1) − 1 which is exactly (52). The baryon charge current to which µ couples transforms in the adjoint representation of SU(2N f ), which has dimension (2N f ) 2 − 1. After the spontaneous breakdown (54) the N 2 f pseudo-Goldstones are degenerate and form an (N f , N f ) irreducible representation of the remaining manifest Sp(N f )×Sp(N f ). The true Goldstones fall into 3 irreducible representations: a singlet (1,1), (N f (N f − 1)/2 − 1, 1) and (1, N f (N f − 1)/2 − 1), with the total count given by (55). The irreducible representation N f (N f − 1)/2 − 1 of Sp(N f ) is the antisymmetric tensor of rank 2 with the condition that the trace of that tensor times a certain antisymmetric matrix vanishes (this representation does not exist for N f = 2). This breakdown is convenient to view in terms of Young diagrams in Fig. 1. Understanding the multiplet structure turns out to be very important in the analysis of the spectrum at small µ and small quark mass m q . This analysis will be presented elsewhere. Conclusions In this letter we used two methods to study the physics of 2-color QCD at finite baryon chemical potential. We used the fact that the measure of the Euclidean path integral in such a theory remains positive definite even at finite µ to derive certain inequalities between non-singlet meson correlators. These inequalities translate into inequalities between masses of the lightest mesons and impose strong restrictions on possible patterns of the symmetry breaking. In particular, we show that the lightest meson is the 0 + diquark, and therefore condensation (if it occurs) must occur in the channel ψ T Cγ 5 ψ thus leading to baryon charge superconductivity. This fact is in perfect agreement with model calculations which show that both instanton-induced and one-gluon exchange interactions are most attractive in this channel [3]. 
We also derived the low-energy effective Lagrangian describing the mesons and baryons of 2-color QCD with massless quarks. We found that both the sign and the magnitude of the coefficient of the potential term of this Lagrangian are fixed by a local chiral symmetry. The sign determines the pattern of the spontaneous symmetry breaking and is such that it agrees with the QCD inequalities. The masses of the mesons as a function of µ can also be determined exactly for small µ. For example, in the case of N_f = 2 flavors of quarks, the low-energy spectrum at small µ consists of one massless particle and a 4-plet of massive particles with masses equal to 2µ.
Risk factors for severe bleeding events during warfarin treatment: the influence of sex, age, comorbidity and co-medication. Purpose To investigate risk factors for severe bleeding during warfarin treatment, including the influence of sex, age, comorbidity and co-medication on bleeding risk. Methods Patients initiating warfarin treatment between 2007 and 2011 were identified in the nationwide Swedish Prescribed Drug Register, and diagnoses of severe bleeding were retrieved from the National Patient Register. Hazard ratios (HR) with 95% confidence intervals (CI) for severe bleeding were estimated using multiple Cox regression adjusting for indications and including covariates age, sex, comorbidities and co-medications. Interactions between sex and other covariates were investigated. Results The study cohort included 232,624 patients ≥ 18 years (101,011 women and 131,613 men). The incidence rate of severe bleeding was 37 per 1000 person-years, lower among women than men with an adjusted HR (95% CI) of 0.84 (0.80–0.88). Incidence of bleeding increased with age, HR 2.88 (2.37–3.50) comparing age ≥ 80 to < 40 years, and comorbidities associated with the highest risk of severe bleeding were prior bleeding, HR 1.85 (1.74–1.97); renal failure, HR 1.82 (1.66–2.00); and alcohol dependency diagnosis, HR 1.79 (1.57–2.05). Other comorbidities significantly associated with bleeding events were hypertension, diabetes, peripheral vascular disease, congestive heart failure, liver failure, stroke/TIA, COPD and cancer. Conclusion Most of the well-established risk factors were found to be significantly associated with bleeding events in our study. We additionally found that women had a lower incidence of bleeding. Potential biases are selection effects, residual confounding and unmeasured frailty. Electronic supplementary material The online version of this article (10.1007/s00228-020-02856-6) contains supplementary material, which is available to authorized users. Introduction There are several known risk factors for bleeding during treatment with oral anticoagulants, such as age, chronic comorbidities, prior bleeding and certain co-medications which are included in the HAS-BLED score [1]. Sex is not included in this risk score, and conflicting results have been found in different populations with several studies showing no difference in bleeding risk between the sexes [2][3][4][5][6][7], while other studies found a higher risk of bleeding in men [8][9][10][11]. To our knowledge, there is a lack of large population-based register studies on sex differences in severe bleeding risks in warfarin-treated patients. Therefore, we performed a study using national health registers with the aim to investigate risk factors for severe bleeding after initiation of warfarin including the influence of sex on the incidence of bleeding events. Data sources As data sources in this study, we used Swedish national health registers covering the entire population. Data were linked using the personal identity number (PIN) that uniquely identifies all citizens in Sweden. For information on dispensed prescription on warfarin and co-medication, we used the Swedish Prescribed Drug Register (PDR), held by the National Board of Health and Welfare, with data on all dispensed prescriptions in Sweden since July 2005 [12], including Anatomical Therapeutic Chemical classification (ATC) codes [13]. The coverage of the PDR is high with > 99.7% of all prescriptions being recorded with PINs [14]. 
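To illustrate the kind of record linkage described above, the sketch below joins a dispensing extract to a population register on a pseudonymized personal identifier and derives an index date from the first warfarin dispensing. All file and column names (pin, atc_code, dispense_date, dob) are hypothetical placeholders and do not reflect the actual register layouts.

```python
import pandas as pd

# Hypothetical, pseudonymized extracts; real register data are access-controlled.
drugs = pd.read_csv("pdr_extract.csv", parse_dates=["dispense_date"])   # pin, atc_code, dispense_date
people = pd.read_csv("population_register.csv", parse_dates=["dob"])    # pin, sex, dob

# Keep warfarin dispensings only (ATC code B01AA03, as in the study).
warfarin = drugs[drugs["atc_code"] == "B01AA03"]

# The first warfarin dispensing per person defines the index date.
index_dates = (
    warfarin.sort_values("dispense_date")
    .groupby("pin", as_index=False)
    .first()
    .rename(columns={"dispense_date": "index_date"})
)

# Link to the population register on the pseudonymized identifier.
cohort = index_dates.merge(people, on="pin", how="inner")
print(cohort.head())
```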
Diagnoses corresponding to the indications for warfarin treatment, comorbidity and bleeding diagnoses were identified through the Swedish National Patient Register (NPR) [15][16][17][18]. The NPR holds information on primary and up to 30 secondary diagnoses from all hospitalizations, nationwide since 1987 and outpatient encounters since 2001. Diagnoses are recorded by the International Classification of Diseases (ICD) system, and the version used in this study is the 10th version (ICD-10), used since 1997. Additionally, the register holds information on surgical procedures performed at hospitals using the Nordic Classification of Surgical Procedures [19]. Information on cancer, including the date of diagnosis, was retrieved from the Swedish Cancer Register [20]. The Cause of Death Register [21] and the Register of the Total Population [22] hold information on individual's sex, dates of birth, death and migration. Register data were de-identified for research use. Study population and follow-up Women and men over 18 years of age with a dispensed warfarin prescription (ATC code B01AA03) in PDR during the study period January 1, 2007, until December 31, 2011, were included in the study cohort. The inclusion period ended before the introduction of non-vitamin K oral anticoagulants (NOACs). The index date was the first date of a warfarin dispensing during this period. We only included new users, i.e. patients with no vitamin K antagonist (VKA) use 1 year prior to index date. We excluded subjects not resident in Sweden the year before and included the index date (Fig. 1). All patients in the cohort were followed for the occurrence of bleeding events until a maximum of 12 months after the index date, emigration or death, whichever occurred first (intention-to-treat-like approach). Indications The PDR does not hold information on the indication for drug treatment, and therefore, we as proxies included covariates corresponding to the likely indications of warfarin identified in the NPR through the main and secondary discharge diagnosis, as well as the outpatient visit diagnosis (suppl. Table 1). The indications for warfarin included in the analyses were venous thrombosis (VT), pulmonary embolism (PE), venous thromboembolism (VTE) prophylaxis, peripheral systemic embolism, vascular prosthesis, valvular disease, valvular and non-valvular atrial fibrillation (VAF and NVAF), cardioversion, cardiomyopathy, valvular prosthesis and mitral stenosis. For VAF and NVAF, we used diagnoses occurring up to 10 years before the index date, and for the other indications, we used a time window of 3 months before the index date. In the analysis, a patient could be classified as having several possible indications. Outcomes The primary outcome was the first severe bleeding event leading to hospitalization, identified as a main or secondary diagnosis in the NPR. We used the approach for identification of severe bleeding in health registers validated by Friberg et al. [23]. As secondary outcomes, we investigated severe bleeding categorized by anatomical site [23] (Suppl. Table 2). Comorbidity and co-medication In the analyses, we included covariates representing comorbidities and co-medications. For comorbidities, similar definitions and International Classification of Diseases, 10th revision (ICD-10) codes as in two previous studies [14,24] were used, supplemented with definitions and diagnoses used in the Charlson comorbidity index (CCI) [25,26] (Suppl. Table 1). 
Hospital admissions and outpatient contacts for comorbidities were identified up to 10 years before the index date, as were recorded in the cancer registry. Because of the lack of information on international normalized ratio (INR), a modified HAS-BLED score [1,27] without INR was used for classifying the risk of severe bleeding (Suppl. Table 3). As co-medication, the covariates we included were low-dose aspirin, other antiplatelet agents, NSAIDs, proton pump inhibitors (PPIs), systemic corticosteroids, antidepressants, selective serotonin reuptake inhibitors (SSRIs), antidiabetics and alcohol dependency drugs dispensed within 1 year before index date (Suppl. Table 4). Female hormone therapy was not included due to the different indications and the very different prevalence of use in women and men. However, we performed a restricted analysis excluding patients treated with female hormones. Drugs assessed as having clinically relevant drug interactions with warfarin, i.e. azole antibiotics, macrolides, quinolones, lipid-lowering agents, amiodarone and fluorouracil [28], were also included in the analyses (Suppl. Table 4). These consist of drugs where it is either recommended to avoid concomitant treatment with warfarin (D-interactions) or recommended to monitor INR for warfarin dose adjustment (C-interactions) [29]. Statistics Descriptive statistics are presented as numbers and proportions. Using multiple Cox regression, we estimated hazard ratios (HR) for severe bleeding in models including as covariates sex, age, comorbidities and co-medication. The HRs were presented with a 95% confidence interval (CI). We finally investigated the effect modification for each covariate by including an interaction term between the covariate and sex in the model. In additional regression models, we adjusted for age as a continuous variable instead of categorical and included co-medications that could lead to drug interactions with warfarin. All analyses were carried out using SAS® software, Version 9.4 (SAS Institute Inc., Cary, NC, USA). Results We included 232,624 patients (101,011 women and 131,613 men) in the cohort. Baseline characteristics of the study population are presented in Table 1. The mean (SD) age was 72.2 years for women and 68.5 years for men, with an excess of persons in the age group ≥ 80 years among females. The most common indications for warfarin treatment were atrial fibrillation, venous thromboembolism (VT and PE), followed by valvular diseases. The indications for warfarin treatment differed between women and men, with VT, PE and NVAF being more common in women compared to men. On the other hand, less women than men had valvular disease. The most common comorbidities were cardiovascular diseases, i.e. hypertension, ischemic heart disease, and congestive heart failure, followed by ischemic stroke or TIA. The frequency of comorbidities also differed between the sexes (Table 1) with, e.g. more women with hypertension but more men with diabetes mellitus, myocardial infarction and ischemic heart disease. Women more often had "high" HAS-BLED risk scores. There were also sex differences in co-medication, with more women treated with NSAIDs, PPIs, antidepressants, SSRIs and systemic corticosteroids, compared to men. More men than women were treated with low-dose aspirin, other antiplatelet agents, alcohol dependency drugs and antidiabetics. 
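The multiple Cox regression described in the Statistics section above can be sketched as follows. This is only an illustration of the model form (a proportional-hazards fit yielding adjusted hazard ratios, with a sex-by-covariate interaction coded as a product term); the column names are hypothetical and the actual analyses were performed in SAS.

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient, with hypothetical column names:
#   time  - follow-up time in years (max 1 year; censored at death or emigration)
#   event - 1 if severe bleeding during follow-up, 0 otherwise
#   female, age_80plus, prior_bleeding, renal_failure, ... - 0/1 covariates
df = pd.read_csv("warfarin_cohort.csv")

# Effect modification: interaction between sex and a covariate as a product term.
df["female_x_renal"] = df["female"] * df["renal_failure"]

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()            # coefficients, hazard ratios (exp(coef)) and 95% CIs
print(cph.hazard_ratios_)      # adjusted hazard ratios for all covariates
```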
The crude incidence rate of severe bleeding was 37 per 1000 person-years, in women 35 and in men 38 per 1000 Table 4 for definitions c Included in the modified HAS-BLED score person-years. In the analyses of overall risk of severe bleeding, we found a significantly lower risk in women compared to men with a crude HR (95% CI) of 0.94 (0.90-0.98) that was further reduced 0.84 (0.80-0.88) after adjustment for age, indications, comorbidities and co-medication (Table 2). Table 2 shows the association of covariates with severe bleeding risk. Indications for warfarin associated with a higher risk of severe bleeding were valvular disease with a HR of 1. 53 Among drugs with a known interaction with warfarin, only sulfamethoxazole, ciprofloxacin and simvastatin significantly influenced bleeding risk in an adjusted analysis (data not shown). Including these covariates in the analysis did not change our estimates. In the analysis of site-specific severe bleedings (Suppl Table 5), women had a lower adjusted risk of CNS bleeding and urogenital bleeding than men, while there was no difference in the risk of GI bleeding and other bleedings. In the analyses of effect modification (Suppl Table 6), women in the age groups 40-49 and 50-59 had a higher risk of severe bleeding than men. The lower severe bleeding risk in women was independent of indications, HAS-BLED score and comorbidities except renal failure, COPD and prior bleeding. For patients with renal failure, the risk in women exceeded the risk in men. For co-medications, only low-dose aspirin differed from the general pattern with an even more pronounced lower risk of severe bleeding in women (Suppl Table 6). Adjustments for age as a continuous variable did not change the overall estimates. Neither did the exclusion of patients receiving female hormone therapy and contraceptives lead to important changes in HRs. Discussion In our study, we found that the risk of severe bleeding was significantly associated with the majority of the risk factors included in the HAS-BLED score: age, hypertension, renal and liver failure, ischemic stroke or TIA, prior bleeding, alcohol dependency and co-medication with antiplatelet agents and NSAIDs. We additionally found a higher bleeding risk associated with other factors: diabetes, peripheral vascular disease, congestive heart failure, COPD and cancer. Furthermore, we found an overall lower risk of severe bleeding during warfarin treatment in women compared to men, even more pronounced after adjustment for other factors. The HAS-BLED risk score has been compared to other risk scores which include additional or other risk factors, such as diabetes and cancer [30][31][32]. Cancer patients with venous thrombosis are more likely to develop major bleeding during anticoagulant treatment than those without malignancy [33]. Diabetes and congestive heart failure have not previously been associated with a higher bleeding risk during treatment with anticoagulants [34,35]. A higher bleeding risk during warfarin treatment after 2 years was seen in patients with peripheral artery disease (PAD) [36]. An association between a higher risk of GI-bleeding in patients with COPD has also previously been found [37,38]. The finding of a lower risk of severe bleeding during warfarin treatment in women is in line with two other studies. 
A study with elderly patients with AF or VT on VKA treatment with a higher rate of bleeding events in men [10] and a Swedish cohort study on warfarin-naïvepatients showing male sex as an independent risk factor of severe bleeding [9]. The lower risk of CNS bleeding in women found in our study was also in line with another Swedish study on warfarin-treated AF patients [8]. Furthermore, despite the on average lower risk and consistency across analyses stratified on most risk factors, our results showed that in the age groups of 40-49 and 50-59 and in patients with renal failure, women may have a higher risk of severe bleeding than men. In a study on older patients with VKAs [39], frequent use of NSAIDs or selective COX-2-inhibitors was a strong risk factor for upper gastrointestinal haemorrhage. In our study, however, co-medication with NSAIDs only slightly increased bleeding risk ( Table 2). This could be due to the physicians selecting low-risk patient for combination therapy. Concomitant use of aspirin or other antiplatelet drugs in patients with anticoagulants is a known risk factor for bleeding complications [40,41] with an especially high risk in elderly patients [42,43]. The risk of low-dose aspirin disappeared in the adjusted analysis (Table 2), which may be explained by a similar selection effect or correlation of aspirin use with other strong risk factors. Our findings of a lower risk of severe bleeding in women compared to men should be viewed in the light of the risk benefit balance for stroke prevention in women with AF on warfarin, where the differences in the epidemiology of stroke among women and men must be acknowledged. In Sweden, men are more frequently prescribed antithrombotic treatment compared to women [44], and the national US registry data show that women were significantly less likely to use any oral anticoagulant for AF overall and at all levels of CHA 2 DS 2 -VASc score compared to men [45]. Data from a global register study on patients with newly diagnosed NVAF show that the use of anticoagulant therapy for stroke prevention is similar for women and men (approximately 60%), with underuse of anticoagulation therapy in high-risk patients reported for both sexes. At the same time, an overuse of anticoagulation was also reported in individuals with a low risk of stroke [46]. Meta-analyses on sex differences in stroke in AF patients found higher risk of stroke in women [6,47], and in patients with ischemic stroke and ICH, there were fewer women with good post-stroke functioning compared to men [48], and a possible higher net clinical benefit of VKA treatment in women was suggested in a study showing a slightly higher rate of stroke in women [7]. Our results could partially reflect the fact that the physicians avoid anticoagulation treatment in women with a high bleeding risk to a higher extent than in high-risk men, especially in the older age groups. Strengths and limitations The use of data from population-based healthcare registers with full coverage implies that we avoided recall bias and that there is no selection bias related to the study population. By using the validated bleeding diagnoses by Friberg et al. [23], we ensured the correct identification of the outcome. 
With the introduction of the NOACs, prescription patterns changed, and switches between the different antithrombotic substances became common [49].We therefore chose a period before the introduction of NOACs to avoid the complexity with several different antithrombotic substances and indications to consider and a possible selection bias related to the choice of therapy (channelling). The PDR lacks information on indications, and therefore diagnoses from the NPR corresponding to the indication for warfarin treatment were used as a proxy. Receiving a certain diagnosis depends on patient or physician attitudes and careseeking behaviour, which potentially could lead to sex differences in diagnoses recorded in the registers. Women have more contact with the healthcare system throughout their lifespan [50][51][52], which gives them an extra opportunity for disease detection and perhaps more diagnoses/comorbidities. We thus cannot exclude that differential misclassification of diagnoses in women and men could have affected our results. Furthermore, some NSAIDs are available over-the-counter (OTC). Therefore, NSAID use is likely underestimated in our analysis which is based on dispensed prescriptions. Information on dosage and dates of treatment discontinuation were not available. We performed an intention-to-treat analysis, with the assumption that the warfarin treatment was ongoing throughout the 12-month follow-up period. A gender difference in adherence to warfarin treatment could have contributed to the sex difference in severe bleeding. However, a Swedish nationwide observational study showed no difference between women and men for persistence to warfarin treatment in patients prescribed secondary preventive drugs after stroke [53]. Similarly, differential adherence or persistence could lead to biased effects of other risk factors. Our results may be confounded by patient frailty. Age and several of the chronic comorbidities we include in the analysis are likely to be associated with frailty, but clinical assessments of frailty were not available. It is noted that the association of bleeding risk with age is only moderately attenuated after adjustment, which could be ascribed to confounding by unmeasured frailty. We did not have access to diagnoses from primary care, and therefore, we do not have complete information on comorbidities. Hypertension was adjusted for in our analysis, but no data on blood pressure control were available. In a study with data from the Swedish Primary Care Cardiovascular Database, fewer women than men reached target blood pressure [54], but among US adults, women had generally higher hypertension control [55]. Sex differences in hypertension control could potentially contribute to differences in the risk of severe bleeding. Finally, it is a limitation that our study lacked data on INR and time in therapeutic range (TTR). The adjustment for diagnoses representing the indication for treatment may in part control for systematic differences in INR level and monitoring intensity. For example, valvular disease was associated with a higher bleeding risk. In a study investigating adverse outcomes in women and men with AF taking warfarin in the AMADEUS trial, TTR but not female sex was an independent predictor for combined cardiovascular death and stroke/ systemic embolism and clinically relevant bleeding events [11]. 
Studies based on data from the Swedish national quality registry for AF and anticoagulation have shown that there was no significant difference in TTR between women and men [56,57]. However, these studies did not assess the direction of the INR deviation from the therapeutic range that could result in either an increased risk of bleeding or an increased risk of thromboembolic events. In a study on epidemiology of subtherapeutic anticoagulation in the USA, women treated for venous thromboembolism were particularly likely to experience low INR [58]. Thus, a lower treatment intensity in women could contribute to a lower bleeding risk. Conclusion In this population-based cohort study in patients on warfarin, the majority of risk factors included in the HAS-BLED score could be confirmed to be significantly associated with a higher risk of bleeding. We also identified an association with several other comorbidities, i.e. diabetes, peripheral vascular disease, congestive heart failure, COPD and cancer. Women had a lower overall incidence of severe bleeding even after adjusting for age, comorbidity and co-medication. The apparent effect of sex was, however, relatively small compared with the effects of other risk factors. Our findings could partially be explained by selection effects and confounding due to the limitations of our data, including unmeasured confounders, notably treatment intensity and patient frailty. The individualized dosing may be a key factor, and therefore, exploring risk factors including sex differences in severe bleeding in patients on NOACs with standardized dosing becomes highly relevant. Future studies should also investigate factors not present in healthcare registers that may influence treatment choice and intensity of treatment. For VKA, including information on INR is highly relevant. Acknowledgements Open access funding provided by Karolinska Institute. Authors' contributions DMR: Responsible for study design, planning of statistical analyses, interpretation and presentation of results, writing and revising the manuscript ML: Statistical advice and supervision on study design and statistical analyses, interpretation and presentation of results, reviewing the manuscript REM: Responsible for study design, interpretation of results, reviewing the manuscript MA: Responsible for study design, planning and conducting of statistical analyses, interpretation and presentation of results, reviewing the manuscript Compliance with ethical standards Conflict of interest MA reports grants from AstraZeneca, Pfizer, H. Lundbeck and Mertz, Novartis, Janssen and Novo Nordisk Foundation (NNF15SA0018404), outside the submitted work; all grants received by the institutions of his employment; and personal fees for organizing pharmacoepidemiology courses and teaching at Atrium, the Danish Association for the Pharmaceutical Industry. ML is employed at the Centre for Pharmacoepidemiology, which receive grants from pharmaceutical companies, including Takeda, regulatory authorities and contract research organizations for performance of drug safety and drug utilization studies. DMR and REM have no conflicts of interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Antimicrobial efficacy of herbal, homeopathic and conventional dentifrices against oral microflora: An in vitro study INTRODUCTION This study compares the efficacy of herbal, homeopathic, and conventional dentifrices, on oral microflora using antibiotic susceptibility tests. METHODS Three strains of microorganisms, Streptococcus Mutans, Escherichia Coli, and Candida Albicans, were taken and incubated in Mutans media, Mueller Hilton agar, and Sabouraud Dextrose agar, respectively. Different dilutions (1:5, 1:10 and 1:15) of several brands of commercial toothpaste with different compositions were made. Sterile disks were incorporated with an equal amount of prepared toothpaste formulations using a micropipette. These disks were then placed equidistant to each other, and the plates were incubated for 24 hours. RESULTS The zone of inhibition against S. Mutans is found to be higher in homeopathic dentifrice 24 mm, 19 mm, and 20 mm, followed by herbal dentifrice 19 mm, 17 mm, and 13 mm, and the least by conventional dentifrice 17 mm, 15 mm and no inhibition, at 1:5, 1:10 and 1:15 dilution, respectively. The zone of inhibition against E. Coli is found to be higher in herbal dentifrice 18 mm, 17 mm and 16 mm followed by conventional dentifrice 18 mm, 17 mm, and 14 mm, and no inhibition by homeopathic dentifrice at 1:5, 1:10 and 1:15 dilution, respectively. Zone of inhibition against C. Albicans is found to be higher in herbal dentifrice 14 mm, 12 mm and 9 mm followed by conventional dentifrice 14 mm, 9 mm, and no inhibition, and the least by homeopathic dentifrice 10 mm, 9 mm and 7 mm, at 1:5, 1:10 and 1:15 dilutions, respectively. CONCLUSIONS Toothpaste formulations containing homeopathic and natural antimicrobial agents were more effective in controlling the oral microflora compared to toothpaste containing synthetic antimicrobial agents like triclosan. INTRODUCTION Globally, dental caries and gingivitis are the most common oral diseases that affect people of all ages1. The occurrence of dental caries is approximately 60–65% among the Indian population2,3. Dental diseases are primarily caused by the virulence of complex oral micro-communities4. Dental plaque micro-organisms degrade the dietary carbohydrates and produce lactic acid, which leads to localized demineralization and dental caries, eventually5,6. Poor oral hygiene is the major risk factor for the accumulation of microbes and their harmful activities4,7. Most of the dental diseases can be prevented from occurring by the simple practice of personal hygiene habits8,9. Besides mechanical cleansing of teeth, the use of chemical agents has been proposed as an effective method of reducing plaquemediated disease10. Most effective among them is the tooth brushing habit as the antibacterial efficacy of the toothpaste has a major role to play in the outcome11,12. Triclosan, an antibacterial agent, which is a common ingredient in toothpaste, helps to prevent gingivitis by reducing calculus formation13. Herbal products contain antimicrobial agents and may be more appealing as they Short Report| Population Medicine Popul. Med. 2020;2(May):14 https://doi.org/10.18332/popmed/122529 2 do not require alcohol, preservatives, and flavours, for their activity14,15. The basic homeopathic principle is that a substance in fewer doses efficiently cures those similar symptoms that would require a larger dose. It has been proven that the homeopathic formulations are non-toxic, antiinflammatory, and antimicrobial16. 
Considering the availability of abundant brands of toothpaste in the market, their efficacy to control the bacterial count has to be monitored and analysed scientifically. Currently, the increased popularity of herbal products has mandated that dental professionals evaluate the effectiveness of these products in the prevention of oral ailments and provide evidence-based suggestions to their patients for making a better choice17. Also, there is a growing interest around the world to study the benefits of various medicinal plants, over the last decade18. Antibiotic susceptibility testing is the method employed for testing the effectiveness of agents against specific microorganisms19. Disk diffusion assay offers many advantages such as simplicity, low cost, ability to test large numbers of microorganisms and antimicrobial agents, and the ease to interpret results19. Hence, this study aims to compare the efficacy of herbal, homeopathic, and conventional dentifrices, on oral microflora using antibiotic susceptibility tests. METHODS For the present in vitro study, three strains of microorganisms, Streptococcus Mutans, Escherichia Coli, and Candida Albicans, were considered. The media used were Mutans media, Mueller Hilton agar, and Sabouraud Dextrose agar, respectively, for each micro-organism. We prepared dilutions of several brands of commercial toothpaste with different compositions. Double distilled water, pyrogen-free water, test tubes, Petri plates, micropipettes, and gel puncher were the other materials utilized for the study. Solutions of selected dentifrices were made by mixing 1 g of dentifrices in 4 mL of distilled water to give 1:5 dilutions, in a sterile container. Further dilutions were made by mixing 1 g of toothpaste with 9 mL and 14 mL to give 1:10 and 1:15 dilutions, respectively (Figure 1). Three strains of microorganisms, S. Mutans, E. Coli, and C. Albicans, were grown in Mutans media, Mueller Hilton agar, and Sabouraud Dextrose agar media, respectively. These bacteria were inoculated in their respective medium by the swab method (Figure 2). Sterile disks were incorporated with an equal amount of different dilutions of prepared toothpaste formulations using a micropipette. These disks were then placed equidistant to each other and the plates were incubated for 24 hours. The zones of inhibition were measured to the nearest whole mm using a graduated ruler by holding the test plates in front of a desk lamp. Ethical clearance was obtained from the institutional ethical committee before beginning the study. RESULTS Zone of inhibition against S. Mutans was found to be higher in homeopathic dentifrice followed by herbal dentifrice, and the least by conventional dentifrice, and no inhibition at 1:5, 1:10 and 1:15 dilutions, respectively (Table 1). Zone of inhibition against C. Albicans was found to be higher in herbal dentifrice followed by conventional dentifrice, and no inhibition, and the least by homeopathic dentifrice at 1:5, 1:10 and 1:15 dilutions, respectively. Zone of inhibition against E. Coli was found to be higher in herbal dentifrice followed by conventional dentifrice, and no inhibition by homeopathic dentifrice at 1:5, 1:10 and 1:15 dilutions, respectively. Figure 1. Preparation of dentifrice formulation Short Report| Population Medicine Popul. Med. 
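For reference, the zones of inhibition reported in the Results (and summarized in Table 1) can be tabulated and ranked with a short script such as the sketch below. The script is only an illustrative summary of the reported measurements, not part of the original analysis; "no inhibition" is coded as 0 mm.

```python
import pandas as pd

# Zones of inhibition (mm) at 1:5, 1:10 and 1:15 dilutions, as reported in this study
# ("no inhibition" coded as 0 mm).
zones = pd.DataFrame(
    {
        ("S. mutans", "herbal"): [19, 17, 13],
        ("S. mutans", "homeopathic"): [24, 19, 20],
        ("S. mutans", "conventional"): [17, 15, 0],
        ("E. coli", "herbal"): [18, 17, 16],
        ("E. coli", "homeopathic"): [0, 0, 0],
        ("E. coli", "conventional"): [18, 17, 14],
        ("C. albicans", "herbal"): [14, 12, 9],
        ("C. albicans", "homeopathic"): [10, 9, 7],
        ("C. albicans", "conventional"): [14, 9, 0],
    },
    index=["1:5", "1:10", "1:15"],
)

# Mean zone of inhibition per organism and dentifrice, largest first.
print(zones.mean().sort_values(ascending=False))
```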
DISCUSSION
Microbial biofilms are the major causative factors of caries and periodontal disease, and it is of utmost importance to control these biofilms by mechanical and chemical debridement15. Dental caries largely results from poor oral hygiene, and it has been proven scientifically that maintaining good oral hygiene is the key to preventing dental diseases4. Toothpaste is a gel dentifrice used with a toothbrush to clean the teeth and maintain oral hygiene. There is a paucity of literature about the efficacy of different types of dentifrices against oral microorganisms. Hence, this study was conducted to test the antimicrobial efficacy of three different dentifrices (herbal, homeopathic, and conventional fluoride-containing) against S. Mutans, E. Coli, and C. Albicans. Three dentifrice formulations (1:5, 1:10 and 1:15) were made by mixing distilled water with dentifrice. The media used were Mutans media, Mueller Hilton agar, and Sabouraud Dextrose agar, respectively, for each microorganism, inoculated by the swab method. The antimicrobial efficacy was then tested using the disk diffusion method.
In this study, the antimicrobial efficacy of the dentifrice formulations against S. Mutans was measured as a zone of inhibition. The zone of inhibition was found to be highest for the homeopathic dentifrice (24 mm, 19 mm and 20 mm), followed by the herbal dentifrice (19 mm, 17 mm and 13 mm), and lowest for the conventional dentifrice (17 mm, 15 mm and no inhibition), at 1:5, 1:10 and 1:15 dilutions, respectively. Similar results were obtained in the study conducted by Mohankumar et al.17, where herbal toothpaste was found to have antibacterial activity comparable to conventional toothpaste. Herbal toothpaste combining different varieties of herbs has better antimicrobial activity than toothpaste with a single herbal ingredient. However, studies conducted by Anushree et al.14 and Prasanth3 showed that fluoride- and triclosan-containing dentifrices had higher zones of inhibition at higher dilutions. Homeopathic-based dentifrices showed antimicrobial efficacy against S. Mutans, since the major ingredients of their formulations were kreosotum, Plantago major, and calendula14. Similar results were observed by Gibraiel et al.12, in whose study the toothpaste with a natural formulation gave the maximum zones of inhibition against E. Coli at 1:1 dilution, whereas Anushree et al.14 and Prasanth3 again found higher zones of inhibition for fluoride- and triclosan-containing dentifrices at higher dilutions. The difference in the results may be attributed to the active ingredients used in the herbal dentifrices.
This difference may be because different herbal products exhibit different levels of antimicrobial activity. Herbal dentifrice showed greater antimicrobial activity than the homeopathic and conventional dentifrices against all the test organisms. It is thus necessary to acquire and preserve this traditional system of medicine by proper documentation. Further in vivo studies have to be conducted to assess the antimicrobial effectiveness of these dentifrices.
Figure 2. Antimicrobial activity of various dentifrices against various microorganisms. The zone of inhibition is calculated as the diameter of the area around the disk that is free of microbial colonies.
Table 1. Zone of inhibition (mm) of various types of dentifrices at three dilutions (1:5, 1:10, 1:15; no inhibition recorded as 0)
S. Mutans: herbal dentifrice 19, 17, 13; homeopathic dentifrice 24, 19, 20; conventional dentifrice 17, 15, 0
E. Coli: herbal dentifrice 18, 17, 16; homeopathic dentifrice 0, 0, 0; conventional dentifrice 18, 17, 14
C. Albicans: herbal dentifrice 14, 12, 9; homeopathic dentifrice 10, 9, 7; conventional dentifrice 14, 9, 0
CONCLUSIONS
The results of the present study show that toothpaste formulations containing homeopathic and natural antimicrobial agents were more effective in controlling oral microflora than conventional toothpaste. This provides insight to dentists that homeopathic and herbal dentifrices are effective in controlling microorganisms related to major oral diseases and can be recommended to patients.
Neurofibromatosis: Evaluation of Clinical Features of 39 Cases
Objective: Our study aims to evaluate the clinical findings of childhood neurofibromatosis type 1 cases. Material and Methods: The clinical features of childhood patients who were followed up and treated by Pamukkale University Faculty of Medicine, Department of Pediatric Neurology, between 2015 and 2023 were evaluated retrospectively. Results: 39 children were included in the study. Twenty-one of the cases were male and 18 were female. The mean age was 11.71±4.05 years. Eleven (28.2%) patients had a family history of neurofibromatosis. Lisch nodules were seen in 14 patients, and axillary freckling was seen in 21 patients. Six of the cases had neurofibromas. Plexiform neurofibroma was not present in any of the cases. Four children had scoliosis. Nine of the cases had learning disabilities. Conclusion: The symptoms, signs, and complications of the cases in our study are consistent with the literature. The low number of complications was attributed to the young age of the cases. In this study, we emphasize the importance of early recognition of NF-1 in terms of informing families about the disease and preventing treatable complications through regular clinical follow-up of these children.
INTRODUCTION
Neurofibromatoses are a group of diseases in which nerve sheath tumors are seen. Neurofibromatosis comprises three disease groups: Neurofibromatosis Type 1 (NF-1), Neurofibromatosis Type 2 (NF-2), and schwannomatosis. NF-1, the most common neurocutaneous disease, is autosomal dominant, and its incidence is reported as 1/3000-1/4000 (1). The NF-1 gene has been cloned in the q11.2 region of chromosome 17; it encodes a tumor suppressor protein called neurofibromin. Today, more than 1500 mutations specific to the NF-1 gene have been reported (2). One of the mechanisms of tumor formation in NF-1 is explained by the 'two-hit hypothesis': with the 'first hit', one of the alleles is structurally inactivated; with the 'second hit', loss of heterozygosity develops as a result of a somatic mutation in the other allele (3). Since NF-1 is a disease that can affect many systems, its findings vary. Café-au-lait macules, Lisch nodules, neurofibromas, axillary and inguinal freckling, and hamartomatous lesions of the brain are quite common. NF-1 patients may have learning difficulties, endocrine disorders, bone defects, nutritional problems, and additional problems such as hypertension. Disease symptoms may differ between patients as well as among affected individuals within the same family (4).
MATERIAL AND METHODS
In this study, file information of patients with NF-1 who presented to the Pamukkale University Pediatric Neurology Outpatient Clinic between January 2015 and January 2023 was retrospectively analysed. Demographic information (age, gender), physical examination findings, brain magnetic resonance imaging (MRI), abdominal ultrasound (USG), echocardiography (ECHO) results, and laboratory data were recorded for each patient. The records of the other outpatient clinics in which the patients with NF-1 were followed were also reviewed for endocrinological, cardiological, orthopedic, and psychiatric pathologies that may accompany the disease. The National Institutes of Health (NIH) NF-1 diagnostic criteria were used for the diagnosis; a patient was diagnosed with NF-1 in the presence of two or more of these criteria (5) (a schematic check of this rule is sketched below). Only cases that attended regular follow-up visits were included in the study.
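As a schematic illustration of the diagnostic rule applied here (diagnosis when two or more NIH criteria are fulfilled), the sketch below counts fulfilled criteria for a single patient. The seven criteria listed are the standard 1988 NIH consensus criteria, assumed here because the article cites but does not enumerate them; the function is illustrative only and is not a substitute for clinical assessment.

```python
# The standard NIH (1988) NF-1 criteria, assumed here (the article cites but does not list them).
NIH_NF1_CRITERIA = [
    "six_or_more_cafe_au_lait_macules",
    "two_or_more_neurofibromas_or_one_plexiform",
    "axillary_or_inguinal_freckling",
    "optic_glioma",
    "two_or_more_lisch_nodules",
    "distinctive_osseous_lesion",           # e.g. sphenoid dysplasia or pseudarthrosis
    "first_degree_relative_with_nf1",
]


def meets_nih_nf1_criteria(findings: dict) -> bool:
    """Return True when two or more NIH criteria are fulfilled."""
    fulfilled = sum(bool(findings.get(criterion, False)) for criterion in NIH_NF1_CRITERIA)
    return fulfilled >= 2


# Example: a child with multiple cafe-au-lait macules and axillary freckling.
patient = {"six_or_more_cafe_au_lait_macules": True, "axillary_or_inguinal_freckling": True}
print(meets_nih_nf1_criteria(patient))  # True
```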
Statistic The collected data were analyzed using SPSS Statistics 18.0 (Predictive Analytics Software Statistics for Windows, Version 18.0, SPSS Inc., Chicago, IL, US, 2009) software package. Continuous variables were expressed as median (interquartile range) values, and categorical variables were expressed as numbers and percentage values. Kolmogorov-Smirnov and Shapiro-Wilk tests were used to analyze the normal distribution characteristics of the continuous variables. Mann-Whitney U and Kruskal-Wallis analysis of variance (post hoc) tests were used in the comparisons of non-parametric independent data. The probability (p) statistics of < 0.05 indicated statistical significance. RESULTS Twenty-one (53.8%) of the patients included in the study were male and 18 (46.1%) were female. Patient ages ranged from 3 to 17 years. While 11 patients (28.2%) had a family history, 28 patients (71.7%) had no family history (Table I). It was determined that there was a history of consanguinity between the parents in four of the cases. Cafe au lait spots were present in all patients, and axillary and/or inguinal freckles were present in 21 patients (53.8%). Lisch nodules were detected in 14 patients (35.8%), and optic glioma was not found in any patient. Neurofibroma was detected in 6 patients. When the patients were evaluated neurologically; There were learning difficulties and cognitive disorders in 12 patients (34.4%), epilepsy in 5 patients (12.8%), and macrocephaly in 2 patients (5.1%). Abnormal MRI imaging was detected in 21 patients (Table II). T2 hyperintense lesions were detected in 16 (76.1%) of them, ventricular enlargement in 2 patients (9.5%), CSF spacing in the optic nerve sheath in 2 patients (9.5%), cavernous hemangioma in 1 patient (4.7%), and brain stem glioma was detected. MRI was normal in 15 patients (38.4%). When the patients were evaluated in terms of malignancies in NF1, peripheral neurofibroma was found in 2 patients (5.1%). Abdominal USG was performed in 30 (76.9%) of the patients in terms of the possible gastrointestinal stromal tumor, and no pathology was detected in any of the patients. In the endocrinological evaluation, 3 patients (7.6%) had short stature, and 2 patient (5.1%) had precocious puberty. Mitral valve prolapse (MVP) was detected in 2 (5.1%) of 11 patients who underwent cardiac examination, and no pathology was found in 9 patients (23%). When orthopedic comorbidities were screened, 4 patients (10.2%) had orthopedic pathology (scoliosis, pes planus), while no pathology was found in 25 patients (64.1%). The genetic examination was not performed in 27 (69.2%) of 39 patients. While mutations in the NF1 gene were detected in 12 (30.7%) of 39 patients who underwent genetic analysis. DISCUSSION In our study, the clinical features of 39 NF-1 cases were evaluated. Parents or siblings of 11 of these cases had NF-1 diagnosis. In studies, positive family history has been found at different rates ranging from 39% to 54% (6,7). The data in our study were compatible with the literature data. Cafe au late are one of the major diagnostic criteria of NF-1. It is typical for their number and size to increase with age (8). In our study, there were many cases in all cases. These spots, which cause cosmetic problems, cannot potentially cause a malignant lesion. Axillary or inguinal freckling, which is one of the diagnostic criteria for NF-1, usually occurs in late childhood, and school age (4). Axillary freckling was observed in 12 children. 
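The non-parametric comparisons named in the Statistics subsection (normality testing followed by Mann-Whitney U or Kruskal-Wallis tests) can be sketched as follows; the data below are hypothetical placeholders and the original analyses were performed in SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ages at evaluation in two groups (e.g. with vs without a family history).
group_a = rng.normal(loc=9, scale=4, size=11)
group_b = rng.normal(loc=12, scale=4, size=28)

# Normality check (Shapiro-Wilk), then a non-parametric two-group comparison.
print(stats.shapiro(group_a))
print(stats.mannwhitneyu(group_a, group_b))

# Kruskal-Wallis for comparisons across more than two groups (hypothetical third group).
group_c = rng.normal(loc=11, scale=3, size=10)
print(stats.kruskal(group_a, group_b, group_c))
```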
Lisch nodules are melanocytic hamartomas of the iris and are specific for NF-1 (6). These lesions begin to appear on the iris surface from the age of 2.5 years and are seen in more than 90% of affected adults (10). In a study conducted in 162 pediatric patients with NF-1, the presence of Lisch nodules was found to be 5% in patients younger than 3 years of age, 42% between 3-4 years of age, and 55% between 4-5 years of age (11). Lisch nodules are often asymptomatic, do not cause any visual impairment and do not require treatment (12). This study detected Lisch nodules in 14 patients (34.4%), in line with the literature. We detected Lisch nodules at the earliest in a case aged 5 years.

Macrocephaly is a common finding in patients with NF-1 (4). The incidence of macrocephaly has been reported as 40% in cases with NF-1, but in our study macrocephaly was found in 2.1%. Regular measurement and recording of head circumference is very important for detecting macrocephaly. Hydrocephalus should also be kept in mind.

The most common pathology observed in neurofibromatosis on brain MRI examinations is hyperintense lesions seen in different localizations on T2-weighted series. These lesions, called hamartomas, are characteristically benign and are not accompanied by neurological problems (1). The most common brain tumor in patients with NF-1 is optic glioma; most are asymptomatic (13). Duffner et al. (14) found hamartomas in 62% and abnormal findings other than hamartoma in 12% of NF patients. In our study, 16 of the cases had MRI findings consistent with NF. Optic glioma was not detected in any of the cases.

Seizures are reported more frequently in neurofibromatosis patients than in the normal population. Five cases in our study group had epileptic seizures and were on drug therapy; of these patients, 3 used valproic acid and 2 used levetiracetam. In one study, the prevalence of epilepsy was reported to be 7% (15). Mental retardation, learning difficulties, language problems, lack of attention and organization, and psychosocial problems are more common in neurofibromatosis type 1 patients than in the normal population. One study reported that the prevalence of learning disabilities in 152 cases of NF-1 was 75% (16). Ten of our cases exhibited learning difficulties and were monitored by the child psychiatry department. Additionally, our two patients with epilepsy also experienced learning difficulties.

Thirteen percent of NF patients have been reported to have a height below -2 standard deviations (17). Within this study, three patients (13.7%) exhibited short stature, while two patients (3.45%) experienced precocious puberty. The patient with precocious puberty did not have optic glioma or pituitary adenoma. The frequency of diseases such as congenital heart disease (especially pulmonary stenosis), hypertension, and renal artery stenosis is increased in NF-1 patients (18). In our study, MVP was detected in 2 (3.45%) of the 11 patients who underwent cardiac examination, and no pathology was found in 8 patients (27.5%). Orthopaedic comorbidities also occur with increased frequency in NF-1; pathologies such as scoliosis, kyphosis, bone dysplasia, and non-ossifying fibroma are more common in NF-1.

CONCLUSION
NF-1 is a clinically complex and heterogeneous disease.
It has been concluded that close follow-up of NF1 patients is necessary and important because of the involvement of many systems, the risk of malignancy in patients, the comorbidities that may accompany it, and their effects on quality of life.
2023-07-12T16:37:53.991Z
2023-06-05T00:00:00.000
{ "year": 2023, "sha1": "b5dd9c78257f87a316b668048d3bcc7856fec8c4", "oa_license": "CCBYNC", "oa_url": "https://medscidiscovery.com/index.php/msd/article/download/925/742", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "26af2dca74ab755f3231c9b4a7aa7abb3681b935", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
237887991
pes2o/s2orc
v3-fos-license
Srimad Bhagawat Gita as a Part of Mahabharata and its Comparison with Other Religious Literature

The Mahabharata is a great Indian epic that discusses many facets of human life. It teaches many virtues in the form of a story that is applicable even today in this technological world. It teaches that Dharma always wins over adharma: we should be on the side of Dharma and follow good virtues, for our greed and lust will ultimately destroy our peace and destroy us. This study introduces the Mahabharata, gives a snapshot of it, and explains why the Dharma-yuddha became inevitable, the confusion of Arjuna at the beginning of the war, and how Lord Krishna resolves it. The discourse between Arjuna and Lord Krishna is known as the Bhagawat Gita. Certain verses of the Bhagawat Gita are discussed and a comparison is made with other religious literature. The methodology used here is the analytical and critical method for analysis of the philosophical content.

Introduction
The Bhagawat Gita is a great philosophical work; it contains all the essence of Veda and Vedanta. The whole text is written in the form of a conversation, in question-and-answer form. Asking questions is an art, and every question asked by Arjuna shows his intelligence. Lord Krishna answers each question asked by Arjuna and clarifies his doubt. In order to understand the Bhagawat Gita, let us first understand the Mahabharata. The whole story revolves around the Kuru family. (Basu, 2016) Krishna-Dwaipayan Vyasa, himself a character in the epic, composed it; according to tradition, he dictated the verses and Ganesha wrote them down. At 100,000 verses, it is the longest epic poem ever written, generally thought to have been composed in the 4th century BCE or earlier. The main story revolves around the two branches of the family, the Pandavas and the Kauravas, who fight in the Kurukshetra War for the throne of Hastinapura. (Basu, 2016) It was first narrated by a student of Vyasa at a snake-sacrifice of the great-grandson of one of the major characters of the story. Including within it the Bhagavad Gita, the Mahabharata is one of the most important texts of ancient Indian, and indeed world, literature.

Shantanu, the king of Hastinapur, marries Ganga, with whom he has a son, Devavrat. Several years later he falls in love with Satyavati; this is the root cause of all the events that happen later. Satyavati's father demands that he will agree to the marriage only if his daughter Satyavati's son and her descendants inherit the throne. Shantanu did not agree to this demand, but later Devavrat vowed to renounce the throne and to remain celibate throughout his life, and Satyavati's father then agreed to the marriage; in this way the marriage of Shantanu and Satyavati took place. Two sons were born to Satyavati and Shantanu, but the elder one died when he reached adulthood. Therefore the younger son, Vichitravirya, was enthroned. Vichitravirya marries two princesses with the help of Bhishma (Devavrat), but he dies soon after, childless. Satyavati summoned her son Vyasa to impregnate the two queens. Vyasa had been born to Satyavati of a great sage named Parashar before her marriage to Shantanu. Thus, by the Niyoga custom, the two queens each had a son of Vyasa: the elder queen delivered Dhritarashtra (born blind) and the younger one Pandu. Vyasa also impregnated the maid of these queens, and she delivered a son named Vidur.
Dhritarashtra grew up to be the strongest of all the princes in the country, Pandu was extremely skilled in warfare and archery, and Vidur knew all the branches of learning, politics, and statesmanship. Pandu was crowned king because Dhritarashtra, being blind, could not become king by law. Dhritarashtra married Gandhari, and Pandu married Kunti and Madri. Everything was going smoothly until Pandu announced that he wanted to go to the jungle with his wives for some time, and all duties related to the kingdom were assigned to Dhritarashtra. A few years later Kunti returned to the kingdom with her five sons and with the bodies of Pandu and Madri. (Basu, 2016) The five boys were the sons of Pandu, born to his two wives through the Niyoga custom from gods: the eldest was born of Dharma, the second of Vayu, the third of Indra, and the youngest, twins, of the Ashvins. In the meanwhile, Dhritarashtra and Gandhari too had children of their own: 100 sons and one daughter. The Kuru elders performed the last rites for Pandu and Madri, and Kunti and the children were welcomed into the palace.

Triumph of Dharma over Adharma
Lord Krishna clearly says that when adharma overrules Dharma, He will come to re-establish Dharma. This does not mean that God will come only when there is grave destruction of Dharma. It is true that He is everywhere, He is everything, and of course He is present well inside us also. It only means that He will take His full power (manifestation) whenever needed. The important verses of the Bhagavad Gita which convey this message are Chapter 4, Verses 7-8 ("यदा यदा हि धर्मस्य ग्लानिर्भवति भारत..."): "Whenever there is a decline of righteousness, O Bharata... For the protection of the good, for the destruction of evil-doers, and for the sake of firmly establishing righteousness, I am born from age to age."

The Concept of Dharma in the Ramayana
In the Ramayana, Rama (Dharma) defeated Ravana (Adharma) and rescued Sita.

The Concept of Dharma in the Thirukkural

The Concept of Truth in the Quran
(Stacey, 2017) "Righteousness is not that you turn your faces toward the east or the west, but [true] righteousness is [in] one who believes in God, the Last Day, the angels, the Book, and the prophets and gives wealth, in spite of love for it, to relatives, orphans, the needy, the traveler, those who ask [for help], and for freeing slaves; [and who] establishes prayer and gives zakah (obligatory charity); [those who] fulfill their promise when they promise; and [those who] are patient in poverty and ailment and during battle. Those are the ones who have been true, and it is those who are the righteous." (Quran 2:177)

Wisdom is More Powerful than Physical Strength
Arjuna, the third of the five Pandavas, was the most powerful and renowned warrior, known for his heroism. Before the battle begins, both Duryodhana and Arjuna meet Krishna and ask for help. Krishna agrees and offers two types of help, from which each may choose. One is his army with all its weapons, which they may use; the other is that he will accompany them in the fight but will never fight directly or use his weapons. Duryodhana chooses the first, thinking that with more soldiers and arms he will be the most powerful and can easily win the battle. Arjuna's choice was different: he asked Lord Krishna to accompany him throughout the battle, and Krishna agreed to drive Arjuna's chariot during the war. Arjuna chose wisdom rather than military power, because wisdom is more powerful than anything else.
Review of Related Literature
The investigator has reviewed two studies related to the topic under study. (Pillai, 1999), in his study "Educational ideas in Bhagavad Gita and its relevance to modern world", found that the content of education should include both temporal and spiritual subjects so that temporal and spiritual knowledge may supplement and complement each other, but priority should be given to the attainment of self-realisation. A properly educated person is one who is free from ego, desire, anger, hatred, jealousy and selfishness. A real scholar or pandita is one who is not influenced by emotions. A perfect individual is a realised soul who controls his intellect by the self (Atman), the mind by the intellect, the senses by the mind, and the body by the senses. Self-realisation as an aim of education is very often correlated with the idealistic school of philosophy in education, but self-realisation as stated in the Gita as an aim of education is very comprehensive.

(Bhagabati, 2009), in "The philosophy of the Bhagavadgita and its Upanisadic sources", notes that the Upanisads, as we have seen, are called Brahmavidya or the science of reality, which may be called monistic and idealistic. The statement "All this is Brahman" (sarvam khalvidam brahma) insists on the unity of everything that exists, and the Upanisads are idealistic in the sense that all is pervaded by the Supreme Consciousness, who is of the nature of the self. It is the Real of all reals (satyasya satyam). The Bhagavadgita also refers to the reality of one Absolute Brahman whose nature is pure consciousness, as in the Upanisads; it communicates the Supreme knowledge of the Upanisads to humanity. Brahman manifests Himself in the external material world and also as individual beings. In the Kathopanisad the world is compared to a peepul (asvattha) tree, which is uprooted and eternal, and whose branches are scattered below in the form of the variety of existence. A similar idea is found in the Bhagavadgita.

Methodology of the Study
The study adopted the analytical and critical method for analysis (Shodhganga). Analysis is a very dominant philosophical tendency; it involves "breaking down" (i.e., analyzing) philosophical issues. Analysis may be explained as an understanding of fundamental concepts, other related concepts, and the interrelationships between these concepts. Critical analysis can be defined as the intellectually disciplined process of actively and skilfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.

8. Implications of the Study
1) We should fight for Dharma; we should not keep quiet when adharma is happening in front of our eyes.
2) Dharma ultimately triumphs; therefore we should always follow the right path, but we should have patience.
3) Dharma is the path of God, which is clearly discussed in all world religions.
4) Wisdom (intelligence) is more powerful than physical strength.

Conclusion
This study analyses the inevitable war between the two parts of the royal family, the Pandavas and
2021-09-01T15:13:03.315Z
2021-06-21T00:00:00.000
{ "year": 2021, "sha1": "677d42df92dbaedaf8f79984a39f111634f62239", "oa_license": "CCBY", "oa_url": "http://www.scholink.org/ojs/index.php/wjer/article/download/4006/4345", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c6a304e007ffee311e4b25735122f85e5dee37d0", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Philosophy" ] }
17614576
pes2o/s2orc
v3-fos-license
Sex with sweethearts: Exploring factors associated with inconsistent condom use among unmarried female entertainment workers in Cambodia Background Despite the success in promoting condom use in commercial relationships, condom use with regular, noncommercial partners remains low among key populations in Cambodia. This study explores factors associated inconsistent condom use with sweethearts among unmarried sexually active female entertainment workers (FEWs). Methods In 2014, the probability proportional to size sampling method was used to randomly select 204 FEWs from entertainment venues in Phnom Penh and Siem Reap for face-to-face interviews. Multivariate logistic regression analysis was conducted to examine independent determinants of inconsistent condom use. Results Of total, 31.4% of the respondents reported consistent condom use with sweethearts in the past three months. After adjustment, respondents who reported inconsistent condom use with sweethearts remained significantly less likely to report having received any form of sexual and reproductive health education (AOR = 0.49, 95% CI = 0.22–0.99), but more likely to report having been tested for HIV in the past six months (AOR = 2.19, 95% CI = 1.03–4.65). They were significantly more likely to report having used higher amount of alcohol in the past three months (AOR = 1.29, 95% CI = 1.01–1.99) and currently using a contraceptive method other than condoms such as pills (AOR = 4.46, 95% CI = 1.34–10.52) or other methods (AOR = 9.75, 95% CI = 2.07–9.86). Conclusions The rate of consistent condom use in romantic relationships among unmarried FEWs in this study is considerably low. The importance of consistent condom use with regular, non-commercial partners should be emphasized in the education sessions and materials, particularly for FEWs who use non-barrier contraceptive methods. Background In the Cambodian context, female entertainment workers (FEWs) refer to women working in entertainment venues such as massage parlors, night clubs, karaoke bars, beer gardens, etc. and who may or may not be involved in transactional sex [1]. The context of transactional sex in Cambodia has been dramatically altered by the introduction of the "Law on Suppression of Human Trafficking and Sexual Exploitation" in 2008 [2]. Many brothels closed down, and transactional sex has shifted to entertainment venues or other informal and hidden settings such as streets and parks. The lines between direct and indirect sex work have become less clear, and an increase in indirect transactional relationships, such as sweethearts, has been documented [3]. "Sweethearts," as they are called locally, involve romantic relationships and include normative lack of condom use as displays of trust and intimacy as well as indirect transactional sex through dinner dates, gifts or shopping trips [3]. For FEWs, a sweetheart is typically a boyfriend and/or regular client [4]. They are "a partner from a non-commercial, non-marital sexual relationship that possesses a certain degree of affection and trust" [5]. A sweetheart could give regular gifts and other forms of support. Anecdotal evidence indicates that many sweethearts of FEWs are previous clients [6,7]. Their relationship has become intimate over time, which enables a client to become a regular client and then a sweetheart. Some sweethearts become FEWs' cohabitating partner or spouse [6,7]. 
Marginalized communities had been affected by the global financial crisis in 2008, which led to the closure of several garment factories, with anecdotal evidence indicating that a number of female garment workers became workers in the entertainment industry. The National Center for HIV/AIDS, Dermatology and STD (NCHADS) estimated that there were 35,000 FEWs in Cambodia in 2012, of whom 60% were living in Phnom Penh [5]. In a 2012 study, all FEWs reported multiple, concurrent partners, including clients and sweethearts [4]. The number of clients for FEWs ranged from four to five a day-particularly for FEWs who worked in massage parlors and brothels-to one a month [4]. A 2010 assessment reported that FEWs were most likely to have recent sex with a client (38%), compared to a sweetheart (31%) or spouse (24%) [8]. On living arrangements, 7% of FEWs lived with a sweetheart [8]. The number of FEWs who have sweethearts is high, and most of them have active sex with their sweethearts. A 2015 study revealed that 60% of FEWs reported having one or more sweethearts in the past year, and about 85% of those with sweethearts reported having sex with them in the past three months [5]. In the 2010 assessment, about 20% of FEWs had sweethearts only in the past three months [8]. While the dramatic reduction of HIV prevalence in the general population was a cause for celebration in Cambodia, a great deal of challenges remain in reducing the prevalence of HIV and sexually transmitted infections (STIs) as well as in addressing other sexual and reproductive health (SRH) issues among FEWs who engage in transactional sex [9]. FEWs are at increased risk of both HIV/STI infections and poor SRH outcomes because of their high likelihood of involvement in direct or indirect transactional sex [1]. A recent study reported that HIV prevalence among this group is alarmingly high at 9.8% [10], compared to 0.6% in the general population [9]. According to the midterm data of the Sustainable Action against HIV and AIDS in Communities (SAHACOM) project, which provides comprehensive HIV and SRH services to FEWs in Cambodia, approximately 40% of FEWs reported having at least one STI symptom in the past three months [11]. Moreover, we recently found that 54% of FEWs reported at least one induced abortion during their lifetime, and 33% while working as a FEW [12]. The rates of consistent condom use with commercial and non-commercial partners in the past three months were 79 and 31%, respectively [13]. Through the five years of the SAHACOM lifespan, the rates of condom use with both commercial and non-commercial partners were not appreciably improved [12]. Inconsistent condom use with non-commercial partners among FEWs signifies a need for refinement in interventions, given their ubiquitous practice of having such partners and their high HIV prevalence. Exploring factors associated with inconsistent condom use with non-commercial partners among FEWs is important for prevention programs to eliminate new HIV infections and improve SRH outcomes such as STIs, unwanted pregnancies and subsequent induced abortions. Several studies have been conducted to explore factors associated with condom use among female sex workers (FSWs) in different settings. Guided by the Health Belief Model, Zhao and colleagues found that in China condom use was associated with self-efficacy and perceived benefits, and lack of use was associated with perceived barriers to condom use [14]. 
Excessive alcohol drinking has been found to be associated with both unprotected sex and a history of STIs among FSWs in China [15] and Uganda [16]. In Bolivia, women who used nonbarrier modern contraception were less likely to consistently use condoms with non-commercial partners than non-users [17]. In terms of sexual behaviors, the risk of inconsistent condom use decreased when the number of sexual partners increased [18]. FEWs in the wake of the brothel ban in Cambodia are a mixed population of women who might be involved in direct or indirect sex work, while also have romantic relationships. The lifestyle of FEWs thus is unique to the Cambodian context, and their sexual activities are more complex than the lifestyles and behaviors of FSWs in other countries. While evidence that reports factors associated with condom use among FSWs is important, there is a need to understand determinants of condom use among FEWs in Cambodia. Most studies of condom use among FSWs examined commercial relationships [14,16,18], while the rates of consistent condom use with non-commercial partners are low and have remained low in the past several years in almost all settings, including Cambodia [12,13,19,20]. This study was therefore conducted to explore factors associated with inconsistent condom use with sweethearts among FEWs in Cambodia. In our study, we defined commercial partners or clients as those who paid (either money or a gift) for sex with FEWs but did not engage in a romantic relationship, while we connoted noncommercial partners or sweethearts of FEWs by following the preceding elaboration. Methods This study was conducted as part of the impact evaluation of the SAHACOM project. Data were derived from the end-line survey conducted in April and May 2014. Details of this survey have been published elsewhere [12,13]. Participants and sampling Face-to-face interviews were conducted with 667 FEWs randomly selected from entertainment venues under the program coverage of the SAHACOM project in Phnom Penh and Siem Reap. The population of FEWs in these two provinces represented approximately 70% of the total population of FEWs in Cambodia. The probability proportional to size sampling method was used to decide the number of FEWs in each province, and venues were then randomly selected. A proportionate number of participants were randomly selected from a name list of FEWs of each selected venue. A FEW would be included in the study if she was: (1) biologically female, (2) at least 18 years of age, (3) able to present herself on the day of the interview and (4) able to provide consent to participate in the study. Data collection training and procedure All interviewers and field supervisors were trained for three days. The training covered a review of the study protocol, informed consent process, interview techniques and confidentiality. The research teams were also equipped with quality control and problem solving skills. Regular review sessions were encouraged to be performed among research team leaders and interviewers to follow up the progress and communicate any issues occurring during the data collection. Questionnaire development A structured questionnaire was initially developed in English, translated into Khmer and back-translated into English. A pilot study was conducted among a random sample of 20 FEWs, and the questionnaire was modified accordingly. 
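Returning briefly to the sampling procedure described above, the proportional (size-based) allocation of interviews across venues can be sketched as follows. This is a hypothetical illustration of the calculation only; the venue names and counts are invented and are not taken from the study.

# Hypothetical illustration of proportional allocation: each selected venue receives a
# share of the interviews proportional to the number of FEWs it employs.
venues = {"venue_A": 120, "venue_B": 45, "venue_C": 35}   # assumed FEW counts per venue
total_sample = 40                                          # assumed interviews to allocate
total_population = sum(venues.values())
allocation = {name: round(total_sample * size / total_population) for name, size in venues.items()}
print(allocation)  # {'venue_A': 24, 'venue_B': 9, 'venue_C': 7}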
The questionnaire was developed using existing tools from previous studies with same population [11], the 2010 Demographic and Health Survey in Cambodia [21], as well as from other relevant studies in Cambodia [22][23][24]. Socio-demographic characteristics included marital status, age, formal education, average monthly income, living arrangements, types of venues at which they were working and duration of time they had worked in the entertainment industry and at their current establishment. Several questions were employed to measure sexual behaviors and condom use in different types of sexual relationships in the past three months. The variables included the number of sex partners (both clients and sweethearts) and condom use with both types of sex partners. Condom use was measured using a scale with six-point response options ranging from (1) "always" to (6) "never." Those respondents who answered "always" to the questions were labeled as consistent condom users. Respondents were also asked if they were able to find condoms when needed in the past three months (0 = no, 1 = yes). Regarding substance use, participants were questioned about the use of alcohol (at least a full glass of beer, wine or liquor) and illicit drugs (including methamphetamine, heroin, ecstasy, inhalants, cocaine or marijuana) in the past three months. Response options were yes or no. Those who reported drinking alcohol in the past three months were also questioned about the average amount they drank per day (number of cans for beer and glasses for wine). We also collected information on the history of contraceptive use, pregnancy, induced abortion and STIs as well as HIV and SRH education they had received in the past six months. To measure mental health, we adapted a short version of the General Health Questionnaire (GHQ-12) [25] with four response options of "0 = less than usual," "1 = no more than usual," "2 = rather more than usual" or "3 = much more than usual." The scoring method '0-0-1-1' was used because it is believed to reduce any biases caused by respondents who tend to choose responses 0 and 3 or 1 and 2 [26]. To measure the level of mental disorder, the mean score for the study population [4.1 (SD = 2.7)] was used as the cut-off; scores above 4 were considered "high", and 4 or below were considered low [27]. The Cronbach's α of GHQ-12 scale among FEWs in this study was 0.70. A 12-item scale was adapted from a previous study to assess HIV knowledge [28]. The response options were '0 = No, ' '1 = yes' or '2 = don't know.' The total score of the scale was the sum of correct responses, with 'don't know' responses scored as incorrect. The Cronbach's α of this scale among FEWs in this study was 0.72. Data analyses EpiData version 3 was used for double data entry (Odense, Denmark). Data analyses were performed taking into account the sampling weight of sampling-size differences of FEWs population although the sampling design was self-weighted within each site [29]. In bivariate analyses, Student's t-test was used for continuous variables, and Chi-square test or Fisher's exact test was used as appropriate for categorical variables to compare socio-demographic characteristics, sexual behaviors, history of SRH, substance use, HIV knowledge and mental health (GHQ12) among respondents who reported consistent condom use and inconsistent condom use with one or more sweethearts in the past three months. A multivariate logistic regression model was developed. 
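Before turning to the regression details below, the "0-0-1-1" GHQ-12 scoring and the cut-off described above can be illustrated with a minimal sketch; the example responses are hypothetical and the code is illustrative rather than the study's actual analysis script.

# "0-0-1-1" scoring: response options 0 ("less than usual") and 1 ("no more than usual")
# score 0, while options 2 ("rather more than usual") and 3 ("much more than usual") score 1.
def ghq12_score(responses):
    """responses: 12 integers in {0, 1, 2, 3}; returns a total between 0 and 12."""
    assert len(responses) == 12
    return sum(1 for r in responses if r >= 2)

def ghq12_level(total, cutoff=4):
    """Scores above the sample-mean cut-off (4 in this study) are classified as 'high'."""
    return "high" if total > cutoff else "low"

example = [0, 1, 2, 3, 1, 2, 0, 0, 3, 2, 1, 0]  # one hypothetical respondent
total = ghq12_score(example)
print(total, ghq12_level(total))  # 5 high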
We simultaneously included all variables associated with condom use in bivariate analyses at a level of p < 0.2 in the model. Adjusted odds ratio (AOR), 95% confidence intervals (CI) and p-values were calculated. Two-sided p-value <0.05 was used to determine statistical significance. SPSS version 22 (IBM Corporation, New York, USA) was used for all analyses. Characteristics of respondents This study included 204 FEWs (30.6% of the total sample) who reported having sexual intercourse with one or more sweethearts in the past three months, with a mean age of 25.7 years (SD = 5.4 years). Only 31.4% reported consistent condom use with their sweethearts in the past three months. As shown in Table 1, the majority (77.9%) of the respondents were recruited from Phnom Penh, and more than two-thirds worked in Karaoke parlors (53.9%) and restaurants (25.5%). The rate of consistent condom use was significantly higher among respondents who reported working in the entertainment industry for less than 28 months (37.1 vs. 21.4%, p = 0.02), and in the current establishment for less than 18 months (36.0 vs. Substance use, sexual behaviors and SRH Comparisons of substance use, sexual behaviors and SRH among respondents who reported consistent and inconsistent condom use with one or more sweethearts in the past three months are shown in Table 2. The average amount of alcohol consumed per day was significantly higher among respondents who reported inconsistent condom use (mean = 10.3 cans/glasses, SD = 12.2) than among those who reported consistent condom use (mean = 7.0 cans/glasses, SD = 7.9) with sweethearts (p = 0.03). Respondents who reported consistent condom use with sweethearts were significantly more likely to report consistent condom use with clients in the past three months (88.5 vs. 64.4%, p = 0.02). Regarding contraceptive use, respondents who reported consistent condom use with sweethearts were significantly more likely to report current use of a modern contraceptive method (71.9 vs. 48.2%, p = 0.002), and using condoms as the main contraceptive method (87.2 vs. 32.8%, p < 0.001). Moreover, respondents who reported consistent condom use with sweethearts were significantly more likely to report having received some form of SRH education in the past six months (70.3% vs. 57.6%, p = 0.04). Table 3 shows factors that remained significantly associated with inconsistent condom use with sweethearts in the past three months after controlling for other covariates in a multivariate logistic regression model. Respondents who reported inconsistent condom use with Discussion In Cambodia, success in increasing condom use with commercial partners in key populations and significant reduction of HIV prevalence in the general population are attributed to the national level programmatic efforts made in the past decades. However, this study found that the rate of consistent condom use with sweethearts remains unacceptably low among FEWs (31.4% of those having sex with sweethearts in the past three months). This finding is in line with findings in several studies in other countries, which found that the rates of consistent condom use with regular, non-commercial partners are consistently lower than the rates in commercial relationships [30,31]. In this situation, partners of FEWs may potentially become a bridging population for HIV/STI transmission [12,13]. 
Inconsistent condom use with sweethearts among FEWs could increase the HIV prevalence among the general population since their sweethearts may also have partners in the general population with whom they may not use condoms. Research shows that some Cambodian FEWs did not use condoms with clients for extra pay or by coercion [4]. Our findings of the factors associated with inconsistent condom use among FEWs fill in the gaps in the literature on sexual behaviors among FEWs in Cambodia, a high-risk population with complex relationships and sexual behaviors. The successful decline in HIV prevalence in the Cambodian general population from 2.0% in 1998 to 0.6% in 2013 is widely attributed to the 100% Condom Use Program (CUP). The 100% CUP targets brothels as primary risk environments for HIV through multi-sector engagement and mobilization of local authorities, health workers, brothel owners, sex workers and community health workers to promote universal condom use and routine HIV and STI testing. The criminalization of sex work and brothels in 2008 has reversed some of the gains of the successful 100% CUP that was scaled up throughout Southeast Asia in the 1990s; new strategies are therefore urgently needed to address a persistent epidemic in sub-populations of women engaging in transactional sex [1]. The illegalization of sex work has made the 100% CUP approach infeasible because it entails explicit recognition of sex work by venues (brothels, massage parlors, karaoke bars, etc.), their managers and workers. As a result, sex work in Cambodia has transitioned to indirect transactional sex relationships known as entertainment work. The finding that FEWs who had not received any form of HIV and SRH education were significantly more likely to report inconsistent condom use may reflect the effectiveness of education campaigns performed by outreach workers in community-based HIV/SRH integration programs in Cambodia such as the SAHACOM project [32]. Several satisfactory changes have been reported in the impact evaluation study of the SAHACOM; however, challenges remained in improving and sustaining the rates of consistent condom use, particularly in regular and non-commercial relationships [32]. Tailored education programs are required to respond to the needs of FEWs and their sweethearts. In the SAHACOM project, where FEWs in this study were recruited, outreach workers led much of the project activities. Project activities included: (1) outreach sessions with FEWs at entertainment venues, which were led by trained peer outreach workers who used behavior change communication techniques to promote healthy sexual behaviors including condom use, HIV/ STI testing, contraceptive use and other health services; (2) outreach workers offering workplace-based counseling and finger-prick HIV and STI testing to FEWs; case management including referrals for treatment at health (3) access to vocational centers for those seeking to pursue new professions. We found that inconsistent condom users were more likely to report having been tested for HIV in the past six months. This finding may be supported by a number of health behavioral theories, including Protection Motivation Theory [33] and Health Belief Model [34], which view risk perception as an important determinant of healthcare seeking behaviors. 
FEWs who are involved in unprotected sex may choose to undergo HIV testing because of their perception of the risk they are involved, and they get HIV testing to confirm or rule out the possibility of the transmission. However, as shown in Table 1, HIV risk perception was not significantly associated with condom use with sweethearts among FEWs in this study, although the proportion of FEWs who perceived that their HIV risk was higher than that of the general population was higher among inconsistent condom users (20.3 vs. 29.3%). Moreover, our separate analysis of the same study sample did not find a significant association between HIV risk perception and HIV testing [35]. A study in China found that FSWs who had used HIV or STI services were more likely to use condoms consistently during commercial sex [18]. In this study, FEWs who used contraceptive methods other than condoms such as pills, injection, intrauterine devices, implant or natural methods were more likely to be involved in inconsistent condom use with sweethearts than those who used condoms as the main contraceptive method. The use of non-barrier contraception was also associated with inconsistent condom use with noncommercial partners among FSWs in Bolivia [17] and Swaziland [36]. Such practices may put FEWs at great risk for HIV and STI acquisition and transmission. This finding also underscores a challenging dilemma in HIV prevention where condom use is the most effective prevention method for HIV and STIs, yet women who opt to use highly effective contraceptive methods are less likely to use condoms. It also highlights the importance of programmatic promotion of dual protection method, using condoms in conjunction with other modern contraceptive methods that may increase protection against both HIV and unwanted pregnancies [36,37]. However, consistent condom use remains the most feasible and effective dual protection strategy [17,36]. We also found that the average amount of alcohol use per day in the past three months was high among FEWs, and the high alcohol consumption was significantly correlated with inconsistent condom use with sweethearts. This finding is in line with a Chinese study that highlighted heavy alcohol drinking among FSWs and its association with inconsistent condom use with both commercial and non-commercial partners [15]. Similarly, a Ugandan study unveiled that high alcohol use among FSWs was linked with unprotected sex with clients [16]. Therefore, alcohol use mitigation should be integrated into HIV prevention programs with FEWs. Further, given that alcohol drinking for many FEWs is job-related, alcohol consumption in entertainment establishments should be reckoned as an "occupational hazard" that warrants regular screening and intervention. This study has some limitations. First, the self-reported measures may limit our findings through inherent biases, including both underreporting and over-reporting. Given the Cambodian cultural norms, it is likely that risky sexual behaviors and substance use among FEWs in this study were underreported [38]. Second, we included only FEWs from two provinces where the SAHACOM, a comprehensive community-based HIV/SRH integrated project, has been implemented for FEWs since 2009. In such condition, the levels of HIV risk and behaviors among FEWs in this study may not represent the situation of the general FEW population in other areas of Cambodia. 
The final limitation concerns the cross-sectional design of the study that did not allow us to draw causal relationships between the variables. Conclusions Our findings highlight the low rate of consistent condom use in romantic relationships among FEWs. This situation puts FEWs at great risk for HIV and STI acquisition and transmission. In Cambodia, extensive progress has been made in the implementation of structural community-based HIV interventions with service packages, specifically designed for key populations, including FEWs. Further efforts are needed in order to increase condom use with sweethearts among these vulnerable women by addressing the key factors, including improving access to HIV and SRH education. The detrimental effects of multiple, concurrent partnerships and inconsistent condom use with sweethearts should be emphasized in education sessions and materials, particularly for FEWs who use non-barrier contraceptive methods.
2018-04-03T02:58:48.763Z
2017-01-05T00:00:00.000
{ "year": 2017, "sha1": "11f3521f0dbeeede5874659e6ca380f3751cc8ca", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12879-016-2101-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "11f3521f0dbeeede5874659e6ca380f3751cc8ca", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233530580
pes2o/s2orc
v3-fos-license
Lest We Forget: Politics of Multiculturalism in Canada Revisited during COVID-19

Since COVID-19, we have witnessed a rise in hate crimes and xenoracism globally. Some commentators on COVID-related racism claim that this hate is apolitical. We question this claim, and in this paper we strive to reveal the underlying politics, especially around the ramifications and impact of this hate on racialized (im)migrants and the multiculturalism ideal. Drawing from Foucault's construct of biopolitics and using Canada as a case study, we ask how Canadian multiculturalism, which is a source of national pride, has been politically constructed to serve white settler hegemony from its inception to the present. We link political debates around the emergence of a multiculturalism policy in 1971 to the recent debates on multiculturalism and immigration during the 2015 and 2019 federal elections, and to the COVID-19-related national border policies of 2020. Our critical analysis illustrates how immigrants and racialized minorities have been systemically positioned in our legislation as a site to demonstrate the politics of governance, often scapegoated for national unrest and questioned on the legitimacy of their belonging and contribution to the nation. Meanwhile, the very ideal of multiculturalism in Canada has been evoked as the centre of biopolitics to govern 'Others' and all.

Introduction
The outbreak of COVID-19 has changed the world: millions of lives and jobs have been lost and unprecedented new norms created, pushing us to re-think national and moral borders and to re-visit diversity, humanity and human rights. There has been an outcry about increased hate crimes, racism, and xenophobia globally (Serhan and McLaughlin, 2020). COVID-19 originated in China and spread to other Asian countries and then to Western countries, which resulted in the construction of a projected image of the virus/'danger' as being located in Chinese/Asian bodies, and a subsequent mislabelling of COVID-19 as the Chinese/Asian virus. The resulting bias, that persons of Chinese/Asian descent carry the virus even if they did not visit the epicentre locations or show pandemic symptoms, has made them targets for violence (Human Rights Watch, 2020). Furthermore, this projection of violence onto Asian bodies has been extended to (im)migrants, especially racialized bodies, mislabelled as the 'foreign virus', even when they had been citizens of non-Asian nations for generations. For example, on 28 February 2020, Italy's La Lega party leader Matteo Salvini conflated Italy's burgeoning outbreak with the recent arrival of a migrant boat from North Africa and demanded an anti-immigration policy, although at that time Italy had 288 reported cases of COVID-19 while the whole of the African continent had just a single case (Davis, 2020). In Nova Scotia, Canada, the Premier Stephen McNeil and Chief Medical Officer Robert Strang named predominantly African Nova Scotian communities as locations of concern for COVID spread, without evidence for this claim (McSheffrey, 2020). The pandemic is a global 'health' crisis (WHO, 2020), yet it is also a global 'human rights' crisis, since it has significant ramifications for racialized people and immigrants during and even after the pandemic.
When faced with COVID-related racism, pervasive discourses in Canada include the idea that, since Canada has multiculturalism as a national policy, we are 'different' from those south of the border (i.e., the Trump administration), thus positioning Canada as a benign nation. Alternatively, some puzzlement is expressed at the eruption of pandemic-related racism as if this violence is exceptional to Canadian multiculturalism. In either discourse, Canadian multiculturalism is at the center of the debate. What does multiculturalism mean in Canada and how has this construction been challenged during the pandemic crisis as we witness a rise in racism and xenophobia? The official document by the Library of Parliament defines Canadian Multiculturalism as follows: The concept of Canada as a 'multicultural society' can be interpreted in different ways: descriptively (as a sociological fact), prescriptively (as ideology) or politically (as policy). As a sociological fact, multiculturalism refers to the presence of people from diverse racial and ethnic backgrounds. Ideologically, multiculturalism consists of a relatively coherent set of ideas and ideals pertaining to the celebration of Canada's cultural diversity. At the policy level, multiculturalism refers to the management of diversity through formal initiatives in the federal, provincial, territorial and municipal domains (Brosseau and Dewing, 2018: 1). According to the 2019 Census data (Statistics Canada, 2019), Canada's population growth relies heavily on immigrants (82.2%): Canada admitted 313,580 immigrants in 2018/2019, one of the highest levels in Canadian history. The 2016 Census data highlighted the increased diversity reporting more than 250 different ethnic origins and more than one out of five Canadians are born in foreign countries. Also, Canada is the first country to pass a national multiculturalism law in 1988 and has upheld it since as the national policy. Thus, sociologically and ideologically, Canada is a multicultural society. However, we argue that, at the policy level, managing diversity has been more tenuous, fluctuating between promoting and dismissing multiculturalism, even at times reflecting anti-multiculturalism. During the pandemic, we have witnessed an important juncture in Canadian multiculturalism around multiple levels of racism and systemic inequity. Is this pandemic-related racism and violence against (im)migrants and racialized people a new scenario that threatens Canadian multiculturalism or an ongoing never-ending story lest we forget? More fundamentally how can we understand Canadian multiculturalism -is it born from liberal, humanitarian and democratic good will? Or capitalism in a global neoliberal era melded with a post-colonial agenda? To explore these questions, we use Foucault's construct of biopolitics as a theoretical framework, conduct a critical review of Canadian multiculturalism from its emergence to the present and illustrate how multiculturalism has always been tenuous and is subject to re-shaping during the current pandemic crisis. Biopolitics: The Art of Governing during Unrest Foucault was concerned about how power operated over people and society. In ' Discipline and Punishment' (1977), Foucault explains how traditional sovereign power over actual life and death has been replaced with disciplinary power. It is not a weapon that threatens lives, but rather a way of governing and managing (often detrimental to) people that set out ways of existence while marking certain dominance as the norms. 
Through technologies of power over self (e.g. moralization) and over others (e.g. punishment), individuals internalize this disciplinary power and learn to behave in orderly ways that leads to docile subjects and a disciplined society. In his later works, 'The History of Sexuality (I) ' (1978) and 'The History of Sexuality (II)' (1984) Foucault further theorized his understanding of power relations with the concept of biopower, which articulates a shift in the use of power from the maintenance of 'authority' and control towards the maintenance and control of the 'population'. Instead of devising social control, biopower focused on the administration of the population's life, economic activity, productivity, health and mobility. In this section we briefly unpack his theorization of power and its potential in understanding the dynamics of multiculturalism under COVID. Analyzing the government's long history of emergency responses to outbreaks and epidemics of illness in the eighteenth century, Foucault (1977) noticed that, instead of 'managing infectious individuals' during epidemics, the 'management of plague' placed not only the infected individuals but also the general population in its totality as the centre of a disciplinary mechanism. To Foucault (1977: 198), 'the plague is met by order': The management of plague requires multiple divisions of the population and of space, a process Foucault called 'partitioning' which increased the 'organization in depth of surveillance and control' of these divisions, and while not sustaining punishment it entailed disciplinary power as 'an intensification and a ramification of power'. According to him, this disciplinary power is exercised in two ways. One is through invoking 'normalizing judgement' (177). Its purpose is the creation of a docile body/obedient subject specific to the condition of the pandemic through both the fear of punishment (e.g. fine, imprisonment) -that is, power exercised by outside -and the moralization of conduct -that is, power exercised by inside. Thus, the disobedient body is labelled as irresponsible, immoral and even criminal in contrast to the obedient body of the responsible, moral and 'livable' people. Thus, Foucault argues that the goal of the management of a plague looks like it's containment but in reality, the goal is for a 'disciplined society ' (199) in that the epidemic of plague becomes envisaged as a laboratory to test an ideal opportunity to exercise disciplinary power over a population. In The History of Sexuality II (1984: 138), Foucault further theorizes that biopower 'foster[s] life or disallow [s] it to the point of death', while managing ways of life and marking certain ways as norms of being. He defines biopower as the 'explosion of numerous and diverse techniques for achieving the subjugation of bodies and the control of populations' (140, italics added) while highlighting 'bio' in his theorizing again in two basic forms: One is the 'anatomo-politics of the human body' centring on the discipline of bodies aimed at producing docile bodies; and the other is the 'biopolitics of the population' centring around regulatory controls aimed not at individual bodies but at the management of population. Why did the body become significant in the exercise of power over populations? 
According to Foucault, since the shift from sovereign power to disciplinary power over people emerged with the birth of capitalism in the eighteenth century, the health of the population was considered 'the foundation for protecting and augmenting the productive economic forces of the state' (Horton, 2020: 1389). Thus, he wrote that the 'imperative for health' was 'at once the duty of each and the objective of all' and that 'the body is a biopolitical reality' (1389). Biopower thus works precisely through 'the administration of bodies and the calculated management of life' over populations (Foucault, 1984: 140). We later illustrate how the politics of the body has been deployed to exercise governing power before and during the global health crisis of COVID around Canadian multiculturalism.

Interrogating this administration of bodies and its process of 'governing and being governed' (an art of governing) has attracted critical scholars. Recently, several scholars have offered illuminating perspectives on the conception of the ongoing management of the COVID-19 pandemic (Horton, 2020; Kakoliris, 2020; Larsen, 2020; Lorenzini, 2020; Presiado, 2020; Van den Berge, 2020). During the pandemic, the administration of bodies, especially foreign and/or racialized bodies, became symbolized as administering a threat/danger to the legitimate community of 'right' citizens/bodies and was contested within and across national borders, constructing another global pandemic: racism. Witnessing racism spreading across the globe with COVID, critical scholars voiced concerns about the spreading crisis of humanity, equity and multiculturalism (Devaskumar et al., 2020a, 2020b; Lee and Johnstone, in press). We echo this call of crisis and the urgency to mend and restore justice, humanity and diversity. At the same time, we cannot fail to notice a familiar pattern of governing: whenever a crisis, inconvenience and/or discomfort occurs, racialized others become a target through a governing tactic of seemingly benign and paternalistic intent that is deeply rooted in white supremacy and a capitalistic nationalist agenda. This serves to govern not only 'Otherized' people but also the whole population, since it looks as though governors are doing something (i.e. blaming Others), thus distracting from systemic inequities and shortcomings while legitimising the current inequitable governance. Canadian multiculturalism is such an example, which we use as a case study to illustrate the art of governing during social unrest. Furthermore, we argue that the very notion of Canadian multiculturalism has served to silence people (i.e. to create docile subjects) who highlight the systemic inequity and flaws in multiculturalism from its emergence to the present, and this weak foundation has meant that the very rationale for its existence has been continuously contested since its inception and questioned during COVID.

We selected three political junctures closely relevant to Canadian multiculturalism: (1) the emergence of multiculturalism in 1971; (2) the two recent federal elections, with the so-called feel-good policy in the 2015 election that positioned Canada as one of the forerunners in receiving refugees amidst the global refugee crisis, and the 2019 election, which highlighted a division around addressing promises of admitting refugees and immigrants; and (3) the current pandemic era of 2020 and onward. Each juncture bears multi-scalar complexities of polities and contexts.
By no means, are we claiming a comprehensive review of each of these complicated politics, rather, we seek to illustrate technologies/mechanisms of governing people -not only racialized others but all populations by deploying multiculturalism in Canada. A Birth of Multiculturalism as a Site of the Legitimacy In 1971, Prime Minister Pierre Trudeau announced Canadian multiculturalism to the House. In the 1960s, there was an explosion of activism on social and cultural rights. For example, the Canadian Bill of Rights was introduced in 1960 and prohibited discrimination for reasons of race, national origin, colour, religion or sex. In 1970, the Canadian government ratified the International Convention on the Elimination of all Forms of Racial Discrimination (Dewing, 2013). This zeitgeist of recognising human rights and ending discrimination would seem to pave the way towards the announcement of a Canadian multiculturalism policy in 1971. However, there were other political currents that were equally, if not more, influential that are often overlooked -(1) the new wave of non-White immigrants which constructed a split between earlier and recent immigrants/citizens; (2) the heightened conflicts between the two founding settler nations, the British and French; and (3) the Indigenous autonomy movementall of which precipitated a federal government push for containment. Since its emergence racism and multiculturalism have been set in motion side-by-side, yet it is not surprising given its little told story as below. In 1962, the White-only racial discrimination clause in Canadian immigration law was removed as decreased immigration from Britain and Europe resulted in a need for an increased labor pool. Immigration was opened to Africa, Asia, the West Indies, the Middle East and South America and the demographic composition of urban Canada changed (Knowles, 2015). These changes opened a debate between earlier settlers (mostly White) and recent immigrants (mostly non-White) around belonging and citizenship. Although the 'merit-based points system' to approve immigrant applications was introduced in 1967, upward mobility opportunities for racialized newcomers were limited to non-existent (Portes, 1977). The new immigrant labor contributed to an oversupply of unskilled labor as highly skilled immigrants (with high merit points) could not access employment reflecting their credentials, and this weakened the bargaining position of the domestic workforce and curtailed unionism. Quebec Nationalists watched the crumbling European Empires and the newly emancipated states of Africa and Asia and recognized that their province was very similar to that of a colonized state and in 1960 began calling for Quebec independence (Jones, 1998). A climate of fear was created by the formation of the Front de Libération du Quebec (FLQ), a militant separatist group whose violent tactics included blowing up railway tracks and delivering letterbox bombs. In 1968 the sovereigntist Parti Quebecois was established by Rene Levesque. This so-called quiet revolution in Quebec prompted the Royal Commission on Bilingualism and Biculturalism, which resulted in Bill C-120 (Official Languages Act) making both English and French official languages in Canada (McRoberts, 1998). Another domestic political challenge was the struggle for Indigenous autonomy. 
Having fought for Canada in both World Wars, First Nations peoples began to voice their dissatisfaction, with demands for self-government and government recognition of treaty land claims (Turner, 2013). In 1969, Trudeau's Government introduced the White Paper, which proposed that the Indian Act be abolished, purportedly to give First Nations peoples the same rights as other Canadian citizens. This was unequivocally rejected by First Nations Chiefs across Canada as it would destroy their First Nations status, cancel their treaty agreements and undermine their quest for greater autonomy. These losses would far outweigh any gains that citizenship might give them (York, 1989). In the face of these multiple threats to Canadian unity from First Nations peoples and from French Canadians, as well as the dissatisfaction amongst new non-White immigrant groups, the announcement of a multicultural policy expediently created a new dominant rhetoric to override the differences, thus creating an ethos of justice and equity for all Canadians (Dewing, 2013; Lentin and Titley, 2011; Pillay, 2015). As Foucault (1977: 199) argues, the apparent goal of the management of conflicts is containment, but in reality the goal is a 'disciplined society', and those tumultuous periods are used as a laboratory to test the exercise of disciplinary power over racialized others and all Canadians. Noteworthy are the length of time it took to announce multiculturalism as national policy and the immediate reaction against this policy. The introduction of multiculturalism in 1971 was followed by a changing population demographic and an acceleration of visible racism. This resulted in a reassessment of the backbone of multiculturalism from 'tolerance of others' to 'freedom from discrimination', which was then conceptualised as a 'right' and entrenched in the Charter in 1982. Seventeen years later, in 1988, this was embedded in the Multiculturalism Act. However, ongoing racism has persisted. For example, at the provincial level, the Ontario New Democratic Party (1990-1995) was committed to implementing the Employment Equity Act (1986) and set up provisions for Indigenous people, people with disabilities, members of racial minorities, and women. The immediate backlash is reflected in the editorial in the Toronto Star of 11 November 1993, which was headlined 'White men need not apply'; its accusations of 'reverse racism' concluded with accounts of White men who had put their lives on the line for Canada during the war and were now excluded from the job market. This backlash resulted in the election of the Ontario Progressive Conservative Party in 1995, and the new government launched an ideological assault against the Employment Equity Act, resulting in its repeal. Multiculturalism was thus deployed to exercise power over not only otherized groups but also dominant groups (e.g. white men), with a zero-sum multiculturalism-based rhetoric that 'allowing immigrants to access services means losing the power we/dominant groups used to have', as if the history of power originating mostly from the white-settler patriarchal colonial heteronormative state were non-existent. Therefore, from its emergence, the biopolitics of multiculturalism has governed not only 'Others' but also all populations in Canada.
What we describe next is how all three spheres of underlying conflicts with communities of racialized immigrants, francophone in Quebec as one of the two founding groups of a white settler nation, and Indigenous people which were the initial impetus to the emergence of multiculturalism policy continue to be locations of conflicts and tensions questioning the very notion of multiculturalism in the current political juncture in Canada. Multiculturalism Contested during the Immigration and Refugee Crises The two faces of multiculturalism become even more apparent in recent immigration policies: immigrants are portrayed as contributors to nation building boosting the economy and at the same time as public threats at times even 'criminals.' For example, the Minister of Citizenship and Immigration of Canada (CIC, 2014) said in the 2014-2015 Report on Plans and Priorities for Citizenship and Immigration and Multiculturalism that: The Government of Canada is focused on creating jobs and opportunities by protecting the economy, keeping taxes low, and ensuring the health, safety and security of Canadians. Immigration remains central to that focus. The plans outlined in this report will ensure the immigration system fuels Canada's future prosperity, as we also maintain our generous family reunification and humanitarian record. In this speech, the focus is exclusively economic (e.g., 'economy', 'taxes'), and the image of Canada as a strong multicultural society -here highlighted with 'our generous' immigration system -is then used as a neoliberal marketing tool for generating revenue for nation building ('future prosperity'). Much of the legislation at the same time, however, reveals the opposite to 'generous family reunification' and a 'humanitarian record'. In the Strengthening Canadian Citizenship Act (2014), the reforms are actually focused on preventing what they call 'fraud', which is illegal entry or illegal residency in Canada. CIC has previously implemented two acts to protect program integrity: (1) the Protecting Canada's Immigration System Act (2012), which facilitates the prosecution of human smugglers and (2) the Faster Removal of Foreign Criminals Act (2013), which limits review mechanisms for foreign nationals and permanent residents who are inadmissible on such grounds as serious criminality and denies temporary resident status to foreign nationals. While the rhetoric claims that this ensures 'the safety and security of Canadians,' supports integration and ensures better preparation to participate in Canada, critical scholars suggest that the acts prevent people who are not seen as 'ideal' Canadian citizens from entering Canada. For example, regarding Bill-S7: Zero Tolerance for Barbaric Cultural Practices Act, a critical scholar Bhuyan (2015) elegantly argued at the Citizenship and Immigrant Committee that violence against women and children occurs in all cultures, groups, and societies, and in most cases cultural values are used to justify and carry out the abuse. I wish we could say with confidence that violence against women was un-Canadian, but if you look at the rates of rape, sexual assault, harassment, violent spousal assault, and homicide -specifically by male spouses, or former partners, against their female spouses -this is a Canadian problem. She strongly recommended that the committee remove the phrase 'barbaric cultural practices'. Despite strong activism from many sectors and individuals supporting immigrants, Bill S-7 has been a law in Canada since 2015. 
This kind of gate-keeping legislation which facilitates deportation and breaking families echoes the inhumane treatment of immigrants and racialized others in Canadian history such as Chinese Head Tax imposed on families of Chinese immigrants while they were building the trans-Canada railways between 1885 and 1923 (Li, 2008), the internment of Japanese Canadians during World War II (Adachi, 1991), and the genocide of Indigenous People through residential schools for seven generations and the 1960s scoop that places their children with non-Indigenous families through child protection services (FRTRCC, 2015). Despite public apologies to these wrong-doings and verbal commitments of 'never again' by the federal government (FRTRCC, 2015), exclusionary policies are being put in place again, controlling ways of being and the place of their bodies. Thus, precisely through 'the administration of bodies and the calculated management of life' over racialized others, the biopower works (Foucault, 1984: 140). At the same time, this law projects the image of racialized others as 'being barbaric' in the mind of people, thus governing all populations. The 2014-2015 CIC report above continues that 'Citizenship is devalued by those who do not intend to establish in Canada, including citizens of convenience' (a18). It is quite unclear what 'citizens of convenience' means and how 'do not intend to establish in Canada' could be determined. In this rhetoric, as a technology of control, the government claims a 'generous' immigration system: which legitimizes its disciplinary power to decide and gate-keep those 'who do not intend to establish in Canada'; criminalizes immigrants under these legislations if they are viewed as 'devaluing' the system by the dominant gaze; and the responsibility for settlement (i.e., 'establishing' after immigration) and its difficulties are placed on individual newcomers. The exclusion embedded in our legislation is projected onto the ones excluded who are even blamed for being excluded (Foucault, 1977). The federal election is often a site for heated conversation on immigration and diversity where multiculturalism is contested and re-defined. Prime Minister (PM) Justin Trudeau was elected in 2015 and re-elected in 2019. His Liberal party has been featured as a progressive diversity cabinet and has promoted multicultural Canada (Jeyapal, 2018). Amidst the global refugee crisis, Trudeau announced accepting 25,000 Syrian refugees as his 2015 election platform, which gained national and global approval and was featured as the evidence of Canada as a welcoming multicultural country. Janet Dench, Executive Director of the Canadian Council for Refugees noted although this action was 'uncommon' garnered Canada a reputation as a world leader on refugee resettlement, this stance is more to do with the fact that the United States has fallen behind around refugee policies in recent years. In fact, once the Trudeau administration reached 25,000, federal funding sponsorship stopped, and more private sponsorship was encouraged (Browne, 2020). Gunter (2015) critiqued it as the 'PM's feel good policy' since if the same money was sent as aid to countries bordering Syria, we could help 300,000 refugees in camps there, rather than just 25,000. Nevertheless, this policy worked as a governing tactic to all populations holding the view of Canada as a generous multicultural society. 
What we also observed in the recent 2019 election is questioning Canadian multiculturalism not only as the policy managing diversity but also as representative Canadian ideology (Brosseau and Dewing, 2018). While Trudeau's Liberal party mobilized a 'diversity is our strength' mantra promising an annual immigrant target of 350,000 to meet labor market demands, People's Party of Canada's election platform was calling for 'the end of the Multiculturalism Act' and its leader Maxime Bernier announced that he would fence off the areas along the border connected to New York State and Quebec used by illegal migrants, stating 'It's not a wall. It's a fence' (Campbell, 2019). He promised to reduce the total intake of immigrants and refugees to between 100,000 and 150,000 annually, around one third of the federal Liberal target and, if elected, to eliminate all funding that promotes multicultural Canada for the preservation of Canadian values and culture (Breen, 2019), as if being multicultural is un-Canadian. Accompanied by his image, billboards proclaiming 'Says NO to mass immigration' have popped up in major cities like Vancouver, Halifax, Toronto and Regina (Hudes, 2019). Saima Jamal, a Calgary-based social activist who advocates for refugees and immigrants, called it a 'slap to every immigrant' and an 'insult' to all Canadians. She said the billboard only serves to 'alienate' newcomers to Canada. Avnish Nanda of 'Everyone's Canada', a recently founded Alberta non-profit that supports multiculturalism and immigration to Canada, also commented on the billboards threatening 'foundational Canadian values such as multiculturalism, pluralism and welcoming newcomers' (Hudes, 2019). Again, multiculturalism is contested as both Canadian and un-Canadian. What is disturbing is not so much that one conservative party and their supporters are challenging the very value of multiculturalism and immigration but how this increasing anti-multiculturalism and anti-immigrant rhetoric is becoming pervasive across Canada as shown in the findings of several recent polls conducted in 2018-2019. For example, the Angus Reid Institute poll in August 2018 found that half of Canadians wanted to see the government's immigration targets reduced. The Angus Reid analysis noted the number of Canadians who have opposed and supported immigration as fairly steady over 40 years. Interesting is the difference in public opinion in that in 2014, 36% respondents said there should be fewer immigrants admitted, but in 2018, 49% held this belief (Glavin, 2018). In February 2019, a Leger poll reported almost half saying that Canada welcomed too many immigrants and refugees. Another in June reported 63% saying the government should limit immigration levels because the country was reaching its limit to integrate them. 37% of respondents in Ipsos poll conducted in May reported that immigration was a 'threat' to white Canadians (Abedi, 2019;Wright, 2019). Contrary to this sentiment and pervasive discourse, Professor Usha George, an immigration expert and director of the Centre for Immigration and Settlement at Ryerson University pointed out that there is 'no mass immigration' to Canada. Many Canadians thought more refugees were admitted than actually were (Moran, 2019). George said that sentiment came from a lack of understanding about how immigrants contributed to the system and from 'negative propaganda about immigrants'. 
Since its inception, multiculturalism policy has been perceived by many Quebecois as another intrusion by federal authorities into their province's internal affairs, downgrading French cofounder/settler status to the level of other ethnic minorities (Brosseau and Dewing, 2018). A perceived fear of losing the francophone language and identity is described in Jacques Houle's book, Disparaître? (To Disappear) and has captured the attention of many audiences in Quebec. Houle warns people in Quebec that current immigration levels must be slashed significantly, or Quebec's French-speaking majority will be in the minority, committing what he calls a 'demo-linguistic suicide' (Nakonechney, 2019). The Institut de recherche sur le Québec, a 2002 think tank that studied 'the Quebec national question' and its head of research, right-wing pundit Mathieu Bock Côté, organized a conference in November 2019. This conference showcased French nationalists as slowly reconquering Quebec's political space. The attack on Canadian multiculturalism has compounded with Quebec's nationalist movement. Recently Quebec banned visible religious symbols worn by public servants such as teachers and government officials with Bill 21 (June 16, 2019). Étienne-Alexis Boucher, a former Parti Québécois MNA and president of the Mouvement national des Québécoises et Québécois noted that Bill 21 was 'a pedestal on which we must build'. Given the victory of passing Bill 21, Quebec's nationalist movement strategized on how to use it to launch a multi-pronged attack on Canadian multiculturalism. Having a premier who isn't ashamed to call himself a nationalist is more than just a way to pass legislation, Nakonechney (2019) notes it is a sign that Quebec is pushing further back against the 'federal regime' and its multicultural tenets. Indigenous scholar Pamela Palmater (2019) also noted that the 2015 election campaign promised to make Indigenous issues a political priority and yet in the 2019 political platform Indigenous issues were ignored. Prime Minister Trudeau failed to appear for the first leader's debate and conservative leader Andrew Sheer characterised Indigenous issues as controversial natural resource projects declaring that Indigenous groups were 'holding hostage' resource developers, thus perpetrating an aggressive stereotype of dangerousness. The absence of potable drinking water on reserves, the national inquiry into murdered and missing Indigenous women and girls, which found Canada guilty of historic and ongoing genocide, the crisis of an overrepresentation of Indigenous children in foster care, the overrepresentation of Indigenous peoples in prisons and the Human Rights Tribunal finding of wilful discrimination against Indigenous children were all issues of prime importance to Indigenous people, which did not appear on the Trudeau platform (Palmater, 2019;Tower, 2019). What is so clear here is that all three spheres of underlying conflicts -injustice against communities of racialised immigrants, francophone in Quebec and conflicts between the two founding white settler states, and Indigenous people which were the initial impetus to the emergence of multiculturalism policy continue to be current areas of conflict and tension at the turn of 2020. 
Multiculturalism has worked as a site for exercising the biopolitics of governing the whole population - for debating what kind of ideal Canadian population to pursue - in other words, who is to be favoured to 'make live' and who is to be denied access to services within the state boundary, excluded (e.g., through deportation) or even 'let die' (Foucault, 1984).
At the Crossroads: Multiculturalism Questioned during COVID-19
At the beginning of 2020, even before the WHO announced COVID as a pandemic, more than 9,000 parents signed a petition calling on the Ontario school board to keep children whose family members had recently travelled to China home from school for 17 days (Jaynes, 2020). Kerry Bowman, a professor of bioethics and global health at the University of Toronto, called this racism, since people should only be isolated if they showed symptoms of illness, not simply because of where their relatives had travelled (Jaynes, 2020). Carol Liao (2020), a law professor at the University of British Columbia, raised concern about the significant rise in hate crimes in Vancouver. Although anti-Asian sentiment is not new, and many people in Canada have faced this corrosive exclusion, Liao notes that it has now escalated again during COVID-19. She painfully notes that some people are 'treating COVID-19 as a licence to exhibit their hate, only emphasizing the long history of racism in this city'. In Vancouver, Canada, on 13 March 2020, a 92-year-old Asian man with dementia was yelled at with comments about COVID-19 and then shoved, resulting in a fall during which he hit his head. Vancouver police reported that hate crimes against people of East Asian descent in Vancouver doubled in April 2020 (Hager, 2020). On 14 March, an extreme case was reported in Midland, Texas, where three Asian American family members, including a two-year-old and a six-year-old, were stabbed because the suspect thought the family was Chinese and infecting people with the coronavirus. Globally, overwhelming numbers of racialised people have been 'let die' as a result of insufficient protection during the COVID pandemic (Krieger, 2020). However, this is nothing new in a history where infectious diseases (e.g. yellow fever) have been linked with othering and xenophobia (Chotnier, 2020; White, 2020). Quoting Derrida and Foucault, Presiado (2020) makes a convincing argument that '[T]he virus, neither living nor death, neither organism nor machine is always the foreigner, the other, the one from elsewhere'. Preciado takes the example of the syphilis epidemic in the fifteenth century, which coincided with the European colonial enterprise and launched its destructive, xenoracial, male-dominant and heteronormative politics to come: 'The English called it the "French disease," the French said it was the "Neapolitan disease," and the Neapolitans said it came from America; it was thought to have been brought by the colonizers who had been infected by the "Indians." It was rather the opposite.' During the COVID-19 pandemic, a similar marriage of virus and racism has occurred in Canada and globally. Human Rights Watch (2020) reported numerous incidents of violence, harassment and xenophobic attacks on people of Asian descent across the globe (see also Lee and Johnstone, in press, for detailed examples). However, what we should keep in mind is that the management of syphilis was not achieved by criminalizing prostitution or by the confinement of sex workers to national brothels, which in fact made them more vulnerable to the disease.
Rather, its eradication came with the discovery of penicillin in 1928. Similarly, what transformed AIDS from a pandemic into a chronic disease was the de-pathologisation of homosexuality and women's right to sexual emancipation (Presiado, 2020). Certainly, the end of COVID will not arrive from this pervasive racist rhetoric. Nevertheless, the pandemic has awakened deeply embedded existing systemic racism in Canada, disputing who is the subject/owner of the nation (thus 'make live') and blaming people who are otherized as a threat (thus 'let die'). During the pandemic, migrants are, in general, more vulnerable to loss of employment, often have restricted access to health services, have precarious access to housing and less financial capacity to manage. The rise of racism adds further vulnerability to migrants and also becomes a site to debate who is in, who is out, and who is in-between, sharply affecting borders and immigration and the settlement system within the national border, as Foucault (1977) notes the tactic of 'partitioning' to create the conditions for implementing biopower. For example, since 21 March 2020, Canadian borders have been closed except for essential travel. Although it has been gradually loosened, as of 14 August, Public Safety Minister Bill Blair announced the reciprocal restrictions at the Canada-US border until 21 September 2020. Not surprisingly, closing the borders has disproportionately challenged marginalised people like asylum seekers, non-status migrants and temporary foreign workers as well as the most vulnerable populations fleeing danger zones or entrapped in refugee camps facing grave health and safety dangers. Asylum seekers crossing from the United States are being returned to US authorities where they face potential deportation to their countries of origin (Harris, 2020). This scaling border practice has reified the systemic inequity and discrimination for immigrants and has constructed two types of immigrants: First, people who enter illegally (because the border is legally closed) are deemed criminals and the other is people who are granted exception to the border closure (e.g. essential workers in the farms). To maintain the nation's food security, the border closure and travel restrictions were exempted for temporary farm and fishery workers acknowledging their essential services to Canadians. In May 2020, over 50 workers tested positive at one produce farm in Kent Bridge, Ontario, and other farms in Ontario have seen peaks in COVID positive cases, and in Southwestern Ontario several farm workers died. The Mexican government paused the migrant worker program on 15 June, which delayed around 5,000 Mexican workers coming to Canada. Only then did the Canadian government establish safety provisions and medical services so that the flow resumed by 21 June. Noteworthy is that the vulnerability of migrant workers has continued, and several lives were tragically lost even after Prime Minister Justin Trudeau announced support for farmers and food industries on 5 May with a federal government investment of $252 million. However, their ongoing poor working and living conditions (e.g. overcrowded and shared living spaces) were not changed and continued to expose them to high risks during the pandemic. 
Also, on 22 May, the federal government announced the Agri-Food pilot program, which allows non-seasonal farm workers to apply for permanent residency; yet due to its limited scope, this is also criticized as more of a 'symbolic' gesture, continuing the longstanding pattern of the Canadian immigration system undervaluing essential economic contributions of 'low-skill' agricultural workers (Shields and Alrob, 2020). As noted by Foucault (1977), to govern is to establish a certain boundary/exclusion that 'make live and let die' by 'partitioning' to create legitimacy and power over bodies and population. Governing thus simultaneously constructs 'outside Others,' who are "barred from the life of the legitimate community" where the recognition of its membership allows one "access to the category of 'the human'" (Zylinska, 2004: 526). A presence of seemingly inclusive policies (e.g. the Agri-Food pilot program) that are open to racialised others under the name multiculturalism does not mean an absence of partitioning and boundary making. Scaling inclusion policies or making new announcements under the multicultural rhetoric is a continuum of biopolitics in managing others and the population. Galabuzi (2008) notes that inclusive policies do not address exclusion. A presence of inclusive policies is often performing the management of diversity, which thus explains the possible co-habitancy of both inclusive (e.g. multiculturalism) and exclusive (e.g. anti-multiculturalism) policies. Therefore, COVID and its related border issues reveals how bodies of others are subject to politics of governing that decide their place and conditions of living and not-living; how vulnerable these migrant workers have been while performing essential work pre-and during the pandemic; and this systemic inequity is embedded in our legislation and yet also claimed/portrayed as Canada's 'generosity'. Critical scholars argue that 'the work performed by such temporary workers is deemed essential but the workers themselves are not' (Shields and Alrob, 2020: 16). Vaughan-Williams (2008) thus rightfully called this border politics 'the generalised bio-political border'. Supporting the economy and maintenance of the nation is once again picked up by immigrants and racial minorities -replicating the historic pattern of recruiting poorly paid Chinese labor migrants to build the Trans Canada Railroad and splitting families with the Head Tax legislations. Immigrants and racialised others who have often worked in 3-D labour (i.e., Dirty, Dangerous and Demeaning) are positioned to pick up 'essential' work during the pandemic, resulting in their high density in 3D+E industries exposing them to much higher risk of COVID infection than workers in other non-essential industries (Shields and Alrob, 2020). Despite this contribution, the discourse around linking immigration to economic crisis is repeated. For example, there has been media coverage that questions if the Trudeau administration's signature pledge to take in 350,000 immigrants a year by 2021 can still happen after the pandemic since 'the public is faced with serious personal economic distress caused by unemployment and savings that have been wiped out' during the pandemic (Fisher, 2020). The implication underlying these politics is that somehow receiving immigrants is an act of benevolence and generosity that Canada can ill afford during the pandemic crisis as if sustaining the Canadian economy with food supplies harvested by temporary migrant workers has never existed. 
In his lecture series 'Society Must Be Defended', Foucault (2003) argues that racism is 'a way of introducing a break into the domain of life taken over by power: the break between what must live and what must die' (p. 254). Lorenzini (2020) notes that 'the differential exposure of human beings to health and social risks is, according to Foucault, a salient feature of biological governmentality. Racism, in all of its forms, is the "condition of acceptability" of such a differential exposure of lives in a society in which power is mainly exercised', thus accurately asserting that 'biopolitics is always a politics of differential vulnerability' (italics in original). Despite the claim that COVID is apolitical and an equaliser affecting all, politics around COVID repeatedly demonstrates that COVID is 'the great unequalizer' (Devaskumar et al., 2020a, 2020b) and disproportionately impacts racialised others in general, and those in essential services in particular, with higher infection rates and death tolls (Krieger, 2020). Although the effectiveness of border control as a means to contain COVID was not confirmed, and the World Health Organization (WHO, 2020) explicitly advised that 'Travel bans to affected areas or denial of entry to passengers coming from affected areas are usually not effective in preventing the importation of cases but may have a significant economic and social impact', it became an unchallenged norm across many countries, even being highly promoted by far-right nationalism. For example, on 10 March, President Trump tweeted that 'this is why I told you we needed to build walls!' highlighting his signature policy platform 'we need the Wall more than ever' (Singh, 2020). It was used to manage and govern displaced refugees and legitimise political decisions to make refugees subject to even worse health crises in quarantine in refugee camps (Makszimov, 2020). The pervasive rhetoric against others and the growing racism, xenophobia and far-right nationalist polities not only manage national borders and state policies but also govern the whole population, influencing perceptions, behaviours and the outlook of the Canada-to-come - precisely what Foucault describes as biopolitics.
Conclusion
While multicultural rhetoric honours diversity and difference and supports toleration and the embrace of multiple traditions and customs, at the same time there is a politicised 'Canadian ideal', which dictates what is appropriate, desirable and worthy of a Canadian citizen, distinguishing 'strangers' who can be accepted from 'stranger strangers' who cannot be granted proximity to the nation (Ahmed, 2012), thus 'partitioning' the population into who we 'make live or let die' (Foucault, 1984). The rhetoric of 'immigrants are good for Canada' has in fact addressed depopulation due to ageing and a low birth rate, as well as fiscal security, by recruiting hard-working people who take over the jobs that Canadians avoid; the points system brings highly qualified, educated and motivated immigrants to boost the national economy; and 'feel-good' national pride is generated by the recent welcoming policy for refugees. However, as illustrated, this rhetoric has always been tenuous and contested, particularly during social unrest, and it is now challenged again with the outbreak of COVID. A review of Foucault's theorising on biopower was especially relevant to understanding these politics around multiculturalism in Canada as described.
During COVID, the administration of racialised bodies at geopolitical borders, and of the whole population around living, working and access to health services, has further legitimized the existing dominance in Canada. Rather than taking multicultural Canada for granted, our analysis highlights the ongoing need to critically reflect on the so-called noble ideals of multiculturalism and the underlying biopolitics, in order to maintain resistance and fulfil the quest for human dignity and inclusivity, which is at a constant crossroads.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Reducing excessive GABAergic tonic inhibition promotes post-stroke functional recovery Stroke is a leading cause of disability; but no pharmacological therapy is currently available for promoting recovery. The brain region adjacent to stroke damage, the peri-infarct zone, is critical for rehabilitation, as it exhibits heightened neuroplasticity, allowing sensorimotor functions to re-map from damaged areas1–3. Thus, understanding the neuronal properties constraining this plasticity is important to developing new treatments. Here we show that after a stroke in mice, tonic neuronal inhibition is increased in the peri-infarct zone. This increased tonic inhibition is mediated by extrasynaptic GABAA receptors (GABAARs) and is caused by an impairment in GABA transporter (GAT-3/4) function. To counteract the heightened inhibition, we administered in vivo a benzodiazepine inverse agonist specific for the α5-subunit-containing extrasynaptic GABAARs at a delay after stroke. This treatment produced an early and sustained recovery of motor function. Genetically lowering the number of α5 or δ-subunit-containing GABAARs responsible for tonic inhibition also proved beneficial for post-stroke recovery, consistent with the therapeutic potential of diminishing extrasynaptic GABAAR function. Together, our results identify new pharmacological targets and provide the rationale for a novel strategy to promote recovery after stroke and possibly other brain injuries. Stroke is a major source of disability, confining one-third of stroke survivors to nursing homes or institutional settings4. Recent studies have shown that the brain has a limited capacity for repair after stroke. Neural repair after stroke involves re-mapping of cognitive functions in tissue adjacent to or connected with the stroke5,6. Functional recovery in this peri-infarct tissue involves changes in neuronal excitability that alter the brain's representation of motor and sensory functions. Stimulation of peri-infarct cortex enhances local neuronal excitability through a process that involves long-term potentiation (LTP), alters sensorimotor maps, and improves use of affected limbs5-8. The inhibitory neurotransmitter GABA is critical for cortical plasticity and sensory mapping. Altering GABAergic transmission changes sensory maps during the critical period of cortical development9 and produces rapid alterations in adult cortical maps that resemble changes occurring after stroke10, 11. Alterations in cortical maps through blockade of GABAergic signaling are associated with fundamental changes in cellular excitability including LTP12. In a similar manner to normal cortical plasticity, GABAergic mechanisms may mediate changes in neuronal excitability that play a central role in functional recovery of peri-infarct cortex after stroke. Cortical GABAergic signaling through GABA A Rs is divided into synaptic (phasic) and extrasynaptic (tonic) components. Tonically active extrasynaptic GABA A Rs set an excitability threshold for neurons13,14. Extrasynaptic GABA A Rs primarily consist of α5 or δ-subunit-containing receptors13,14. Pharmacological and genetic knockdown of α5-GABA A Rs enhance LTP and improve performance on learning and memory tasks15, 16. The selective effects of extrasynaptic GABA A Rs on cellular excitability and plasticity, and the evidence that changes in neuronal excitability underlie functional reorganization in periinfarct cortex, suggest that this system may play a role in post-stroke recovery. 
We find that stroke increases tonic GABAergic transmission in peri-infarct cortex and dampening this tonic inhibition produces an early and robust gain of motor recovery post-stroke ( Supplementary Fig. 1, schematic summary). We examined neuronal excitability in the peri-infarct cortex of mice during the period of recovery and reorganization after a photothrombotic stroke to forelimb motor cortex. Whole-cell voltage-clamp recordings in in vitro brain slices prepared at 3-, 7-, and 14-days post-stroke (Fig. 1a) showed a significant increase in GABA A R-mediated tonic inhibition (I tonic ) in layer 2/3 pyramidal neurons, compared to neurons from sham controls (control: 8.05±0.80 pA/pF, n=24, vs. post-stroke: 13.6±1.41 pA/pF, n=45, Mann-Whitney U-test, P<0.05; Fig. 1b). I tonic remained elevated from 3-to 14-days post-stroke ( Supplementary Fig. 2a). The mean phasic excitation remained unchanged over the 2-week period after stroke ( Supplementary Fig. 3a, b). The mean phasic inhibition was unchanged except for a transient decrease at 7-days post-stroke ( Supplementary Fig. 3c, d). The resting membrane and GABA reversal potentials were both unchanged ( Supplementary Fig. 3e, f). We hypothesized that the chronically elevated tonic inhibition in the peri-infarct region may antagonize the neuronal plasticity required for functional recovery after stroke. Therefore, we tested whether reducing the excessive tonic inhibition would improve function recovery. Of the two GABA A Rs subtypes shown to underlie tonic inhibition in cortical neurons, the α5-GABA A Rs can be antagonized specifically by L655,708, a benzodiazepine inverse agonist16, while no specific antagonist exists for δ-GABA A Rs. L655,708 (100 nM) decreased I tonic in control neurons by −13.3±5.2% (n=4), but produced a significantly greater decrease in post-stroke neurons (−30.0±4.1%, n=13; P<0.05; Fig. 2d, e), which reverted I tonic back to control level (control: see above vs. post-stroke + L655,708: 140.8±18.5pA, n=13; P=0.702; Fig. 2f). L655,708 produced only minimal effects on phasic inhibitory currents in both post-stroke and control conditions ( Supplementary Fig. 4b). We next tested the effects of reducing tonic inhibition on functional recovery after stroke, using measures of fore-and hindlimb motor control. Stroke produced an increase in the number of foot-faults in grid-walking task, and a decrease in forelimb asymmetry in the cylinder task from 7-days post-stroke. Chronic treatment with L655,708 starting 3-days post-stroke resulted in a dose-dependent maximal gain of function beginning from 7-days post-stroke in both tasks (P<0.001; Fig. 3a-c). Acute treatment with L655,708 just prior to behavioral testing had a minimal effect on stroke recovery ( Supplementary Fig. 7). To assess the necessity of long-term administration, we discontinued L655,708 treatment after 2 weeks and found a decrease in functional gains, although these mice still performed better than vehicle-treated stroke controls ( Supplementary Fig. 6). To further corroborate the role of reduced tonic inhibition in enhancing stroke recovery, we tested mice with deletions of either α5 or δ-subunit-containing GABA A Rs (Gabra5 −/− and Gabrd −/− , Methods)18. Gabra5 −/− animals showed significantly better motor recovery poststroke, comparable to L655,708-treated wild-type animals ( Fig. 3d-f). In addition, Gabra5 −/− animals displayed a significant reduction in hindlimb foot-faults (Fig. 3e). 
Gabrd −/− animals also showed significant improvements in motor recovery ( Fig. 3d-f), but to a lesser extent than the Gabra5 −/− mice. Thus, modulation of α5GABA A Rs affords greater functional gains in motor recovery than δGABA A Rs, and genetic removal of α5GABA A Rs produces a more widespread increase in motor recovery than pharmacological antagonism. Administration of L655,708 to Gabrd −/− mice produced an even greater recovery, confirming the beneficial effect of reducing peri-infarct tonic inhibition (Fig. 3a, c). Low/sub-seizure dosing of picrotoxin (PTX: 0.1mg/kg, i.p.), a use-dependent GABA A R antagonist, enhances learning and memory in transgenic mouse models of Alzheimer's and other cognitive impairments by reversing an increased GABAergic inhibitory tone, acting at both synaptic and extrasynaptic GABA A Rs19,20. The pharmacological effects of PTX on reducing phasic and tonic inhibition were not altered after stroke (Supplementary Table II). PTX given to animals from 3-days post-stroke resulted in a significant gain of forelimb function on the grid-walking task compared to vehicle-treated stroke controls (P<0.05; Supplementary Fig. 9a). No significant changes were observed in hindlimb function or forelimb asymmetry ( Supplementary Fig. 9b, c). Combined L655,708 + PTX treatment showed similar initial functional gains compared to stroke + L655,708 alone; however, prolonged PTX + L655,708 treatment produced a deterioration in motor function such that the performance progressively worsened at late periods after stroke ( Supplementary Fig. 9). These data suggest that increasing cortical excitability too far or reducing phasic inhibition negatively impact functional recovery. An important element in stroke treatment is the timing of drug delivery. GABA A R agonists administered at the time of stroke decrease stroke size20. Therefore, dampening tonic inhibition too early after stroke may produce an opposite effect, i.e. increased cell death. To test this, we assessed stroke volume at 7-days post-stroke, in animals treated with 1) vehicle, 2) L655,708 from stroke onset, and 3) L655,708 from day-3 post-stroke. Stroke volumes were similar between mice treated with vehicle and L655,708 from day-3 (Fig. 4). In contrast, stroke volume was significantly increased in animals treated with L655,708 from stroke onset (P<0.05; Fig. 4). These data indicate a critical timeframe for therapeutically dampening tonic inhibition post-stroke: reduction too early would exacerbate stroke damage, while delaying treatment by 3-days would promote functional recovery without altering stroke size. Genetic deletion of α5or δ-GABA A Rs did not affect infarct size or neuronal number in peri-infarct cortex ( Supplementary Fig. 8). Unlike pharmacological antagonism of α5-GABA A R-mediated inhibition, in Gabra5 −/− and Gabrd −/− mice, the genomic absence of one of the extrasynaptic GABA A Rs may trigger compensatory upregulation of the other receptor13 and thus obscuring their roles in neuroprotection immediately after stroke. Current therapies that promote functional recovery after stroke are limited to physical rehabilitation4. Here, by identifying an excessive tonic inhibition after stroke, we have found promising new targets for pharmacological interventions to promote recovery. The elevated tonic inhibition in cortical pyramidal neurons occurs during precisely the same time-period important for cortical map plasticity and recovery1-3. 
Alterations in other aspects of cortical signaling have also been described during this period, including altered GABA A R subunits, glutamate receptor expression and neuronal network properties21-24. Protein levels of GAT-1 and GAT-3/4 were shown to be decreased in peri-infarct cortex in some rodent stroke models, and reactive astrocytes exhibit reduced uptake of other neurotransmitters23. However, there are conflicting data on GABA A R levels after stroke23-26. We found decreased protein level and compromised function of GAT-3/4 in peri-infarct cortex. The elevated tonic inhibition may curtail cortical plasticity and spontaneous recovery after stroke, and is consistent with tonic GABAergic inhibition exerting a causal role in limiting motor recovery in stroke. Non-selectively decreasing GABAergic tone facilitates neuronal plasticity in genetic models of cognitive diseases19,20. We show for the first time that antagonizing an elevated tonic inhibition enhances motor recovery after stroke, consistent with the idea that molecular and cellular events of neuronal plasticity are dampened in the peri-infarct zone, and promoting this plasticity facilitates functional recovery. Together, our results have identified novel pharmacological targets and provide a rational basis for developing future therapies to promote recovery after stroke and possibly other brain injuries. Methods Summary Photothrombotic model of focal stroke Focal stroke was induced by photothrombosis in adult male C57BL/6 mice (age 2-4 month) as described by27. Slice preparation for electrophysiology Following decapitation, brains were rapidly removed and placed into a N-methyl-Dglucamine (NMDG)-based cutting solution to enhance neuronal viability28. Coronal slices (350μm) were cut and transferred to an interface-style chamber containing artificial cerebrospinal fluid as previously described13. Recordings were made from intact periinfarct cortical layer-2/3 pyramidal neurons and analyzed as previously described13. In vivodrug administration L655,708 was dissolved in DMSO and then diluted 1:1 in 0.9% saline. L655,708-filled ALZET-1002 pumps were implanted at 3-days post-stroke and replaced every two weeks. In acute administration studies, 5mg/kg L655,708 was administered (i.p.) 30 minutes prior to testing. The concentration in one minipump, 5mM, delivers a 200ug/kg/day dose in mice. With one or two minipumps implanted, this provides a dose escalation. PTX (0.1mg/kg i.p. bi-daily) starting 3-days post-stroke was administered alone or in concert with L655,708. Behavioral analysis Mice were videotaped during walking and exploratory behavior in the grid-walking and cylinder/rearing tasks, tested at approximately the same time each day during the nocturnal period29. Baseline behavioral measurements were obtained one week prior to surgery. Poststroke animals were assessed at weeks 1, 2, 4, and 6. Infarct-size measurement-For the histological assessment of infarct size, brains were processed at 7-days post-stroke using cresyl violet as previously described30. Statistical analysis-All data are expressed as mean ± s.e.m. For electrophysiological comparisons between control vs. post-stroke, Mann-Whitney non-parametric test was used. For multiple comparisons across post-stroke days, one-way analysis of variances (ANOVA) and Newman-Keuls' multiple pair-wise comparisons for post-hoc comparisons were used. 
For behavioral testing, differences between treatment groups were analyzed using two-way ANOVA with repeated measures and Newman-Keuls' multiple pair-wise comparisons. The level of significance was set at P<0.05. METHODS Photothrombotic model of focal stroke-Under isoflurane anesthesia (2-2.5% in a 70% N 2 O / 30% O 2 mixture), 2-4 month-old adult C57Bl6 (Charles River, Wilmington, MA) male mice were placed in a stereotactic apparatus, the skull exposed through a midline incision, cleared of connective tissue and dried. A cold light source (KL1500 LCD, Zeiss) attached to a 40× objective giving a 2mm diameter illumination was positioned 1.5mm lateral from Bregma, and 0.2mL of Rose Bengal solution (Sigma; 10 g/L in normal saline, i.p.) was administered. After 5-min, the brain was illuminated through the intact skull for 15-min. Rose-bengal produces singlet oxygen under light excitation, which damages and occludes vascular endothelium, resulting in focal cortical stroke under the region of illumination (Fig. 4), circumscribed by peri-infarct tissue with normal neuronal cell number ( Supplementary Fig. 8). Two to four month-old adult male Gabra5 −/− and Gabard −/− mice1 received stroke as above. These mice had been back-crossed to C57Bl6 in excess of 15 generations, and were compared in behavioral studies to wild-type C57Bl6. Body temperature was maintained at 36.9 ± 0.4°C with a heating pad throughout the operation and did not vary by drug or genetic condition. This stroke method produces a small stroke in the mouse forelimb region of the motor cortex (Fig. 4). Sample size was 10 per group for Gabara5−/− and Gabard−/− in stroke/behavioral studies. Sample size was 8 per group for each condition in dosing of L655,708 (Fig. 3). Blood pressure (systolic and diastolic) and heart rate were measured in separate cohorts of mice in wild-type (C57Bl6), with and without L655,708 administration via ALZET minipumps from 3-days post-stroke, and in Gabra5 −/− and Gabrd −/− mice, before during and after stroke, using a standard non-invasive tail-cuff method (Coda, Kent Scientific, Torrington, CT). There were no significant differences in heart rate or blood pressure by treatment or genotype (Supplementary Table I). All studies in this manuscript complied with the STAIR (Stroke Therapy Academic Industry Roundtable) criteria for stroke investigations in measuring physiological parameters, monitoring treatment effects for at least one month, analyzing treatment effects blinded to conditions, utilizing dose-response studies, and use of a drug administration route with blood brain barrier penetration. Neurons were voltage-clamped in whole-cell configuration using a MultiClamp-700A amplifier (Molecular Devices); all recordings were low-pass-filtered at 3 kHz (8-pole Bessel) and digitized online at 10 kHz (National Instruments PCI-MIO-16E-4 board). Series resistance and whole-cell capacitance were estimated from fast transients evoked by a 5mV step and compensated to 75%. EPSCs and IPSCs were recorded by voltage-clamping sequentially at −70mV and +10mV, respectively. All drugs were purchased from Sigma or Tocris. L-655,708 and SNAP-5114 were dissolved in DMSO then diluted 1:1000 in H 2 O. NO-711, Gabazine and GABA were dissolved in H 2 O. Tonic inhibitory current and mean phasic current determination-Custom- written macros running under IGOR Pro v.6.0 (WaveMetrics, Inc.) were used to analyze the digitized recordings to determine the values of tonic currents and mean phasic currents, as previously described1. 
I tonic was recorded as the reduction in baseline holding currents (I hold) after bath-applying a saturating amount (>100 μM) of the GABA A R antagonist SR-95531 (gabazine), while voltage-clamping at +10 mV. NO-711, SNAP-5114 and L-655,708 were added to the recording ACSF via perfusion and their effects on I tonic were recorded as the post-drug shift in I hold. Drug perfusion was continued until the shifting I hold remained steady for 1-2 min. To determine the mean phasic current (I mean), a 60-s segment containing either EPSCs or IPSCs was selected, and an all-point histogram was plotted for every 10,000 points (every 1 s), smoothed, and fitted with a Gaussian to obtain the mean baseline current. All baseline mean values were then plotted and linear trends subtracted to normalize the mean baseline current to 0 pA. After baseline normalization, the values of each 10,000 points (each 1 s) were averaged to yield the value of I mean (in pA/s) for each 1-s epoch. The averaged I mean value of a 60-s segment was reported as the phasic I mean value for either the spontaneous EPSCs or IPSCs. Synaptic event kinetics (i.e. frequency, peak amplitude, 10-90% rise time, and weighted decay time constant) were analyzed by custom-written LabView-based software (EVAN), as previously described1. For comparison of the IPSC peak amplitudes under control and PTX-treated conditions (Supplementary Table II), the largest-amplitude count-matched method was used, whereby the amplitude values in a given recording were sorted and the largest x events under the control condition were averaged and compared with the average of an equally matched x events under the PTX condition, with x determined by the number of events detected under the 10 μM PTX condition. This method circumvents the erroneous comparison of average amplitudes when considering the effects of a receptor antagonist that reduces the smaller events (in the control condition) below the noise level.
Measurements of neuronal resting membrane potential (V rest) and GABA reversal potential (E GABA)-To estimate V rest, the cell-attached recording technique2 was used. Briefly, depolarizing voltage ramps (−100 to +200 mV) were applied to cell-attached patches to activate voltage-gated K+ channels and establish the K+ current reversal potential, which provides a measure of V rest, given near-equimolar K+ inside the cell and the pipette. E GABA was estimated by measuring the K+ reversal potential after activating GABA A Rs with 50 μM muscimol. Recordings were made using a solution containing the following (in mM): K-gluconate (135), KCl (5), MgCl2 (2), HEPES (10), EGTA (0.1), Na-ATP (4), Na-GTP (0.3), pH 7.3, 273 mOsm/l. A junction potential of 9 mV was measured and then subtracted from the voltage values of all measurements.
Fitting of multiple distributions to cumulative probability plots-The fitting of multiple distributions to a cumulative probability plot (Supplementary Fig. 2) was done as follows. Cumulative probabilities of the variable x (i.e., P(x)) were calculated and fitted by one or more normal curves, each approximated by a logistic function3, P(x) = R1/(1 + exp[(x̄1 − x)/p1]) + … + Rn/(1 + exp[(x̄n − x)/pn]), where R1, …, Rn are the ratios of the n normal distributions (such that R1 + … + Rn = 1), x̄1, …, x̄n are the individual means, and p1, …, pn are steepness factors related to the n standard deviations (SD1, …, SDn).
Behavioral analysis
Grid-walking Task-The grid-walking apparatus was manufactured as previously described4, using 12-mm square wire mesh with a grid area of 32 cm / 20 cm / 50 cm (length / width / height).
A mirror was placed beneath the apparatus to allow video footage in order to assess the animals' stepping errors (i.e. `footfaults'). Each mouse was placed individually atop of the elevated wire grid and allowed to freely walk for a period of 5min. Video footage was analyzed offline by raters blind as to the treatment groups. The total number of footfaults for each limb, along with the total number of non-footfault steps, was counted, and a ratio between footfaults and total-steps-taken calculated. Percent footfaults were calculated by: [#footfaults / (#footfaults + #non-footfault steps) * 100]. A ratio between footfaults and total steps taken was used to take into account differences in the degree of locomotion between animals and trials. A step was considered a footfault if it was not providing support and the foot went through the grid hole. Further, if an animal was resting with the grid at the level of the wrist, this was also considered a fault. If the grid was anywhere forward of the wrist area then this was considered as a normal step. Spontaneous Forelimb Task (Cylinder Task)-The spontaneous forelimb task encourages the use of forelimbs for vertical wall exploration / press in a cylinder5. When placed in a cylinder, the animal rears to a standing position, whilst supporting its weight with either one or both of its forelimbs on the side of the cylinder wall. Animals were placed inside a Plexiglas cylinder (15cm in height with a diameter of 10cm was used) and videotaped for 5min. Videotape footage of animals in the cylinder were evaluated quantitatively in order to determine forelimb preference during vertical exploratory movements. While the video footage was played in slow motion (1/5th real time speed), the time (sec) during each rear that each animal spent on either the right forelimb, the left forelimb, or on both forelimbs were calculated. Only rears in which both forelimbs could be clearly seen were timed. The percentage of time spent on each limb was calculated and these data were used to derive an SFL asymmetry index (% ipsilateral use-% contralateral use). The `contact time' method of examining the behavior was chosen over the `contact placement' method, as described by5, as it takes into account the slips that often occur during a bilateral wall press post-photothrombosis. Western Blot-Seven days after stroke mice were decapitated, the brains rapidly removed and peri-infarct cortex microdissected and frozen (n=5). The equivalent region of cortex was taken in control, non-operated mice (n=3). Samples were homogenized in radioimmunoprecipitation (RIPA) buffer (Pierce; Rockford, IL) and centrifuged at 20000×g at 4°C for 10 minutes. Supernatant was collected as protein extract and stored at −80°C. Western blot was performed as described5. 100 ug of protein from each sample was diluted in 7.5 ul of 2× SDS-sample buffer gel (Invitrogen; Carlsbad, CA) containing dithiothreitol (DTT) (Sigma, St. Louis, MO) and brought to a final volume of 15 ul with RIPA buffer. Samples were denatured at 95°C, loaded on to a 4-12 % gradient Tris-Glycine gel (Invitrogen; Carlsbad, CA), separated via SDS-PAGE, and then transferred to HYBOND™-P (pvdf) membrane (Amersham; Piscataway, NJ) at 30 volts for 2 hours. Membranes were rinsed and blocked overnight at 4°C. Membranes probed with antibodies against GABA Transporter 3 (Rbt Anti-GAT-3 1:1000; Millipore; Temcula, CA), and GABA Transporter 1 (Rbt Anti-GAT-1 1:200; Millipore; Temecula, CA). 
Following successive washes, membranes were incubated in IgG Donkey Anti-Rabbit HRP-labeled secondary (1:6000; Jackson; West Grove, PA) for one hour at room temperature. Membranes were incubated in ECL PLUS (Amersham; Piscataaway, NJ)) and chemiluminescence was detected using Fluorochem (Alpha Innotech, San Leandro, CA). Membranes were then re-probed for one hour at room temperature with GAPDH (1:2500; Abcam; Cambridge, MA) and Donkey Anti-Rbt-HRP (1:10000; Jackson; West Grove, PA) as an endogenous control protein to ensure equal loading. Immunoblotting was performed in triplicate for each antibody. Adobe Photoshop software (Adobe Systems Inc, San Jose, CA) was used for densitometric analysis of all blots. Supplementary Material Refer to Web version on PubMed Central for supplementary material. a, Blocking GAT-1 (NO-711) produced a higher % increase in I tonic after stroke; combined blockade of GAT-1 and GAT-3/4 (NO-711 + SNAP-5114) produced a substantial I tonic increase in controls but only an increase equivalent to blocking GAT-1 alone after stroke. b,c, I tonic in sequential drug applications. Note the lack of response to SNAP-5114 application in the post-stroke slice. d, L655,708 reduced I tonic . e, L655,708 significantly decreased post-stroke I tonic . f, Drug treatment reverted post-stroke I tonic to near-control level (asterisk: P<0.05; n.s.: no significance, bar graphs represent mean ± s.e.m.).
2016-05-03T22:56:22.947Z
2010-10-06T00:00:00.000
{ "year": 2010, "sha1": "0ddd0ccbb1e9b4c7bfa4e05ad859cc09df2d9214", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc3058798?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "ff86ea836dae311f19ca6490749a6897377e4ab1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
214710197
pes2o/s2orc
v3-fos-license
Impact of Parallel Computing on Study of Time Evolution of a Quantum Impurity System in Response to a Quench
In an arbitrary system subjected to a quench or an external field that varies the system parameters, the number of degrees of freedom doubles in comparison to that of an isolated system. In this study, we consider a quantum impurity system subjected to a quench and measure the corresponding time evolution of the spectral function, which originates from time-resolved photoemission spectroscopy. Because of the large number of degrees of freedom, the expression for the time-dependent spectral function is considerably more complicated than that of the time-independent spectral function, and the calculation is therefore extremely time consuming. In this paper, we estimate the scale of the time consumption of such a calculation in comparison to that of the time-independent calculation, and present our solution to the problem using parallel computing, implementing both MPI and OpenMP in the calculation. We also discuss the possibility of exploiting parallel computing with GPUs in the near future, and show preliminary results for the time-dependent spectral function.
Introduction
Numerical methods have a great impact on studies of strongly correlated condensed-matter systems, where the strong Coulomb interaction between electrons cannot be treated by perturbative methods. For example, for the well-known Kondo effect it was shown in the 1960s that first-order perturbation theory gives the wrong ground state [1], while the calculation up to second order gives an unphysical divergent resistance at low temperature [2], i.e. the Kondo problem. This problem was not fully solved until the study with the numerical renormalization group (NRG) method [3]. Studies of strongly correlated systems now grow diversely into many topics: finding an exotic Kondo effect in certain actinide/lanthanide ions in metals [4], keeping a topological phase by using the spin-orbit coupling [5], and tracking the time evolution of systems as well as finding the nonequilibrium steady state when systems are subjected to an external field [6]. In such studies a large number of degrees of freedom is involved, and serial numerical calculation may take an infeasibly long computing time. Parallel computing is the answer to this problem: a big calculation is divided into many smaller jobs, and these jobs are calculated in parallel. The application programming interfaces created for parallel computers are classified by the assumption they make about the underlying memory architecture: shared memory and distributed memory. While Open Multi-Processing (OpenMP) is the most widely used in the shared-memory class, the Message Passing Interface (MPI) is the most widely used in the distributed-memory class. In this paper, we present a case study showing the impact of parallel computing by solving the numerical problem in the time evolution of a strongly correlated impurity system subjected to a quench. The outline of the paper is as follows. In Sec. II., we describe the model and the time-dependent NRG formalism used to study the time evolution of a quantum impurity system following a quench. In Sec. III., we present the numerical problem in calculating the time-dependent spectral function of the impurity system, and the solution using parallel computing with OpenMP and MPI. In Sec.
IV., the success of using parallel computing is shown via the trend of decreasing time consumption as the number of threads increases on two different Central Processing Units (CPUs), and via the comparison between the speedup of real calculations and the prediction of Amdahl's law. From these results, we discuss the possible use of GPUs to accelerate the calculations. The time evolution of the impurity system is represented via the time-dependent spectral function in Sec. V. The conclusion and outlook are presented in Sec. VI.
Model
To describe the quantum impurity system subjected to a quench, we consider a time-dependent Hamiltonian in which the quench at time t=0 is represented via the change of the local energy level, the number operator for the local electron with spin σ appears, and ε_k is the kinetic energy of the conduction electrons with a constant density of states. The time evolution of the system can be well represented via the time-dependent spectral function, since it gives the probability of finding an electron at a specified energy and time. However, because the time-dependent spectral function involves more degrees of freedom than its time-independent counterpart, one cannot define it easily via a Lehmann representation. Therefore, one should define the time-dependent spectral function based on experimental observations. In this paper, we consider the spectral function originating from time-resolved spectroscopy with the pump-probe technique [7,8], in which the photoemission-current intensity takes a form where the probe-pulse shape is taken to be Gaussian with a finite pulse width, t_delay is the time delay between the pump and probe pulses, and the time-dependent spectral function of interest is derived from the lesser Green's function. In this study, we will calculate the time-dependent spectral function, which measures the time evolution of the occupied density of states.
Formalism
Using the time-dependent numerical renormalization group (TDNRG) method [9], we obtain the expression for the time-dependent spectral function.
Parallel computing
In the last section, we introduced the time-dependent spectral function originating from time-resolved photoemission spectroscopy. The calculation of this time-dependent observable is challenging. In the last two terms, since all four indices r, s, r1, and s1 appear in the denominator, one cannot rewrite the summation over the four indices as matrix multiplications for efficient evaluation with BLAS routines. Therefore, one has to run all four loops together to calculate this expression. In a specific calculation, evaluating the first two terms of Eq. (4), which involve three loops, is 100-200 times faster than evaluating the last two terms, which involve four loops. In contrast, the time-independent spectral function involves only two loops, since the summation over three indices can normally be recast as matrix multiplications [12,13], and such calculations take only on the order of minutes, depending on the computing system. Compared with the time-independent spectral function, calculating the time-dependent spectral function presented in Sec. II. is therefore extremely heavy, and serial computing is not sufficient. Parallel computing is the answer to this problem. Two classes of parallel computing are considered in our study: shared memory with Open Multi-Processing (OpenMP) and distributed memory with the Message Passing Interface (MPI); a minimal sketch of how the four-index summation can be threaded is given below.
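As an illustration of the shared-memory part only, the following sketch shows how a four-index summation of this kind can be threaded with OpenMP. The array dimension N and the placeholder term() kernel are assumptions made for the example; they stand in for, but do not reproduce, the TDNRG matrix elements and energy denominators of Eq. (4).

```cpp
#include <omp.h>
#include <cstdio>

// Placeholder for one contribution to the four-index terms of the spectral
// function; in the real calculation this would involve TDNRG matrix elements,
// reduced density-matrix weights, and an energy denominator in all four indices.
static double term(int r, int s, int r1, int s1) {
    return 1.0 / (1.0 + r + 2.0 * s + 3.0 * r1 + 4.0 * s1);  // dummy kernel
}

int main() {
    const int N = 200;   // illustrative dimension of each index
    double sum = 0.0;

    // Collapse the two outer loops so the iteration space is shared evenly
    // among threads; the reduction avoids a race on the accumulator.
    #pragma omp parallel for collapse(2) reduction(+ : sum) schedule(static)
    for (int r = 0; r < N; ++r)
        for (int s = 0; s < N; ++s)
            for (int r1 = 0; r1 < N; ++r1)
                for (int s1 = 0; s1 < N; ++s1)
                    sum += term(r, s, r1, s1);

    std::printf("threads = %d, sum = %.6f\n", omp_get_max_threads(), sum);
    return 0;
}
```

Compiled with, for example, g++ -O2 -fopenmp, the thread count can then be varied through the OMP_NUM_THREADS environment variable, which is how a time-consumption-versus-threads scan of the kind reported below would typically be driven.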
In parallel computing with MPI, every parallel process works in its own memory space, which is independent of the others; passing messages between processes is required to transfer data. In parallel computing with OpenMP, by contrast, the parallel work is carried out by threads, which are all able to access the shared memory. Therefore, unlike MPI, OpenMP does not require the overhead of message passing. In our study, we use hybrid parallel computing with both shared and distributed memory. The parallel computing with distributed memory is used for the two NRG calculations of the matrix elements.
Time consumption vs. number of threads
As presented in the last section, OpenMP is applied to the summation over four indices in Eq. (4). In this section, we show the efficiency of parallel computing via the trend of decreasing time consumption with an increasing number of threads. The calculations were done on two different computing systems. In the first system, one node contains two Intel Xeon E5-2680 v3 Haswell CPUs; each node has 24 physical cores and, thanks to two-way hyper-threading, 48 logical threads. In the second system, one node contains one Intel Xeon Phi 7250-F Knights Landing CPU; the number of physical cores per node is 68 and, with four-way hyper-threading, the number of logical threads is 272. The CPU clock is 2.5 GHz in the first system and 1.4 GHz in the second system. The trend is similar for the calculations on the two systems. Moreover, even though there are more threads in the KNL CPU than in the Haswell CPU, the CPU clock of the KNL is slower than that of the Haswell; therefore, the total time consumption of a calculation on a single node of each system, using the maximum number of threads, is similar.
Amdahl's law
In parallel computing, Amdahl's law predicts the speedup in latency of the execution of a task at fixed workload as follows [14]:
S_latency = 1 / [(1 − p) + p/s],   (5)
In words, it depends on the proportion p of the execution time that the part benefiting from parallel computing originally occupies, and on the speedup s of that part. If we assume that this speedup ideally equals the number of physical threads, we can predict, for a known value of p, the ideal speedup of a calculation. Figure 2 shows the prediction of the speedup by Amdahl's law and the speedup of real calculations with p=99.3%, which means that for every 1000 minutes needed to calculate the whole workload serially, 993 minutes are spent in the part that benefits from parallel computing. We can see that, up to the number of physical cores, the speedup of the real calculation matches the prediction of Amdahl's law very well. As the number of threads is increased further, the speedup of the real calculations deviates from the ideal speedup. This is due to the use of logical threads: the speedup does not increase linearly with the number of threads. However, parallel computing with OpenMP can only use up to the maximum number of threads in a single node, which is limited to 48 for the Haswell CPU and 272 for the KNL CPU. From the prediction of Amdahl's law, a calculation with such a large proportion benefiting from parallel computing could be sped up even further if the number of threads were more than 1000 (a small numerical check of this prediction is sketched below). Therefore, using a Graphics Processing Unit (GPU), with a large number of cores reaching into the thousands, can be the future of our calculation. Figure 3 shows our preliminary results for the time-dependent spectral function defined in Sec. II.
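As a quick numerical check of the Amdahl's-law figures quoted above, the following sketch assumes the parallel fraction p = 0.993 given in the text and takes s equal to the thread count; the listed thread counts correspond to the physical cores and logical threads of the two systems plus an illustrative many-thread case, and are assumptions for the example rather than measured data.

```cpp
#include <cstdio>

// Amdahl's law, Eq. (5): S_latency = 1 / ((1 - p) + p / s),
// where p is the parallel fraction and s the speedup of the parallel part.
double amdahl_speedup(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

int main() {
    const double p = 0.993;  // parallel fraction quoted in the text
    // Physical cores and logical threads of the Haswell and KNL nodes,
    // plus a hypothetical >1000-thread (GPU-like) case.
    const int thread_counts[] = {24, 48, 68, 272, 1024};
    for (int s : thread_counts)
        std::printf("s = %4d  ->  predicted speedup = %6.1f\n",
                    s, amdahl_speedup(p, static_cast<double>(s)));
    return 0;
}
```

Because the speedup is bounded above by 1/(1 − p) ≈ 143 for p = 0.993, the predicted gain keeps growing past a thousand threads but with diminishing returns, consistent with the suggestion above that a many-core GPU could push the calculation further.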
Preliminary result of the time-dependent spectral function
From t=0, the quench starts to move the local energy level from low energy to higher energy and the Coulomb repulsion is switched to a smaller value; accordingly, the side peak of the spectral function evolves gradually with time, and the peak at the Fermi level gradually broadens. Since this observable originates from time-resolved photoemission spectroscopy, the spectral function here shows the time-dependent occupied density of states, while inverse photoemission spectroscopy (IPES) gives the unoccupied density of states. Therefore, one may naturally expect that time-resolved IPES can give the time-dependent unoccupied density of states. This interesting observation will be studied in the near future.
Conclusions
In this paper, we have shown the computing problem in calculating the time-dependent spectral function originating from time-resolved photoemission spectroscopy. The problem is due to the sums over four different indices. We solve the problem mainly by using parallel computing with shared memory, in particular OpenMP. The speedup is shown to be nearly equal to the number of physical threads, while logical threads give a smaller additional speedup. We also present the prospect of speeding up the calculation further with the use of GPUs. We note that later versions of MPI can also work with shared memory; in this paper, however, we only use MPI for parallel computing with distributed memory. The preliminary results for the time-dependent spectral function are shown to give the time-dependent occupied density of states, which can be validated by time-resolved photoemission. We also propose the possible observation of the time-dependent unoccupied density of states.
2020-03-12T10:22:02.345Z
2020-03-09T00:00:00.000
{ "year": 2020, "sha1": "ba544db2f839ab7d99926123f269dc3dc833d71c", "oa_license": null, "oa_url": "https://js.vnu.edu.vn/MaP/article/download/4453/4066", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fee5e0605b4d638d70982d50fcd8e1cb141068e0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
231744060
pes2o/s2orc
v3-fos-license
Reconstructed evolutionary history of the yeast septins Cdc11 and Shs1 Abstract Septins are GTP-binding proteins conserved across metazoans. They can polymerize into extended filaments and, hence, are considered a component of the cytoskeleton. The number of individual septins varies across the tree of life—yeast (Saccharomyces cerevisiae) has seven distinct subunits, a nematode (Caenorhabditis elegans) has two, and humans have 13. However, the overall geometric unit (an apolar hetero-octameric protomer and filaments assembled there from) has been conserved. To understand septin evolutionary variation, we focused on a related pair of yeast subunits (Cdc11 and Shs1) that appear to have arisen from gene duplication within the fungal clade. Either Cdc11 or Shs1 occupies the terminal position within a hetero-octamer, yet Cdc11 is essential for septin function and cell viability, whereas Shs1 is not. To discern the molecular basis of this divergence, we utilized ancestral gene reconstruction to predict, synthesize, and experimentally examine the most recent common ancestor (“Anc.11-S”) of Cdc11 and Shs1. Anc.11-S was able to occupy the terminal position within an octamer, just like the modern subunits. Although Anc.11-S supplied many of the known functions of Cdc11, it was unable to replace the distinct function(s) of Shs1. To further evaluate the history of Shs1, additional intermediates along a proposed trajectory from Anc.11-S to yeast Shs1 were generated and tested. We demonstrate that multiple events contributed to the current properties of Shs1: (1) loss of Shs1–Shs1 self-association early after duplication, (2) co-evolution of heterotypic Cdc11–Shs1 interaction between neighboring hetero-octamers, and (3) eventual repurposing and acquisition of novel function(s) for its C-terminal extension domain. Thus, a pair of duplicated proteins, despite constraints imposed by assembly into a highly conserved multi-subunit structure, could evolve new functionality via a complex evolutionary pathway. Introduction Septins comprise a fourth cytoskeletal element, conserved from fungi to metazoans (Pan et al. 2007;Nishihama et al. 2011;Auxier et al. 2019). Each septin contains a GTP-binding fold (G domain) preceded by an N-terminal extension (NTE) of variable length and trailed by a C-terminal extension (CTE) of variable length. A given septin associates with other septins in a defined order into linear hetero-oligomeric complexes, which, in turn, have the capacity to assemble into higher-order structures. Similar to other cytoskeletal components, septin-based structures can adopt unique architectures and geometries in vivo and in vitro, including linear filaments, arcs, spirals, hourglasses, and rings (Bertin et al. 2008(Bertin et al. , 2012Garcia et al. 2011;Ong et al. 2014). Rather than purely contributing to cell shape, septins reportedly have numerous functions in different species, cell types, and subcellular locations. Such functions include: (1) serving as a diffusion barrier tightly associated with the membrane between two distinct cellular compartments (such as in dividing cells, or to separate dendritic spines from the cell body in neurons) (Dobbelaere and Barral 2004;Caudron and Barral 2009), (2) sensing membrane curvature (Bridges et al. 2016;Cannon et al. 2019), and (3) acting as a platform for recruitment of septin-associated proteins for information exchange via signaling pathways (Neubauer and Zieger 2017;Perez et al. 2016). 
Many of these functions have been conserved across eukaryotes; importantly, septin dysfunction in humans has been linked to a number of diseases, including male infertility, cancer, and neurodegenerative diseases (Shen et al. 2017;Wang et al. 2018;Xu et al. 2018;Marcus et al. 2019). Early studies on septins focused on the unicellular eukaryote Saccharomyces cerevisiae. In this yeast, seven genes encoding distinct septins were identified-CDC3, CDC10, CDC11, CDC12, SHS1, SPR3, and SPR28-the latter two are only expressed and functional during sporulation (Kaback and Feldberg 1985;Ozsarac et al. 1995;De Virgilio et al. 1996;Garcia et al. 2016;Heasley and McMurray 2016). Disruption of CDC3, CDC10, CDC11, or CDC12 prevented completion of cytokinesis and resulted in cell death (Hartwell 1978); labeling experiments later determined that the cognate proteins localize to the division site (bud neck) between a mother and daughter cell undergoing mitosis and form a complex 3D super-structure there (Byers and Goetsch 1976;Haarer and Pringle 1987;Cid et al. 1998;Bertin et al. 2012). Extensive genetic and biochemical approaches determined that two copies of each of the four essential septins form a linear apolar heterooctamer with a twofold axis of symmetry (Cdc11-Cdc12-Cdc3-Cdc10-Cdc10-Cdc3-Cdc12-Cdc11), that hetero-octamers polymerize end-to-end via Cdc11-Cdc11 interaction to form long, laterally paired filaments, and that formation of filaments is essential for septin function in vivo (Bertin et al. 2008;McMurray et al. 2011). Subsequent work showed that the fifth mitotically expressed septin, Shs1, could also occupy the terminal position, thus forming Shs1-Cdc12-Cdc3-Cdc10-Cdc10-Cdc3-Cdc12-Shs1 hetero-octamers McMurray et al. 2011;Bertin et al. 2012;Booth et al. 2015;Finnigan, Takagi, et al. 2015). However, it is clear that there are significant functional differences between Cdc11 and Shs1; the former subunit is essential for filament formation and viability in vivo, whereas Shs1 is nonessential under many standard growth conditions (Iwase et al. 2007;Garcia et al. 2011). Use of sensitized genetic backgrounds, structural data, and biochemical assays revealed certain unique roles for Shs1 within S. cerevisiae and related fungal species that influence filament curvature and/or assembly state, association with the plasma membrane, and coordinated recruitment of non-septin binding partners, such as the myosin-binding protein Bni5 (Egelhofer et al. 2008;Buttery et al. 2012;Meseroll et al. 2012Meseroll et al. , 2013Booth et al. 2015;Finnigan, Booth, et al. 2015;Finnigan, Takagi, et al. 2015). The CTEs of both Cdc3 and Cdc12 were found to participate in coiled coil (CC) interactions that serve as cross-bracing within each hetero-octamer and that also provide contacts responsible for the lateral pairing of septin filaments ). As the cell cycle proceeds, the hourglass-shaped septin-based collarlike structure at the bud neck undergoes a transition to a split (double ring) structure concomitant with the onset of cytokinesis (Bertin et al. 2008(Bertin et al. , 2012Garcia et al. 2011;McMurray et al. 2011). Within each hetero-octamer, there are alternating interfaces between neighboring subunits deduced from crystallized septin complexes: the G interface, in which the GTP/GDP-binding pockets in each subunit face each other; and the NC interface, wherein helical elements within the N-and C-terminal sequences that are proximal and distal, respectively, to the G domain face each other (Sirajuddin et al. 
2007(Sirajuddin et al. , 2009Ong et al. 2014;Brausemann et al. 2016). In a hetero-octamer, the central Cdc10-Cdc10 pair associates via an NC interaction, whereas each Cdc10 associates with its flanking Cdc3 via a G interface, and so forth. Across eukaryotes (with the exception of higher plants, which lack septins), the number of septin subunits varies-for example, one in the green alga Chlamydomonas reinhardtii, two in the nematode Caenorhabditis elegans, five in the fruit fly Drosophila. melanogaster, and 13 in Homo sapiens, which are differentially expressed in specific cell types and tissues (Field et al. 1996;Adam et al. 2000;Nguyen et al. 2000;Kinoshita 2003;John et al. 2007;Cao et al. 2009;Nishihama et al. 2011;Pinto et al. 2017). However, the hetero-octameric complex with distinct subunits occupying specific positions within the structure has been conserved from yeast to humans (Bertin et al. 2008;McMurray and Thorner 2019;Mendonca et al. 2019;Soroor et al. 2020). Phylogenetic analyses indicate that during fungal and metazoan evolution gene duplications gave rise to the current repertoire of septin subunits (Pan et al. 2007;Nishihama et al. 2011;Auxier et al. 2019). Such increases in biological complexity across deep evolutionary time wherein a multi-subunit complex acquires additional functional components through gene duplication and divergence have clearly occurred in other instances, including the V-type ATPase (Finnigan et al. 2011(Finnigan et al. , 2012, the proteasome (Wollenberg and Swaffield 2001), the TRiC/CCT chaperonin (Gestaut et al. 2019), and the NADH:ubiquinone oxidoreductase (Gabaldon et al. 2005). However, how inclusion of a newly duplicated protein within an existing multi-protein ensemble occurs is more challenging to explain for a non-essential subunit (such as Shs1 in the yeast septin hetero-octamer) that has been maintained rather than pseudogenized and lost. In addition, the molecular evolution of two subunits, both occupying the same position within a complex structure, presents a number of biochemical constraints. In the case of the terminal septin subunits, both Cdc11 and Shs1 must retain the ability to bind guanine nucleotide as well as the capacity to associate with the penultimate subunit Cdc12 via a G interface. On the other hand, whether to preserve the capacity for homotypic NC interface interaction, which supports formation of paired linear filaments (as exhibited by Cdc11), or to evolve the capacity for heterotypic interaction (such as exhibited by Shs1-and Cdc11capped hetero-octamers) and thereby acquire the capacity to form more complex geometric arrangements in higher order structures leaves room for why the advent of Shs1 may have provided some selective advantage. Viewed in this light, Cdc11 and Shs1 provide a unique opportunity to conduct an analysis grounded in evolutionary principles to address questions relating to how a new subunit arising from the duplication of a pre-existing one is first tolerated, retains the capacity for integration into a complex structure, and diverges to confer new properties without disrupting essential functions. Understanding how protein complexes have increased in complexity through evolutionary time remains a critical task for multiple fields of study. 
A detailed mechanistic history of how protein complexes, protein-protein interfaces, and specific protein domains evolve can provide not only a proper, "vertical" historical context for current day experimental comparisons of existing proteins (Merkl and Sterner 2016), but may someday have predictive power for understanding protein evolution within rapidly evolving species such as micro-organisms. Toward these ends, in this study, we utilized ancestral gene reconstruction (Thornton 2004) (AGR) to predict, generate, and test in modern S. cerevisiae cells the assembly, localization, and function(s) of the pre-duplicated ancestral subunit (termed "Anc.11-S") of Cdc11 and Shs1, as well as four additional ancestors and three modern fungal septins. Our study determined that Anc.11-S can partially replace modern Cdc11 in yeast yet was unable to form productive heterotypic interfaces with Shs1. Furthermore, all tested ancestral and fungal septins seemed to be able to associate with Cdc12 through the G interface, albeit with very different apparent affinities. Evolution of the Shs1 subunit involves multiple distinct changes including early loss of homotypic Shs1-Shs1 interactions, development of a distinct G domain, development of an optimized Cdc11-Shs1 heterotypic interaction, and very recent evolution of a modern function of its CTE. These findings are the first to highlight the complex evolution of a unique pair of essential/non-essential components in a highly conserved multi-protein complex. Materials and methods In silico reconstruction of ancestral protein sequences Putative septin orthologs of budding yeast Cdc11 or Shs1 within the fungal kingdom were identified using BLAST (NCBI); these are listed in Supplementary Table S1. Sequences were aligned using three separate methods: MUSCLE (Edgar 2004), MSAprobs (Liu et al. 2010), and PRANK Goldman 2005, 2008). For each alignment, ancestral protein sequences for all shared ancestors were inferred with maximum-likelihood phylogenetics, using PAML (Yang 2007) and PhyloBot (Hanson-Smith and Johnson 2016). All three approaches (Supplementary Figure S1) yielded a consensus sequence for Anc.11-S, with differences concentrated within the CTE domain. We chose to experimentally assay the ancestral sequences from the MUSCLE approach, as the total length of the protein was the longest of the three (418 residues) indicating that MUSCLE yielded the most conservative alignment of the three approaches. The posterior probabilities (PPs) from reconstructed sequences are summarized in Supplementary Table S2. For each ancestral gene, a set of residues with PP scores below a determined threshold were randomly sampled and individually tested in vivo compared to the original reconstruction; these findings will be presented in a separate manuscript. Yeast strains and plasmids Saccharomyces cerevisiae strains used in this study can be found in Table 1 and Supplementary Table S3 and plasmids used in this study can be found in Table 2. Reconstructed ancestral genes were generated by custom gene synthesis (Genscript) using a yeast codon bias and carried in plasmid pUC57. For all constructs, in vivo plasmid assembly was used to link together the necessary DNA components (promoter, coding regions, tags, terminators, and selection cassettes). A modified polymerase chain reaction (PCR)-based mutagenesis protocol (Zheng et al. 2004) was used to introduce substitutions prior to assembly. 
Briefly, a CEN-based plasmid was digested with a unique restriction site downstream of a cloned promoter sequence and co-transformed into yeast (standard lithium acetate-based protocol) (Gietz and Schiestl 2007) with the necessary amplified PCR fragments containing homology to adjacent sequences. Typically, a downstream drug-resistance cassette (Goldstein and McCusker 1999) was also included for additional selection purposes and for use in one-step chromosomal integration strategies. Placement of DNA constructs at the required genomic loci utilized upstream promoter sequence as well as the common MX-based terminator sequence present on selection cassettes. This "marker swapping" technique allowed for the integration of the entire gene fusion. Given that there is still the possibility for the marker cassette to swap without integration of the upstream sequence (using the identical MX promoter sequences), all integrations were confirmed using diagnostic PCRs to confirm the presence of the desired integrated DNA construct in addition to the switch in selection marker. Following in vivo plasmid assembly, constructs were confirmed further using either in-house (UC Berkeley DNA Sequencing Facility) or commercial (Genscript) Sanger DNA sequencing. Following chromosomal integration, modified loci were amplified by PCR, purified, and sequenced (Genscript). Sequences of all the DNAs used in this study can be found in Supplementary Figure S2. Fluorescence microscopy All plasmid-carrying strains were selected by streaking for single colonies at least twice on agar plates. Cultures were grown overnight at 30 C, back diluted into rich medium for 4.0 or 4.5 h at 30 C, harvested, and examined within 30 min at room temperature under a fluorescence microscope (Leica, model DMI6000; Leica Microsystems, Buffalo Grove, IL, USA), equipped with a 100Â lens and appropriate cutoff filters for visualization of GFP and mCherry (monomeric red fluorescent protein derivative) fluorescence (Semrock), and images were acquired using a Leica DFC340 FX camera. Image capture and analysis was performed using software from the Leica Microsystems Application Suite and ImageJ (Schindelin et al. 2015). Images were captured using identical exposure times and evaluated in a single-blind manner. Representative images for each strain are shown and rescaled in the same way; adjustment of contrast was done per individual image. Data availability The authors will make available the reagents (DNA plasmids or yeast strains) and/or datasets used to confirm the conclusions of this manuscript upon reasonable request. A Supplementary file S1 is available at FigShare and contains DNA sequences used, additional tables, and additional figures. Evolution of the terminal septins Cdc11 and Shs1 within fungi From available fungal genome sequences, orthologs of Cdc11 or Shs1 were collected (Supplementary Table S1) and a phylogeny was constructed using the parameters and algorithms in MUSCLE (Edgar 2004) ( Figure 1A). Fungal septins closely related to, but distinct from, Cdc11 and Shs1 served as an outgroup. Three prediction programs: MUSCLE (Edgar 2004), MSAprobs (Liu et al. 2010), and PRANK Goldman 2005, 2008) then were used to deduce a pre-duplication ancestor, dubbed Anc.11-S, and the resulting inferred sequences were compared (Supplementary Figure S1). 
All three programs provided an overall consensus sequence for Anc.11-S, with the major differences within the CTE and lacking, in particular, the inserts in the G domain that are present in modern Shs1 (Figure 1B). Indeed, most of the apparent Shs1 counterparts in other fungi have no (or only much smaller) insertions at these positions. Hence, parsimony suggests that these inserts were absent initially and were acquired during the evolutionary trajectory toward modern S. cerevisiae Shs1, as will be discussed later. We chose the Anc.11-S deduced by MUSCLE as representative of the most likely common ancestor for several reasons: (1) the total length of the predicted protein (418 residues) was the longest of the three (Supplementary Figure S1) and (2) it lacked gaps within its predicted CTE (Supplementary Figure S1). In the same way, we also predicted, constructed, and studied a likely, most recent common ancestor to all Cdc11-like subunits (Anc.11) and a likely, most recent common ancestor to all Shs1-like subunits (Anc.S), as well as two likely intermediates (Anc.S1 and Anc.S2) within the lineage leading to modern budding yeast Shs1 (Table 1, Supplementary Tables S2 and S3, and Supplementary Figure S3). (Notes to Table 1: abbreviations of the fungal genus and species are included for all modified genes: S.c., Saccharomyces cerevisiae; C.g., Candida glabrata; C.a., Candida albicans; A.g., Ashbya gossypii; and S.p., Schizosaccharomyces pombe. The URA3-based covering vector (expressing WT CDC11) was removed by multiple rounds of selection on medium containing 5-FOA. The GFP sequence used in these fusions includes an N-terminal linker sequence of GRRIPGLIN as well as F64L and S65T substitutions. The C. albicans protein is missing the residue N613 (which exists within a stretch of consecutive Asn residues) within this strain; the total septin protein size is therefore expected to be 665 amino acids, not 666.) Alignment of Anc.11-S with S. cerevisiae Cdc11 and Shs1 revealed that 29% of the predicted ancestral residues are retained in both modern S. cerevisiae subunits and that 50% of the residues in the predicted ancestor are identical or similar to at least one of those modern subunits (Figure 1B). As noted above, compared to either the predicted Anc.11-S or modern S. cerevisiae Cdc11, modern S. cerevisiae Shs1 has some extended loops, two within its G domain and two within its CTE (Figure 1B). With regard to the former, our predictive analysis suggests that the origins of the 35-residue insertion after position 41 first appeared early in the trajectory (Anc.S) to modern S. cerevisiae Shs1 (Supplementary Figure S3). With regard to the latter, our prior mutational analysis (Finnigan, Takagi, et al. 2015) has already demonstrated that the inserts found in the CTE of Shs1 (prior to the predicted CC region) are not required for its unique functions in vivo. The approaches used here for AGR provide a confidence metric (PP) for each predicted residue. For the residues in Anc.11-S, 67% were predicted with a confidence level of 0.60 or higher, and similar levels were found for each of the other deduced ancestral sequences (Supplementary Table S2).
Visualization of these confidence levels across all residues for each ancestral protein (Supplementary Figure S4) revealed a number of common patterns: (1) poorly supported residues at the extreme N-terminus of each ancestor; (2) poorly supported residues within the segment of the CTEs that is most proximal (and presumably just serving as a linker) to the G domain; and (3) in the lineage toward Shs1, poorly supported residues at two sites within the G domain corresponding to the position of inserts, such as the 35-residue loop in modern S. cerevisiae Shs1. Importantly, however, these patterns also include the appearance through evolutionary time of strongly supported residue clusters. For example, residues poorly supported at the extreme C-terminal ends of Anc.11-S and Anc.11 acquire substantial and strongly conserved appendages diagnostic of the Shs1 lineage (Supplementary Figure S4).
Anc.11-S is able to partially replace modern yeast Cdc11
DNA encoding an optimized version (i.e. using modern S. cerevisiae codon usage bias) of each predicted ancient septin was synthesized de novo, C-terminally tagged in-frame with the coding sequence for either mCherry or GFP, cloned under control of the natural CDC11 or SHS1 promoter, and inserted and expressed from the corresponding native chromosomal locus in place of the endogenous gene in S. cerevisiae. We first examined whether or not integrated copies of Anc.11-S, Anc.11, or Anc.S could substitute for the function of modern S. cerevisiae Cdc11 (Figure 2). Budding yeast lacking Cdc11, but expressing Shs1, is inviable (Finnigan, Takagi, et al. 2015); hence, in all cases, for these strain constructions and growth assays, the Cdc11 deficiency was covered by a URA3-marked plasmid expressing wild-type (WT) S. cerevisiae CDC11 to maintain viability. The capacity of each construct to support growth could then be tested by selecting for loss of the URA3-marked plasmid on medium containing 5-FOA (Boeke et al. 1984). Serial dilutions of the strains to be tested were spotted onto either a permissive medium or a medium containing 5-FOA. The cells expressing Cdc11-mCherry and Shs1-GFP (positive control) remained viable in the absence of the CDC11 plasmid, whereas the cdc11Δ SHS1 strain (negative control) was inviable when the CDC11 plasmid was absent, as expected (Figure 2A, lanes 1 and 2). Like cells expressing Cdc11-mCherry, we found that cells expressing either Anc.11-S-GFP or Anc.11-mCherry were viable in the absence of the CDC11 plasmid (Figure 2A, lanes 3 and 4), whereas cells expressing Anc.S-mCherry were unable to grow (Figure 2A, lane 5). Thus, Anc.11-S and Anc.11 (but not Anc.S) were able to substitute for modern S. cerevisiae Cdc11 based on this growth assay.
(Notes to Table 2: The GFP sequence used in these fusions includes an N-terminal linker sequence of GRRIPGLIN as well as F64L and S65T substitutions. Ancestral (abbreviated "Anc") genes were synthesized de novo with a yeast codon bias. The commonly used prMX and MX(t) sequences were included flanking all drug-resistance cassettes (e.g. NatR). Abbreviations of the fungal genus and species are included for SHS1 and CDC11 genes: S.c., Saccharomyces cerevisiae; C.g., Candida glabrata; C.a., Candida albicans; and A.g., Ashbya gossypii. The S.c. SHS1 gene has a silent substitution within codon 314 (glycine).)
Figure 1. (A) Phylogeny of the fungal Cdc11 and Shs1 orthologs listed in Supplementary Table S1. Sequences were aligned using MUSCLE (Edgar 2004). Branch support expresses approximate likelihood ratio test statistics (Anisimova and Gascuel 2006; Anisimova et al. 2011), interpreted as the ratio increase in model support for the existence of the branch relative to the next-best model in which the branch does not exist. The Cdc11 lineage is colored in blue, whereas the Shs1 lineage is colored in orange. The position of five reconstructed ancestral proteins is noted. (B) Alignment of budding yeast Cdc11 (top) and Shs1 (bottom), and their predicted common Anc.11-S progenitor (middle), using CLUSTAL W (Thompson et al. 1994). Identities (white letter in a black box) among all three, and similarities (bold letter in a gray box) where two of the three are identical or share standard conservative substitutions, as well as inserts (yellow) of the indicated length (number of residues in parentheses) present within Shs1, are indicated. Above the alignment are structural elements, based on (1) the crystal structure of an N- and C-terminally truncated version of S. cerevisiae Cdc11 (residues 20 to 298) determined at ~3 Å resolution (Brausemann et al. 2016); (2) mammalian SEPT2 (Sirajuddin et al. 2007, 2009), because the Cdc11 structure was solved by molecular replacement and refined using the crystal structure of SEPT2 as the model; and (3) prior sequence alignments, structural predictions, and mutational analysis of both Cdc11 and Shs1 (Versele and Thorner 2004; Finnigan, Booth, et al. 2015; Finnigan, Takagi, et al. 2015). Septins possess sequence elements required for GTP binding that are conserved among all members of the Ras-related super-family (highlighted within red boxes), dubbed the P-loop (G1), Switch I (G2), Switch II (G3), G4 and G5 motifs (Sprang 1997; Wittinghofer and Vetter 2011).
We also examined the cell morphology of strains harboring either Anc.11-S or Anc.11 in place of yeast Cdc11. Compared to a WT strain expressing an integrated copy of Cdc11-mCherry, cells expressing either ancestral subunit appeared similar in shape and size (Supplementary Figure S5). However, there was a subpopulation that appeared to have elongated cell morphologies, suggesting that Anc.11-S or Anc.11 cannot provide a full replacement of modern Cdc11 (Supplementary Figure S5). Nonetheless, we find it quite remarkable that this predicted ancient progenitor possesses the capacity to interface with modern septins sufficiently well to maintain viability, support a normal growth rate, and exhibit near-normal morphology in the majority of the cells. To determine whether the presence of Shs1 contributed to the ability of either Anc.11-S or Anc.11 to function in place of Cdc11, we also tested the same three ancestral subunits in a strain lacking both CDC11 and SHS1. Previous work found that cells carrying a cdc11Δ shs1Δ double deletion, rather than being inviable, are able to grow, albeit more slowly than normal cells and with an aberrant, markedly elongated and branched morphology, which also manifests at the macroscopic level as an altered colony morphology (Supplementary Figure S6). We were able to readily reproduce those findings (Figure 2B, lane 2). The explanation for the viability of cells lacking both Cdc11 and Shs1 is that the remaining septin hetero-hexamers are still able to form rudimentary filaments via a non-native Cdc12-Cdc12 G interface association. When only Shs1 is present, it binds to Cdc12, forming hetero-octamers, but Shs1 is unable to self-associate via an NC interface (Booth et al. 2015; Finnigan, Takagi, et al.
2015); hence, no filaments can assemble and the cells are inviable. In contrast, when only Cdc11 is present, it binds to Cdc12, restoring hetero-octamer formation and mediating filament assembly via a robust Cdc11-Cdc11 NC interface, and thus the cells are viable McMurray et al. 2011). The non-native homotypic interaction between Cdc12capped hetero-hexamers can be prevented by a mutation (W267A) that disrupts the G interface , and we confirmed that cdc12D shs1D cells carrying a cdc12(W267A) allele are indeed inviable ( Figure 2B, lane 3). Most importantly, we found that, in cells lacking both Cdc11 and Shs1, expression of either Anc.11-S or Anc.11 was able to support normal growth ( Figure 2B, lanes 4 and 5), as would be expected for authentic S. cerevisiae Cdc11 and with a spot morphology resembling that of the control cells ( Figure 2B, lane 1) rather than that of the cdc11D shs1D cells ( Figure 2B, lane 2). These findings demonstrate that Anc.11-S and Anc.11 can partially replace yeast Cdc11 in either the presence or absence of modern Shs1. Equally as telling, we found that the expression of Anc.S-mCherry in cdc11D shs1D cells still resulted in no growth ( Figure 2B, lane 6), as might be expected for authentic S. cerevisiae Shs1. At the very least, this result indicates that the Anc.S subunit is, in fact, produced and must associate with Cdc12, thereby preventing any homotypic Cdc12-Cdc12 interaction. If Anc.S were not actually produced, or not properly folded, or failed to interact with Cdc12, then these cells would have been viable, like the cdc11D shs1D strain itself. To ensure that the observed results were not influenced by the choice of promoter used for expression (as there is no predicted ancient promoter sequence), we tested the functions of Anc.11-S and Anc.S, each driven by either the CDC11 promoter or the SHS1 promoter at their native genomic loci. Regardless of their mode of expression, identical results were obtained for both proteins when each ancestral subunit was expressed in cdc11D shs1D cells ( Figure 2C). Therefore, the stark difference in their observed phenotypes cannot be attributed to any difference in expression due to the promoters used or to the genomic location from which they were produced. Finnigan, Takagi, et al. 2015). Fortunately, we were able to devise previously three, different "sensitized" genetic backgrounds in which the presence of Shs1 is required for cell survival (Finnigan, Takagi, et al. 2015). Thus, despite being "non-essential" in normal S. cerevisiae cells, these three reporter strains permitted analysis of functional elements in Shs1 (Finnigan, Takagi, et al. 2015), identification of some of its interaction partners (Finnigan, Booth, et al. 2015), and inferences about its unique contributions to optimal cell function (Egelhofer et al. 2008;Buttery et al. 2012;Meseroll et al. 2012). In the first of these special strains, Cdc10 (the central subunit of septin hetero-octamers) is absent. Under standard laboratory conditions (glucose as the carbon source and 30 C), this strain is inviable. However, prior work showed that on galactose medium at 22 C, cdc10D cells are able to grow (McMurray et al. 2011). The mechanistic explanation, at least in part, for this behavior was determined to be that, under those specific growth conditions, Cdc11-Cdc12-Cdc3 hetero-trimers assemble, associate via a nonnative homotypic Cdc3-Cdc3 interaction, and the resulting hetero-hexamers are able to form rudimentary filaments and thereby support growth. 
However, we found that, in this context, survival of the cells requires the presence of Shs1 (Finnigan, Takagi, et al. 2015) (Figure 3A, lanes 1 and 2). In this case, viability during strain construction was maintained by a URA3-marked plasmid expressing WT S. cerevisiae CDC10 and the capacity of any construct to support growth could then be tested by selecting for loss of the URA3-marked plasmid on 5-FOA medium. Unlike modern S. cerevisiae Shs1, expression of Anc.11-S or Anc.S did not rescue the inviability of cdc10D cells lacking endogenous Shs1 ( Figure 3A, lanes 3 and 4). The second sensitized genetic background in which we found that presence of Shs1 was essential for viability was in cells expressing a C-terminally truncated cdc11 allele, Cdc11(D357-415), tagged at its C terminus with mCherry as the sole source of Cdc11 (Finnigan, Takagi, et al. 2015). In this case, viability during strain construction was maintained by a URA3-marked plasmid expressing WT S. cerevisiae CDC11. In this context too, unlike modern Shs1 ( Figure 3B, lane 2), expression of Anc.11-S or Anc.S was unable to rescue the inviability of the cells expressing Cdc11(D357-415)-mCherry ( Figure 3B, lanes 3 and 4). The third background in which the presence of Shs1 is required for normal growth is in cells carrying a temperaturesensitive cdc12 allele, cdc12-6, incubated at what would otherwise be a permissive temperature (22 C) ( Figure 3C, lanes 1 and 2). It has been shown elsewhere that although cdc12-6 cells are able to survive at the lower temperature, they become inviable at a higher temperature (37 C) because their septin filaments disassemble (Johnson et al. 2015). In this case, viability during strain construction was maintained by a URA3-marked plasmid expressing WT S. cerevisiae CDC12. As in the other two sensitized backgrounds, Anc.11-S or Anc.S could not behave like modern S. cerevisiae Shs1 ( Figure 3C, lanes 3 and 4). Thus, these data indicate that neither of these predicted progenitors (the original preduplicated ancestor and the most recent common ancestor to all Shs1-like septins) has yet acquired the full panoply of unique characteristics that define modern Shs1. Ancestral septins assemble into the septin collar at the bud neck To rule out in an independent way that any lack of functional complementation for any trait examined was due to lack of incorporation of the reconstructed ancestral protein of interest into septin-based structures, we examined localization of Anc.11-S and Anc.S tagged at their C terminus with GFP by live cell imaging using fluorescence microscopy ( Figure 4). To mark the location of septin-based structures unequivocally, these cells also expressed an integrated copy of Cdc10-mCherry. To maintain uniform conditions, because expression of Anc.S in cells lacking both Cdc11 and Shs1 does not support growth (Figure 2), we chose to examine expression and localization of these proteins in a CDC11 shs1D strain. We found that, just like authentic Shs1-GFP (expressed under the SHS1 promoter on a low-copy plasmid) (Figure 4, top panels), both Anc.11-S-GFP (Figure 4, middle panels) and Anc.S-GFP (Figure 4, bottom panels) localized prominently to the bud neck in dividing cells and completely congruently with the Cdc10-mCherry marker (despite the presence of endogenous Cdc11, which might have been expected to compete with the ancestral proteins for binding to Cdc12). 
The same pattern was observed for Anc.11-S-GFP and Anc.S-GFP in cells where the septin collar was marked by expression of an integrated copy of Cdc11-mCherry (Supplementary Figure S7). Similarly, in cells where the septin collar at the bud neck was marked with Shs1-GFP, Anc.11-mCherry also localized prominently to the bud neck, even though Cdc11 was also present (Supplementary Figure S7). In the case of Anc.S-GFP, there was a somewhat higher level of diffuse fluorescence in the cytosol than for the other two ancestral proteins (Figure 4 and Supplementary Figure S7). Overall, these observations indicate that all three ancestral proteins are incorporated well into the septin super-structure at the bud neck, and thus are able to compete for occupancy with their modern septin counterparts, presumably because each is able to associate with Cdc12 via its G interface. Moreover, for Anc.S, the collective data up to this point demonstrate that there must be in vivo function(s) of Shs1-like septins that are separable from assembly into and localization within the septin collar at the bud neck, as we have documented for modern Shs1 itself (Finnigan, Booth, et al. 2015; Finnigan, Takagi, et al. 2015).
Figure 4. Reconstructed ancestral septins localize to the yeast bud neck congruent with endogenous septins. A CDC11 shs1Δ strain (GFY-6) expressing Cdc10-mCherry from the chromosomal CDC10 locus to mark the location of the septin collar at the bud neck was transformed with plasmids expressing either S. cerevisiae Shs1-GFP (pGF-preIVL-59) (top panels), Anc.11-S-GFP (pGF-IVL-159) (middle panels), or Anc.S-GFP (pGF-IVL-168) (bottom panels). The cultures were incubated overnight in SD-LEU at 30°C, back-diluted into YPD, grown for an additional 4.5 h at 30°C, washed with water, and visualized under white light by Nomarski optics (differential interference contrast (DIC), leftmost images) and by fluorescence microscopy with appropriate cutoff filters to detect mCherry (middle images) and GFP (rightmost images), respectively. Representative images, adjusted using ImageJ, are shown. Faint dotted white lines demarcate the cell periphery. Scale bar, 3 µm.
Interaction of ancestral septins with extant subunits within and between hetero-octamers
To investigate how Anc.11-S and Anc.11 were participating in contacts within and between septin hetero-octamers, we utilized a previously studied septin allele (Bertin et al. 2008) that deletes an alpha helix (α0), corresponding to residues 2-18 in both modern Cdc11 and Shs1, situated just upstream of their G domain. This segment contains residues that participate in contacts important for formation of a fully functional NC interface (Sirajuddin et al. 2007; McMurray et al. 2011). Shs1 is not essential for growth or filament formation under most conditions, and it has been demonstrated that end-to-end contacts between Cdc11-capped hetero-octamers mediated by formation of homotypic Cdc11-Cdc11 NC interfaces are necessary and sufficient for filament formation both in vivo and in vitro (McMurray et al. 2011). Thus, when present, how is Shs1 incorporated into the septin super-structure at the bud neck? Because both in vivo and in vitro studies suggest that homotypic Shs1-Shs1 NC interaction does not occur (Booth et al. 2015; Finnigan, Takagi, et al. 2015), one possibility to explain how an Shs1-capped hetero-octamer is assembled into filaments is that heterotypic Shs1-Cdc11 NC junctions can form between the Shs1-capped end of one hetero-octamer and the Cdc11-capped end of another hetero-octamer. In support of this possibility, we have found that when the sole source of Cdc11 is a Cdc11(Δα0) mutant, which perturbs its NC interface, the cells are viable when they also express Shs1, but not when Shs1 is absent or when an Shs1(Δα0) mutant is co-expressed (Supplementary Figure S8A). Thus, homotypic Cdc11(Δα0)-Cdc11(Δα0) interactions alone are too weak to promote sufficient filament formation to maintain viability, whereas the Cdc11(Δα0)-Shs1 NC interaction must retain the capacity to do so. Additional support for the role of heterotypic Cdc11-Shs1 interactions in bolstering filament formation is provided by our finding that, under the conditions where cdc10Δ cells are unable to grow in the absence of Shs1, they are able to grow when Shs1(Δα0) is present (Supplementary Figure S8B), presumably because, like the Cdc11(Δα0)-Shs1 interaction, the reciprocal Cdc11-Shs1(Δα0) interaction retains some ability to promote filament formation and/or stability. These observations provided a means to examine whether Anc.11-S or Anc.11 was able to form a heterotypic junction with Shs1 when present as the sole source of a Cdc11-like septin. Again, as before, viability during strain construction was maintained by a URA3-marked plasmid expressing WT S. cerevisiae CDC11. We found that, unlike cells expressing modern Cdc11 (Figure 5A, lanes 1 and 2), cells expressing Anc.11-S(Δα0) were inviable in both the presence and the absence of SHS1 (Figure 5A, lanes 3 and 4), as well as when co-expressed with either Cdc11(Δα0) or Shs1(Δα0) (Figure 5A, lanes 5 and 6). These findings indicated that, unlike modern yeast Cdc11, the pre-duplicated ancestor does not form (productive) heterotypic Anc.11-S-Shs1 interactions in vivo, despite its capacity to form homotypic Anc.11-S-Anc.11-S junctions, as reflected in its ability to substitute for modern Cdc11 in the absence of Shs1 (Figure 2). Tellingly, however, Anc.11(Δα0) tested in the same way was able to support weak, but readily detectable, growth when Shs1 was present (Figure 5B, lane 3), but not when it was absent (Figure 5B, lane 4) or when paired with Shs1(Δα0) (Figure 5B, lane 5). This observation suggests that early on the trajectory from the pre-duplication ancestor to modern Cdc11, the capacity to form a heterotypic junction with an Shs1-like counterpart emerged. Previous studies (Nagaraj et al. 2008; Weems et al. 2014) have demonstrated that a G29D mutation in the P-loop of the G domain of Cdc11 (Supplementary Figure S9) weakens a contact important for formation of a fully functional G interface between Cdc11 and Cdc12. This perturbation does not prevent Cdc11 recruitment to the end of a hetero-octamer under normal growth conditions (30°C), but does compromise this contact, and likely the overall structure of Cdc11, sufficiently to cause cells containing this allele to be inviable at high temperature (37°C). Indeed, we have demonstrated previously that even at the permissive temperature, and unlike a cdc11Δ shs1Δ strain, cdc11Δ shs1Δ cells expressing Cdc11(G29D) are inviable (Finnigan, Takagi, et al. 2015), suggesting that Cdc11(G29D) is still able to cap the end of a hetero-octamer, but has such an altered conformation that it is unable to form normal homotypic Cdc11-Cdc11 NC interfaces.
In marked contrast, cdc11D shs1D cells expressing the equivalent variant of Shs1, Shs1(G30D), are viable (Finnigan, Takagi, et al. 2015), indicating that this septin is unable to associate with Cdc12. These observations provided a basis to test whether a derivative of Anc.11-S carrying the equivalent mutation, Anc.11-S(G30D), would likewise still retain the capacity to associate with Cdc12 in the hetero-hexamers present in cdc11D shs1D cells and thus behave more like modern Cdc11, or be unable to associate with Cdc12 and thus behave more like modern Shs1. We found that cdc11D shs1D cells expressing Anc.11-S(G30D) were indeed inviable ( Figure 5C, lane 3). When Anc.11-S(G30D) was paired with either Cdc11(G29D) or Shs1(G30D), the strains remained inviable ( Figure 5C, lanes 4 and 5), as expected. Evolution of the G domain and CTE of Shs1 Four of the five mitotically expressed yeast septins (excluding Cdc10) contain prominent CTEs whose sequences each contain a presumptive alpha-helical segment with a strongly predicted propensity to form a CC (Barth et al. 2008;Meseroll et al. 2012;Sala et al. 2016). Previous work Bertin et al. 2008Bertin et al. , 2010 has shown that the CTEs of Cdc3 and Cdc12 form a parallel CC that helps stabilize hetero-octamers and also forms an antiparallel four-helix bundle with its counterpart in a neighboring filament to form the cross-bridges responsible for filament pairing. Along these lines, we showed previously (Finnigan, Takagi, et al. 2015) that neither Cdc3 nor Cdc12 could tolerate deletions of the linker region in their CTE that separates their CC from their G domain, whereas both Shs1 and Cdc11 were able to endure large deletions of the corresponding regions [e.g. Shs1(D342-436) and Cdc11(D301-357)] and still retain full function in vivo. Even more strikingly, we previously demonstrated (Finnigan, Takagi, et al. 2015) that the CTEs of Shs1 and Cdc11 could be swapped; expression of a chimera between the G domain of one terminal subunit fused to the CTE of its paralog allowed for retention of the function of that CTE and subsequent growth, but not if the same CTE was appended to the central subunit Cdc10. Thus, the function(s) of the CTEs of Cdc11 and Shs1 are separable and able to function "in trans," as long as they are located at the terminal end of a hetero-octamer. The CTE of modern Shs1 is the longest of any of the four extant S. cerevisiae septins that have a CTE. To study the evolution of modern yeast Shs1, we generated strains expressing (1) a fulllength septin of interest, (2) a chimera between the G domain of S. cerevisiae Shs1 and the CTE of our predicted ancestral septins (or from a different extant fungal species), and (3) the reciprocal fusion between the CTE of S. cerevisiae Shs1 and the G domain of a different subunit. For existing modern Shs1-like septins from other fungal species, we chose Candida glabrata, Ashbya (now Eremothecium) gossypii and Candida albicans (Supplementary Table S3). Each construct was integrated into either a cdc10D shs1D strain (covered initially by a URA3-marked CDC10 plasmid) or a cdc11D shs1D strain (covered initially by a URA3-marked CDC11 plasmid) and expressed from the native SHS1 promoter at its normal chromosomal locus. 
Together, these 46 strains provided information and insight on the ability of each construct to associate with Cdc12, form homotypic interactions between hetero-octamers, and exhibit the unique properties that are attributable to modern Shs1 in budding yeast and the degree of functional divergence of the apparent Shs1 orthologs in other distant yeast species. Expression of these constructs in the sensitized cdc10D background (on galactose medium at 22 C), which is inviable in the absence of Shs1 ( Figure 3A and Figure 6A, lanes 1 and 2), was tested to determine whether any of them was capable of fulfilling the functions of current-day S. cerevisiae Shs1. Turning first to extant Shs1 orthologs, C. glabrata Shs1 was able to support only very weak growth when present in place of S. cerevisiae but functioned significantly better when its own CTE was replaced with the CTE of S. cerevisiae Shs1 ( Figure 6A, lanes 3 and 4). Revealing, the most robust rescue was observed when the CTE of S. cerevisiae Shs1 was replaced with the CTE of C. glabrata ( Figure 6A, lane 5). Taken together these results suggest that the CTE of C. glabrata is functionally equivalent to that of the CTE of endogenous Shs1, and the poor ability of C. glabrata Shs1 to complement on its own is likely due to its G domain does not form (1) productive contacts with S. cerevisiae Cdc12 and/or (2) optimal NC interface contacts with S. cerevisiae Cdc11 between neighboring hexamers in cdc10D yeast. Similarly, as we have previously documented (Finnigan, Takagi, et al. 2015), when the CTE of A. gossypii Shs1 was substituted for the CTE of Shs1, it supported vigorous growth, whereas neither A. gossypii Shs1 alone nor when the CTE of A. gossypii Shs1 was replaced with that from S. cerevisiae could do so ( Figure 6A, lanes 6-8). Thus, when conveyed to S. cerevisiae Cdc12 via the G domain of S. cerevisiae Shs1, the CTE of A. gossypii clearly could supply near-normal Shs1 function, but its own G domain has lost this ability. The most extreme case we examined was the apparent Shs1 ortholog from C. albicans ( Figure 6A, lanes 9-11); it is clear that the C. albicans CTE does not contain the characteristic functions of S. cerevisiae Shs1. Turning to the predicted ancestral proteins, we found that, with regard to behaving like S. cerevisiae Shs1, neither Anc-11.S nor Anc.S had the capacity to do so ( Figure 6A, lanes 12-17), akin to the C. albicans Shs1 ortholog. In contrast, although Anc.S1 itself could not maintain cell viability, when the CTE of Anc.S1 was replaced with the CTE of modern S. cerevisiae Shs1, some very poor, but reproducible, growth was observed ( Figure 6A, lanes 18 and 19), suggesting a gradual shift away from the Anc.S identity. Moreover, even when brought to S. cerevisiae Cdc12 by the G domain of S. cerevisiae Shs1, it is clear that the CTE of Anc.S1 has not acquired modern functionality ( Figure 6A, lane 20). In distinct contrast, Anc.S2 itself was able to complement the loss of S. cerevisiae Shs1 rather well ( Figure 6A, lane 21), even slightly better than the C. glabrata Shs1 ortholog, and its CTE has acquired, at least partially, the functionality of the CTE of modern S. cerevisiae Shs1 ( Figure 6A, lanes 22 and 23). Thus, by these criteria, the fully functional roles of budding yeast Shs1 seem to have arisen rather recently in the evolution of modern S. cerevisiae Shs1. 
Expression of each of the constructs as a source of Shs1 in the cdc11D background ( Figure 6B) assessed whether any subunit was able to associate with extant Cdc12 and mediate sufficient filament formation to maintain viability. Of the 21 proteins tested, only full-length Anc.11-S and Anc.11-S in which its CTE was replaced by the CTE of modern S. cerevisiae Shs1 supported growth ( Figure 6B, lanes 12 and 13). However, there was a subtle difference in the colony morphology between these two strains: yeast expressing the Shs1 CTE replacement on Anc.11-S appeared to have a rougher colony edge, but not as pronounced as cdc11D shs1D yeast (Figure 2). This may result from the inability of the modern Shs1 CTE domain to contribute to assembly and/or function within hetero-octamers capped exclusively by the Anc.11-S subunit. Thus, the G domain (residues 1-301) of the preduplication progenitor possesses the capacity to form a G interface with extant Cdc12 and to self-associate via homotypic Anc.11-S-Anc.11-S NC interfaces to promote the assembly of Anc.11-S-capped hetero-octamers into filaments (and its CTE may be dispensable for these functions). As we have previously observed, strains expressing A. gossypii constructs (Finnigan, Takagi, et al. 2015) or any other extant or ancestral septin subunit ( Figure 6B) were unable to maintain cell viability in this genetic background. However, it remains unclear whether there are differences in gene expression, protein stability, or assembly within the octamer for these constructs that may explain the inability to form functional septin filaments. Together, these findings provide another piece of independent evidence indicating that the capacity to mediate homotypic NC association seems to have been lost very early on in the divergence of Shs1 from Cdc11. Overexpression reveals differential affinities for septin-Cdc12 G interface formation Expression of a protein or any of its variants from an endogenous promoter at its normal chromosomal locus is the most stringent and physiologically meaningful way in which to test biological function. However, retention of partial function can often be uncovered by examining whether any of the same set of proteins is able to function when overexpressed because, as expected from the Law of Mass Action, the effects of a weakened interface can be overcome by raising the concentration of one of the components, thereby pushing the equilibrium toward complex formation, especially in multi-protein ensembles (Sopko et al. 2006). A previous dosage screen (Sopko et al. 2006) in S. cerevisiae had suggested that production of either terminal septin subunit at a very high level was toxic in otherwise normal cells. Indeed, when we overexpressed either Shs1 or Cdc11 in otherwise WT cells using the galactose-inducible S. cerevisiae GAL1/10 promoter, growth was markedly impeded (Figure 7, lanes 2 and 4). This growthinhibitory effect requires their ability to form a G interface because it was eliminated by equivalent P-loop mutations in each protein [Shs1(G30D) and Cdc11(G29D)] (Figure 7, lanes 3 and 5). 
Under these conditions, however, we cannot determine whether the G interface with Cdc12 in question is the cause of the toxicity, or non-native G-G homotypic association of the overproduced septin itself [which is often observed in vitro; for review, see McMurray and Thorner (2019)], or one or more unnatural heterotypic G-G associations with a different septin(s) with which it might not normally interact (which has sometimes been observed in vivo; McMurray et al. 2011). In any event, by this same criterion, the Shs1 orthologs of C. glabrata and A. gossypii have the capacity to form a G interface, likely with some extant S. cerevisiae septin (Figure 7, lanes 6-9), but the Shs1 ortholog of C. albicans does not in the context of otherwise WT yeast expressing both S. cerevisiae Cdc11 and Shs1 (Figure 7, lanes 10 and 11 and Supplementary Figure S10). By the same reasoning, among the predicted ancestral subunits, Anc.11-S behaves quite similarly to either Cdc11 or Shs1 (Figure 7, lanes 12 and 13), whereas the toxicities of overexpressed Anc.11 (Figure 7, lanes 14 and 15), Anc.S1 (Figure 7, lanes 18 and 19), and Anc.S2 (Figure 7, lanes 20 and 21) likely arise from other causes (e.g. aggregation or misfolding, perhaps). By contrast, Anc.S seems to exhibit only a weak capacity for G interface formation (Figure 7, lanes 16 and 17 and Supplementary Figure S10), similar to the C. albicans Shs1 ortholog in a strain also expressing WT Shs1 and Cdc11.

Figure 6. Analysis of the Shs1-like functions of apparent Shs1 paralogs from three distantly related yeast species and four predicted ancestral intermediates in the lineage to modern S. cerevisiae Shs1. (A) The sensitized Shs1-dependent cdc10D background was used to assess the properties of Shs1-like gene products from Candida glabrata (C.g.), Ashbya gossypii (A.g.) and Candida albicans (C.a.) and Anc.11-S, Anc.S, Anc.S1, and Anc.S2 (see Figure 1A). Strains (all initially harboring a URA3-marked covering plasmid expressing WT CDC10) were cultured overnight in YPGAL medium at 22 C, spotted onto plates in the absence and presence of 5-FOA, as indicated, and incubated for 5 days at 22 C before imaging. (B) Expression in a cdc11D shs1D strain background was used to assess the Shs1-like properties of the same gene products as in (A). Strains GFY-160, GFY-147, GFY-483, GFY-584, GFY-860, GFY-476, GFY-583 (all initially harboring a URA3-marked covering plasmid expressing WT CDC11) were grown overnight in SD-URA at 30 C, spotted in the absence and presence of 5-FOA, as indicated, and incubated for 3 days at 30 C. Red asterisk, the WT strain (GFY-160, lane 1) includes an integrated copy of CDC11-mCherry as a positive control. Strains harboring A. gossypii constructs were included for a complete comparison to other extant and ancestral subunits; these were tested in a previous study (Finnigan, Takagi, et al. 2015).

Discussion

Gene duplication events (at multiple scales) are an important source of new material to fuel the evolution of biological systems (Ohno 1970). When examining the evolution of large multimeric protein-based structures in eukaryotes, it is clear that duplication events have increased the number of individual polypeptides that assemble into the fully functional complex or oligomeric enzyme (Magadum et al. 2013; Copley 2020).
However, this trend might seem counter-productive, in that, in some organisms, a "simpler" version of the same protein complex has an identical function, yet makes do with fewer separate parts (Finnigan et al. 2011;Finnigan et al. 2012). Therefore, it is critical to understand this common tendency toward increased biological complexity at a detailed mechanistic level. Septin-based structures in eukaryotes have a deeply rooted evolutionary history and a highly conserved overall organization in the opisthokont lineage from single-celled yeast to humans (Nishihama et al. 2011;Auxier et al. 2019;McMurray and Thorner 2019). Yet, within any given fungal or mammalian organism (or, in metazoans, cell type), septin hetero-octamers can be assembled from alternative sets of subunits, which, it has been proposed, arose from gene duplication and divergence (Cao et al. 2009;Valadares et al. 2017). Ostensibly, this diversification has allowed different combinations of subunits associated with a common core structure to generate distinct supramolecular arrangements that fulfill separate physiological functions using the same underlying scaffold (Barral and Kinoshita 2008;Garcia et al. 2011Garcia et al. , 2016Vargas-Muñiz et al. 2016;Khan et al. 2018;Rosa et al. 2020). Septin structures erected during vegetative growth of the budding yeast S. cerevisiae (Farka sovsk y 2020; Marquardt et al. 2019) are assembled from two otherwise identical protomers: Cdc11capped hetero-octamers or Shs1-capped hetero-octamers. Although each is likely symmetric (i.e. possessing the same terminal subunit at each of its ends) (Khan et al. 2018), the possibility of a mixed hetero-octamer (i.e. with Cdc11 at one end and Shs1 at the other) has not been completely ruled out. In this study, we deduced, constructed, and tested the properties of a predicted likely common ancestor (Anc.11-S) of both Cdc11 and Shs1, as well as proposed representatives of likely intermediates on the trajectory to Cdc11 (Anc.11) and to Shs1 (Anc.S, Anc.S1, and Anc.S2). We found that, like modern Cdc11 itself, both Anc.11-S and Anc.11 were able to associate with the penultimate subunit (modern Cdc12) via their G interface and able to maintain cell viability, indicating that they must also self-associate via forming homotypic NC interfaces, thereby mediating polymerization of hetero-octamers into functional filaments. Thus, it appears that the capacity for promoting filament assembly was retained within the Cdc11 lineage ( Figure 8A). Preservation of such selfself interactions has been observed in other cases where complexity has increased due to gene duplication and divergence (Pereira-Leal and Teichmann 2005;Pereira-Leal et al. 2007). After duplication of the common ancestor, other potential arrangements (aside from 'Cdc11'-Cdc12 and 'Cdc11'-'Cdc11') became potential options, namely 'Shs1'-Cdc12, 'Shs1'-'Cdc11', and 'Shs1'-'Shs1'. With regard to the latter possibility, we found that, like modern Shs1 itself, neither Anc.S, Anc.S1, nor Anc.S2 retained the capacity for homotypic association. Hence, it appears that loss of a direct filament-promoting function occurred early in the Shs1 lineage ( Figure 8A). However, there was the apparent gain of the capability for subunits in the Shs1 lineage to form a heterotypic 'Shs1'-'Cdc11' NC interface, which obviously would expand the repertoire of higher-order structures achievable, perhaps providing an initial selective advantage for acquisition and fixation of this property. 
In contrast to the loss of homotypic NC interface formation, our analysis revealed that, like modern Shs1, Anc.S, Anc.S1, and Anc.S2 (as well as the Shs1 orthologs from three other extant yeast species distant from S. cerevisiae) retained the capacity to form a G interface with the penultimate subunit (modern Cdc12), albeit with rather widely different apparent affinities. Of course, it seems reasonable to assume that, for all of the predicted ancestral septins tested, at the same point in the evolutionary trajectory the Cdc12 equivalent with which they associated likely differed in sequence to varying extents from that of modern S. cerevisiae Cdc12. Likewise, we know that the sequences of the Cdc12 partners for the Shs1 orthologs of the extant species tested here also differ in sequence from that of modern S. cerevisiae Cdc12 (Pan et al. 2007; Nishihama et al. 2011). This non-native ancient-to-modern G interface between yeast Cdc12 and the ancient septins may also explain why we observed elongated cellular morphologies in strains expressing Anc.11-S or Anc.11 in vivo.

Figure 7. Use of over-expression to assess the capacity for formation of non-native septin interactions. The effects of over-expression driven by the GAL1/10 promoter of Shs1-like gene products from Candida glabrata, Ashbya gossypii, and Candida albicans and of all five predicted ancestral septins constructed in this work (Anc.11-S, Anc.11, Anc.S, Anc.S1, and Anc.S2). An otherwise WT strain (BY4741) was transformed with plasmids pRS315, pGF-IVL-286, pGF-IVL-287, pGF-IVL-1278 through pGF-IVL-1293, pGF-IVL-1343, and pGF-IVL-1344, and cultures of the resulting transformants were grown overnight under non-inducing conditions (S+RAF/SUC-LEU medium) at 30 C, serially diluted onto plates containing either a repressing (D; dextrose/glucose) or an inducing (GAL, galactose) carbon source, as indicated, and incubated for 3 days at 30 C prior to imaging. Growth of constructs marked with a red asterisk was also monitored at 2 and 4 days in several strain backgrounds (Supplementary Figure S10). RAF, raffinose; SUC, sucrose.

To assess the acquisition of the features that distinguish the unique CTE of modern Shs1 from that of modern Cdc11, we utilized three sensitized genetic backgrounds in which authentic Shs1 must be present for the cells to remain viable. We found that glimmers of the characteristics that distinguish the CTE of modern Shs1 could be observed in Anc.S1 and were much more robustly exhibited by Anc.S2, but only fully displayed by modern Shs1 itself and preserved in orthologs from certain other yeasts (especially C. glabrata and A. gossypii). Thus, the changes that neo-functionalized Shs1 seem to have occurred in stepwise fashion and emerged rather late in the Shs1 lineage (Figure 8A). Indeed, although modern fungal Shs1 is "non-essential," it makes readily measurable contributions to optimal cell physiology, such as reinforcing recruitment of certain septin-associated proteins required for cytokinesis (Finnigan, Booth, et al. 2015) and phosphorylation-dependent control of the geometries and disassembly dynamics of higher-order septin-based structures (McQuilken et al. 2017; Khan et al. 2018). Prior work demonstrated that Cdc12 (and Cdc10) possesses low, but readily detectable, GTPase activity, but Cdc3, Cdc11, and Shs1 do not (Versele and Thorner 2004; Sirajuddin et al.
2009); and recent work (Weems and McMurray 2017) indicates that, when GTP-bound, Cdc12 associates preferentially with Cdc11, whereas when GDP-bound, Cdc12 associates preferentially with Shs1, explaining, at least in part, the basis of the differential incorporation of the two different terminal subunits into the corresponding hetero-octamers. Our findings here, while consistent with those conclusions, address how changes during the divergence of the Cdc11 and Shs1 lineages from their common preduplication ancestor have contributed to modulating their differential affinities for the formation of a G interface with Cdc12. We found that Anc.11-S and Anc.11, like modern Cdc11, exhibited a robust capacity for binding to Cdc12, whereas during the progression toward modern Shs1, due to cumulative sequence alterations (possibly including numerous insertions in its G domain), the affinity of Shs1 for Cdc12 has been significantly reduced (Figure 8A), in agreement with earlier in vitro biochemical results demonstrating that the off-rate for dissociation of Shs1 from purified recombinant Shs1-capped hetero-octamers is substantially higher than for dissociation of Cdc11 from purified Cdc11-capped hetero-octamers. In this regard, the toxicity of high-level overexpression of Cdc11 or Shs1 (but no other septin) to the growth of otherwise WT yeast cells involves inappropriate capping of hetero-octamer ends, thereby preventing formation of functional filaments (which occurs via polymerization of preformed hetero-octamers; Bridges et al. 2014), but the mechanism by which each does so in the cell is distinct. Even though it binds more weakly to Cdc12, excess over-expressed Shs1 outcompetes the level of endogenous Cdc11, resulting in mainly Shs1-capped hetero-octamers, which lack the capacity for homotypic Shs1-Shs1 NC interaction, thereby blocking filament formation, as deduced previously. By contrast, in the presence of a much greater than stoichiometric level of Cdc11, it is possible that homotypic Cdc11-Cdc11 NC interaction between free Cdc11 monomers and the ends of Cdc11-capped hetero-octamers generates non-natural hetero-decamers that are unable to polymerize via a homotypic Cdc11-Cdc11 G interface. Alternatively, if a homotypic Cdc11-Cdc11 G interface between such non-natural hetero-decamers is able to form, the more extended structure of the resulting filaments must be so aberrant as to preclude viability. We favor somewhat the latter explanation because we observed that overexpression of the Cdc11(G29D) P-loop mutant, which cripples its G interface (and likely alters conformation sufficiently so as to also weaken its NC interface), completely eliminated its overexpression-based toxicity (Figure 7). Of course, it is also possible that massive overexpression of GTP-bound Cdc11 [but not "empty" Cdc11(G29D)] titrates out protein chaperones needed for folding of Cdc11 itself, thereby preventing efficient folding of the other septin subunits and other essential cellular proteins (Johnson et al. 2015).

Figure 8. Model for gene duplication and functional divergence in the evolution of the essential septin Cdc11 and its non-essential paralog Shs1 within the fungal clade. (A) A simplified phylogeny of the evolutionary trajectory of the terminal septin subunits, highlighting the findings made in this study. 1. All ancestors and tested modern fungal subunits were able to form a G interface with the penultimate subunit Cdc12 and assemble into the septin hetero-octamer in vivo. 2. Anc.11-S was unable to form a heterotypic NC interface with modern Shs1, whereas Anc.11 has acquired to a readily detectable degree the ability to form a heterotypic NC interface with modern Shs1. 3. All Shs1 subunits tested, including Anc.S, were unable to form functional homotypic NC interface interactions. 4. The distinct function(s) of the Shs1 CTE evolved late in the Shs1 lineage, after Anc.S1. 5. An optimal Shs1 G domain appeared in Anc.S2. 6. Modulation of Shs1-Cdc12 association and/or assembly at the G interface appears to occur after Anc.S2. (B) Model of how duplication of Anc.11-S and the ensuing advents of modern Cdc11 and Shs1 allows for an expansion in the repertoire of potential filament-forming complexity.

The genomes of many species, especially mammals, encode an assortment of alternative septin subunits, which can be differentially expressed in different cell types and further diversified by alternative splicing and other means, allowing for assembly of distinct types of hetero-octamers in specific tissues or during different developmental programs (Hall and Russell 2012; Neubauer and Zieger 2017). It has been unclear, however, to what degree existing septin assemblies could accommodate predicted ancestral subunit(s) or modern septins that have evolved in extant, but distantly related, species. With regard to the latter point, septins from certain heterologous sources have been tested in S. cerevisiae. The apparent Cdc12 ortholog from the filamentous fungus Aspergillus nidulans, AspC, was able to complement the inviability of a cdc12D mutant only poorly, and when expressed in WT cells, it promoted formation of atypical pseudohyphae rather than normal buds, even though it appeared to localize at the bud neck (Lindsey et al. 2010). Recently, the major isoforms of all 13 human septin gene products were tested for their ability to rescue cdc3D, cdc10D, cdc11D, and cdc12D mutant cells; and only complementation of cdc10D cells was observed (Garge et al. 2020). Of the 13 human septins, only four, two from homology Group 1A (SEPT3 and SEPT9) and two from homology Group 1B (SEPT6 or SEPT10), were able to exhibit a Cdc10-like function in vivo but could not fully replace the yeast subunit (Garge et al. 2020). Phylogenetic analysis suggests that human Group 1A and 1B septins may share a common ancestor with S. cerevisiae Cdc10 (Pan et al. 2007). The most recent studies of the human hetero-octamer support an organization (SEPT2-SEPT6-SEPT7-SEPT9-SEPT9-SEPT7-SEPT6-SEPT2) in which a Group 1A NC homodimer forms the core of the human septin hetero-octamer (McMurray and Thorner 2019; Mendonca et al. 2019; Soroor et al. 2020), just as a Cdc10-Cdc10 NC homodimer forms the core of the yeast septin hetero-octamer. So, the partial rescue by SEPT9 (and its paralog SEPT3) of Cdc10 deficiency makes structural sense. By contrast, the rescue of Cdc10-deficient cells by human SEPT6 (and its paralog SEPT10), which occupies the same position in human hetero-octamers as Cdc12 in S. cerevisiae hetero-octamers, is harder to explain. Nonetheless, this reported complementation presumably requires that SEPT6 (and SEPT10) be able to form a functional NC homodimer that is able to engage yeast Cdc3 at its flanks via a G interface, highlighting the incredible flexibility inherent in septin-septin interaction.
In this same regard, it has been inferred that the (obligate) inclusion of Cdc10 at the central position within the yeast hetero-octamer may have been coupled to the loss of the ability of the Cdc3 subunit to hydrolyze its bound GTP, an event that seems to have occurred prior to the split between the yeast genera Saccharomyces, Ashbya, and Kluyveromyces (Johnson et al. 2020). Indeed, biochemical studies of the corresponding human proteins (Zent and Wittinghofer 2014) demonstrate that SEPT9, like yeast Cdc10, is GTPase competent, whereas the flanking septin, SEPT7, like yeast Cdc3, lacks the capacity to hydrolyze its bound GTP. In conclusion, our study provides the first analysis in vivo of predicted intermediates in the evolution of the two paralogs that are able to occupy the terminal position in the septin hetero-octamers of S. cerevisiae (Figure 8B). Our findings shed light on why Cdc11 is essential and why Shs1 is not, define the complexities involved in maintaining ancestral protein interactions, and delineate when the various functional features that define and distinguish Cdc11 and Shs1 emerged and diverged. Future work will focus on investigating whether any specific residue change (or small set of residues) is necessary and/or sufficient to recapitulate the steps in the progression from the ancestral state to their modern counterparts.

Ethical statement

This work did not involve any human or animal subjects of any kind.
2020-04-18T20:46:17.807Z
2020-12-07T00:00:00.000
{ "year": 2020, "sha1": "167a1a245745fa460e2bb82e7cb88acf8597961d", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/g3journal/article-pdf/11/1/jkaa006/38018280/jkaa006.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7c1d6feab55ca786bcb0c258ed0839bcf5c84e18", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
158501669
pes2o/s2orc
v3-fos-license
Effective physician leaders: an appreciative inquiry into their qualities, capabilities and learning approaches Purpose The aim of this study was to explore the qualities and capabilities effective physician leaders attribute to their success in leading change and how they developed these. Method The authors interviewed 20 emerging and senior leaders using a semistructured interview guide informed by appreciative inquiry. Data were subjected to an inductive qualitative content analysis to identify themes related to qualities, capabilities and learning approaches. Results The qualities identified were clarity of purpose to improve care, endurance, a positive outlook and authenticity. They were considered innate or developed during participants’ upbringing. Capabilities were to ground management in medicine, engage others, catalyse systems by acting on interdependencies and employ a scientific approach to understand problems and measure progress. Capabilities were developed through cross-pollination from a diversity of work experiences, reflection, when education was integrated with practice and when their organisational environment nurtured ambition and learning. Conclusions This study reframes current leadership thinking by empirically identifying qualities, capabilities, and learning approaches that can contribute to effective physician leadership. Instead of merely adapting leadership development programmes from other domains, this study suggests there are capabilities unique to effective physician leadership: ground management in medicine and employ a scientific approach to problem identification and solution development. The authors outline practical implications for individuals and organisations to support leader development as a cohesive organisational strategy for learning and change. Effective physician leaders: an appreciative inquiry into their qualities, capabilities and learning approaches InTrOduCTIOn Physician leadership and management practices impact clinical and organisational outcomes, for example, effectiveness, efficiency, productivity, quality and work environment. [1][2][3] They are essential to the development of new approaches to care delivery, organisational structures and governance, and often seen as a requisite for successful change efforts. 4 Consequently, physician competency models recognise leadership as a core component. [5][6][7] Despite these competency models and numerous leadership development efforts in healthcare, there is little evidence that either contribute to leader effectiveness. 1 8 9 The existing evidence on effective leader and leadership development in and outside of healthcare is not being applied. 3 10 The frequent use of brief programmes that focus on conceptual knowledge and individual skills training fail to address personal development and collaborative capacity-both cornerstones for effective leadership. 8 The design and delivery commonly employ ineffective teaching approaches, lack relevant theoretical underpinnings and have limited adaptation to the healthcare context. 1 9 In a complex context rife with adaptive challenges, physician leaders need more than a 'checklist' of competencies. 11 While competencies are the application of knowledge, skills and attitudes to known problems with proven solutions, capabilities involve learning how to use and adapt these competencies in unfamiliar environments to address unfamiliar challenges. 
12 Given this difference, we need to understand how effective physician leaders have learnt to adapt their competencies to guide their organisations towards better performance. 8 This identification of capabilities requires linking competencies to the effectuation of change. Leader effectiveness and capability development are influenced by one's qualities (eg, open-mindedness, responsibility, courage). 13 14 In contrast to general leadership research, physician leaders' qualities have received little attention. Previous studies also have a methodological limitation: they identified qualities and/or competencies physicians perceived to be important, not those anchored in actual successful leadership experiences. 15 Given the above, the aim of this study is to empirically explore the qualities and capabilities effective physician leaders attribute to their success in leading change, and how they think they developed these. We defined leadership as working towards continual organisational development, independent of formal authority or roles, that is, change (and its management) lies at the core of leadership. 16 17 MeThOd study design We chose a qualitative interview study design to explore participants' experience in depth and address the lack of qualitative studies on effective teaching and learning approaches. 9 The design was informed by appreciative inquiry (AI), a systematic approach to identify and analyse successful experiences and practices. 18 AI suggests that by analysing successes instead of problems (the focus of traditional research), one is better able to identify and understand what actually works. It originated as a research methodology in studies conducted at the Cleveland Clinic and has been applied in healthcare (eg, hospital and primary care settings, nursing and obstetrics). 19 20 study setting and participants This study was conducted in Sweden, which addresses the call for leadership research outside of North America. 8 9 Swedish healthcare is at the forefront of developing new models for care delivery, structure and governance, through quality improvement, lean, value-based healthcare and other initiatives. 21 22 Our purposive sample included aspiring and senior physician leaders instrumental in these developments and with established records of accomplishment. 23 An initial group of senior leaders was identified in consultation with professors at the Medical Management Centre, Karolinska Institutet, who possess a thorough understanding of the Swedish healthcare system based on decades of research. An initial group of emerging physician leaders was identified from the MedUniverse national physician network nomination list for their annual 'Future Physician Leader' prize. Nominees were physicians 45 years of age or younger who had demonstrated that they were 'visionary, role models, innovative, influential, and a positive force for leadership in Swedish healthcare.' 24 Thereafter, both groups were expanded and verified through cross-referencing and snowballing, where each respondent was asked to recommend other outstanding physician leaders. data collection Data were collected through semistructured interviews. 
The interview guide consisted of three sections informed by AI (online supplementary appendix 1): (1) reflect on successful aspects of one's work and competence; (2) understand how past experiences enabled the development of qualities and capabilities relevant for leading change; and (3) ideas and suggestions to develop similar qualities and capabilities in future physician leaders. Qualities and capabilities were elicited in three ways: through direct questions; reflections about how others view the respondent's leadership; and a personal experience of leading what they defined as a successful change. The guide was pilottested thrice with individuals who shared the same profile as the participants. As the questions remained unchanged after the first pilot test, we included the two subsequent pilot tests. Interviews were conducted together by the first and the last authors, both with considerable experience in qualitative research. Interviews lasted 60-80 min, were digitally recorded and held at a convenient location for the participant, free from interruption. Three were conducted over the phone. Interviews were conducted in English with the option to answer in their native tongue (Swedish). Two chose to do so. Interviews were continued past the point of saturation for each group. 25 Informed consent was received prior to the interview. Data were handled confidentially; all efforts were made to preserve anonymity. data analysis Interviews were transcribed verbatim. Transcripts were read repeatedly to develop familiarity. Inductive content analysis 26 was performed using NVivo qualitative data analysis software; QSR International, V.10, 2012. Meaning units relevant to the research question were identified and coded by the first author. The analysis for the senior and emerging leaders was separated after observing different patterns in the codes. Codes were independently categorised by the first and the last authors and differences resolved through consensus. Each category was reviewed to develop subcategories and themes where applicable. To strengthen credibility, categorisation was repeated independently and corroborated by two other groups of six researchers each, and the transcripts were reread to ensure the themes accurately reflected what was said. Trustworthiness was strengthened through discussions with key informants and presentations. Given the growing interest among physicians for MBA programmes, 27 we conducted an additional analysis to look at the potential impact of MBA training on leadership perspectives. resulTs Study participants were 20 senior and emerging leaders (table 1). The emerging leader group was gender balanced; senior leaders consisted of more men than women. Specialties included internal medicine, family medicine, paediatrics, obstetrics/gynaecology, anaesthesiology, emergency medicine, psychiatry, cardiology and surgery. Sixteen had academic degrees in addition to their medical degree. All had worked in academic medical centres and the public sector. We identified four themes among the qualities, four among the capabilities and five related to learning approaches. For illustrative quotations, consult tables 2-4. Qualities Participants attributed qualities such as clarity of purpose to improve care, endurance, positive outlook and authenticity to their success in leading positive change (table 2). They were driven by a purpose to improve healthcare and have a positive impact. For senior leaders, the impetus was dissatisfaction with the status quo. 
Emerging leaders acted on their ambition to implement new ideas. For senior leaders, endurance meant to hold people accountable and always remind them of the larger purpose. Emerging leaders often faced resistance and thus needed endurance to keep 'standing tall' even if their proactive behaviour was not appreciated due to their junior status. Both groups had a positive outlook. They were forward looking and focused on opportunities instead of problems. Emerging leaders emphasised the importance of positive reinforcement and feedback. Both described additional qualities related to authenticity, such as humility, openness, trustworthiness, professionalism and curiosity. Capabilities The capabilities were to ground management in medicine, engage others, catalyse systems by acting on interdependencies and employ a scientific approach to understand problems and measure progress (table 3). Ground management in medicine Participants integrated their knowledge of medicine with that of economics, quality improvement and organisational development to help others see the rationale for change. Their in-depth medical knowledge and understanding of care processes granted them credibility among staff, and it helped them understand and address the medical consequences of change initiatives. Engage others: 'Working with the system' versus 'working the system' When engaging others, participants empathised with staff, maintained motivation and developed resonant relationships. However, there was a distinction in how they related to their context. Senior leaders 'worked with the system.' They mediated conflicting interests by focusing on a shared purpose and brought together strategic allies. They engaged and empowered staff by creating space and challenging them to take the lead in problem identification and solution development by being present and visible in the organisation. They did not rush to solve problems for others, but through delegation and sharing their decision-making powers, got others to identify problems and develop solutions themselves. Senior leaders listened first to truly understand and empathise with what matters to people. Emerging leaders, on the other hand, 'worked the system.' Without a formal position of authority, they built support by negotiating terms and teamed with people whose competencies compensated for their own shortcomings. They engaged others in change initiatives by asking questions that tested their own hypotheses about the situation. They too listened to people, but it was in order to understand how to tailor their communication and demonstrate good social skills. Catalyse systems by acting on interdependencies Participants recognised patterns and led by example; using themselves as learning tools. They connected ideas (senior) and acted on the interdependencies in the system through goal setting and providing structure (emerging). They saw themselves as part of everything that was going on in the system. Senior leaders reflected on the importance of improving self-awareness through testing ideas on themselves. Emerging leaders 'walked the talk' as a strategy to illustrate the validity of their ideas. Employ a scientific approach to understand problems and measure progress Participants used a scientific approach to analyse problems, develop hypotheses and measure progress. Senior leaders described this as being curious, asking questions and listening carefully to be able to understand problems before jumping into solutions. 
Emerging leaders talked about maintaining a healthy scepticism and the importance of critical thinking, in particular to check and analyse the data used to inform decisions. learning approaches The most influential learning approaches were cross-pollination from a diversity of work experiences, reflection, the rare occasions when education was integrated with practice, being part of an environment that nurtured ambition and learning and 'luck of the draw' (table 4). Both groups valued 'learning by doing'. Emerging leaders described taking responsibility for increasingly larger projects over time. Both groups credited experiences from medical practice as important in developing emotional intelligence, skills in communication and quick decision-making. While medical practice was a central work experience for both groups, participants also found it important to seek out roles in other contexts and leave their 'medical comfort zone'. This was primarily tied to rethinking how medicine works based on a diversity of work experiences in management and health economics consulting, pharma, medical entrepreneurship or at the WHO. Teaching and mentoring-helping others develop new behaviours-as well as being a mentee, were also seen as a practice of leadership. Seek to understand what matters to peopleempathise, motivate and inspire. Original research Be curious and interested in others, develop good social skills. III. Catalyse systems by acting on interdependencies. I also think that I have a tactical or strategic sense for possibilities so that I can see things within the system that are not linked today, but I can actually see that if I can connect this person to that person, whether I'm going to be a part of that or not, it might actually lead to something that is interesting. (PE04) Connect ideas, help others to see the big picture. See interdependencies in order to develop a strategic mindset to spot opportunities, set goals, develop action plans, provide structure and grow networks. Lead by example by using oneself as a learning tool. The only [learning] tool I have is myself, really, so, in that way I sort of learn and then see reactions from people. (PS14) Improve self-awareness by using oneself as a learning tool. Test changes on oneself. 'Walk-the-talk' as a strategy to convince others. IV. Employ a scientific approach to understand problems and measure progress. Be curious, ask questions and collect and analyse data to understand problems before jumping into solutions. Measure progress. Maintain a healthy scepticism and think critically. Systematically collect and use data to analyse problems and measure progress. Reflection was facilitated through feedback and evaluation, observations and the use of theories. Emerging leaders appreciated the regular feedback and evaluation systems they experienced in management consulting companies and were consequently more aware of their own strengths and weaknesses compared with senior leaders. Observations allowed both groups to recognise good and bad leadership practices. Theory allowed them to make sense of their experiences (senior), and if successfully applied, generated enthusiasm for their leadership practice (emerging). Formal education was deemed useful only if integrated with practice. An organisational environment that nurtured ambition and learning helped create opportunities to practise and improve one's leadership capabilities. 
Qualities such as openness, honesty, commitment, competence, passion, enthusiasm, humility, curiosity, ambition and persistence were attributed to 'luck of the draw' in terms of both 'just the way I am' (nature) and one's upbringing (nurture). This was irrespective of growing up in a supportive family or one full of hardships. Of the six participants with an MBA or equivalent degree, none described being open-minded, curious or humble. As for those with consulting backgrounds, there was a clear emphasis on project management as well as on strategic thinking in terms of structure, prioritisation, process and analysis. In terms of their learning approach, they were more externally orientated as opposed to engaging regularly in self-reflection. Two of the six described that they found their MBA education useful, either for learning about communication or making sense of past management experiences. They valued working on real-life projects or their own cases.

Table 4 (continued). Teaching and mentoring: an integral part of senior roles and seen as acts of leadership to help others develop new behaviours; facilitating learning for others was experienced as a trigger to learn, particularly about group dynamics. II. Reflection. Feedback and evaluation: '[…] during projects and between projects, on a yearly basis, structured evaluations and feedback, which are very good because we go over my good and bad traits and efforts and how to improve.' (PE13); coaching and mentoring by more senior people; make systematic approaches to feedback and evaluation as well as senior colleagues a part of daily work. Observation. 'It's definitely lots of genetics, but it's also my upbringing that's shaped me more than anything.' (PE21); upbringing encouraged openness, honesty, commitment, competence, clarity, passion, persistence; during upbringing learnt to take responsibility, observe, prioritise academic achievement, connect with people.

Discussion

The senior and emerging leaders in our study attributed their success in leading positive change to their qualities of a clarity
This is in contrast to a previous study where such purpose-driven behaviour, described as organisational altruism, was observed only among established leaders. 23 The integration of tasks in conflicting domains, such as economics and patient care, may have been further facilitated by their ability to authentically engage others. 29 This in turn helps retain motivation and well-being among staff, 30 even in the face of considerable downsizing requirements. 31 Thus, leadership which facilitates medical engagement, that is, the strategic involvement of physician leaders in improving care, is worth further study. 32 33 Integral and unique to the physicians' leadership was their scientific mindset. Similar to findings by Hopkins et al, 28 participants looked for the most pertinent questions to pose and data that could help to answer these questions and measure progress. This went beyond the 'management by analysis' training of the MBAs 34 and is rare even in quality improvement efforts (15%). 35 We could not find any study that linked educational level with healthcare leadership effectiveness and can therefore only speculate that since 13 of the 20 participants had doctoral degrees, the research skills they had developed may have influenced their leadership practice. Such an evidence-informed 'scientific' approach may resonate better with the professional ethos of healthcare staff who commonly understand change as a result of new research findings, 31 and may exemplify the kind of evidence-based management needed in healthcare. 36 Focus on learning Leadership studies and competency models frequently stress the importance of goals and performance. Participants were indeed accomplished high achievers. However, from the data it emerged they were first and foremost avid learners-they treated each situation as an opportunity to learn about others, themselves and their context. Their qualities and capabilities combined to engender a continual focus on learning. Participants differed from colleagues who, prone to overconfidence, avoid challenges that question their competence, that is, a 'fixed mind-set' that impedes learning. 37 Instead, they demonstrated qualities that contributed to a habit to critically reflect, 14 actively seek out challenging situations and expand their role and mandate, that is, they demonstrated a 'growth mind-set'. 37 Their positive outlook can be linked to improved cognitive functioning, including an openness to ideas, emotions and people. 38 Hopkins et al describe similar distinguishing leadership behaviours in terms of creating opportunities for learning and being open to new perspectives. 28 Despite growing evidence for the role of teamwork to achieve better clinical outcomes, 39 40 participants seldom described such teamwork in their leadership efforts. Instead, they consistently developed resonant relationships and collaborated with a broad range of stakeholders. 23 28 The findings suggest that physician leaders may benefit from moving beyond popular team-training initiatives designed for stable membership and well-defined tasks. 41 Participants' approaches could be characterised rather as 'teaming' or relational coordination, which involves creating meaningful multidisciplinary relationships 'on the fly' in a shifting mix of work-partners. [41][42][43] The combination of such dynamics and participants' approach to relationships attests to their emotional and social intelligence-both increasingly acknowledged attributes for effective physician leaders. 
23 28 44 45 As systems catalysts, participants demonstrated more than the basic understanding of how health systems function and systembased patient care described in physician competency models. 5 6 46 Not only could they see interdependencies, 28 these leaders were aware of their own role in them. In contrast to several studies of physician leadership which see change as a result of the deliberate communication of leaders' own visions, 3 23 our participants took action based on the full acknowledgement of their organisations' complexity, that is, visions were created in concert with others. 47 Insights on learning from effective physician leaders A comparison of leadership development in healthcare 8 9 with leadership development research 10 reveals two insights empirically supported in this study: daily work as a platform for deliberate leadership practice and a symbiosis between organisational learning culture and effective leadership development. Transform learning from experience into deliberate practice Participants' descriptions of learning from a diversity of work experiences support the proposition that leadership development should be about 'helping people learn from their work rather than taking them away from their work to learn.' 17 However, learning from experience can be problematic. 11 For technical problems, such as perfecting a surgical technique, repetition might be enough. But for more complex challenges, learning from experience is only effective when coupled with reflection and high-quality feedback. 48 Feedback and evaluations were valued for improving self-awareness, which, in healthcare leadership programmes, is often missed. 8 Systematic feedback, however, does not guarantee behaviour change-it was participants' focus on learning, which helped them benefit from these practices. 10 Their reflective practice helped them transform implicit (performance-oriented) work experiences into explicit learning opportunities, 14 that is, transform learning from experience into deliberate practice. 48 A symbiosis between organisational learning culture and leadership development Healthcare, with its noticeable status differences and promotion of individual accomplishments, seldom exhibits a learning culture supportive of leadership development. 11 23 49 Still, as participants engaged others, they fostered a coherent organisational learning environment based on reciprocal relationships centred around shared meaning that provides assessment, challenge and support at all levels. 1 10 This is in line with effective leader and leadership development, 10 and why current teaching methods (lectures, seminars and group work) focused on individual leaders are not enough. 8 Our study has limitations. Despite the critique of AI's penchant for the positive, interviews also elicited challenges and negative experiences, but the emphasis was on what lessons have been learnt for future success. 50 The first and last authors' considerable experience with applying AI in the contexts of primary care, psychiatry, medical and leadership education, public health and global NGO development may have enabled this. 
We acknowledge the imbalance among the sexes of the senior leaders; however, no gender differences were identified among emerging leaders, which may suggest that the same holds true for senior leaders. As with qualitative studies in general, transferability is determined by how the description of the context, characteristics of the participants and the findings resonate with readers. Further studies could be conducted in the context of a leadership development effort to explore the mechanisms that foster learning and change. 8 A further analysis of the change strategies described by participants could contribute to a theory of physician leadership.

Implications for practice

Organisations could take a proactive and long-term approach to cultivate the qualities of clarity of purpose, endurance, positive outlook and authenticity that foster learning from experience. Recruiting and developing managers with these qualities may generate a virtuous cycle where staff become aware of and challenge their fixed mindsets. Further suggestions for individuals and organisations are summarised in table 5. A first step would be to move from haphazard self-reflection to facilitate the deliberate practice of leadership in daily (clinical) work. This could help organisations integrate leader and leadership development at all levels in a cohesive organisational strategy and learning environment that challenges aspiring leaders to grow their best selves at work.

Table 5. Implications for physician leadership development
Individuals aspiring to leadership and management positions | Leadership development programmes
Engage in self-reflection around the qualities necessary to develop a focus on learning. | Support a growth mindset through establishing psychological safety and a learning orientation.
Ground management practices in medicine through clinical knowledge and clarity of purpose. | Facilitate clarity of purpose and help physician leaders develop the wisdom and tools to engage others in healthcare improvement.
Work with others by 'working the system' or 'working with the system' (dependent on one's level of authority) through teaming. | Help participants to practise addressing actual challenges in changing team constellations.
Become systems catalysts and use oneself as a learning tool. | Help participants develop a systems perspective through observations, identification of interdependencies and analysis of situations through the lens of different parties and provide training in how to engage others in multistakeholder change processes.
Cultivate a scientific mindset that guides the analysis of problems and the measurement of progress. | Link a scientific mindset to organisational improvement efforts by requiring that projects be anchored in evidence and use data to inform decisions and learning.

Conclusions

This study reframes current leadership thinking by empirically identifying the qualities, capabilities and learning approaches that can contribute to effective physician leadership. Our findings resonate with cutting-edge leadership research that builds on complexity science, but which thus far has failed to make segues into healthcare. Instead of merely adapting leadership development programmes from other domains, this study suggests there are capabilities unique to effective physician leadership: ground management in medicine and employ a scientific approach to problem identification and solution development.
The need for more physician engagement in the management and leadership of healthcare may be better addressed if leadership development meaningfully resonates with clinical practice and professional ethos.
Hard-Aware Fashion Attribute Classification

Fashion attribute classification is of great importance to many high-level tasks such as fashion item search, fashion trend analysis, fashion recommendation, etc. The task is challenging due to the extremely imbalanced data distribution, particularly the attributes with only a few positive samples. In this paper, we introduce a hard-aware pipeline to make full use of "hard" samples/attributes. We first propose Hard-Aware BackPropagation (HABP) to efficiently and adaptively focus on training with "hard" data. Then, for the identified hard labels, we propose to synthesize more complementary samples for training. To stabilize training, we extend semi-supervised GAN by directly deactivating outputs for synthetic complementary samples (Deact). In general, our method is more effective in addressing "hard" cases. HABP puts more weight on "hard" samples. For "hard" attributes with insufficient training data, Deact brings more stable synthetic samples for training and further improves performance. Our method is verified on a large-scale fashion dataset, outperforming other state-of-the-art methods without any additional supervision.

In this paper, we address one of the major problems in fashion attribute classification: imbalanced data distribution, specifically the samples or attributes with very few positive labels. Patterns in fashion images are highly diversified due to their non-rigid nature and the rich semantics behind them. Combined with the very rich attributes of fashion items, this leads to imbalance and sparsity of positive labels for some attributes or specific kinds of samples. The upper plot in Fig. 1 demonstrates the positive attribute counts from DeepFashion: Category and Attribute Prediction Benchmark (DeepFashion-C) [6]. The dataset contains images and tags from shopping websites and a search engine, which is representative of a real-world scenario. Among the 1000 annotated attributes, the most frequent label "print" has 37,367 occurrences, whereas the least frequent label "topstitched" only shows up in 51 images. In addition to imbalance, fashion attributes are usually sparsely distributed, as shown in Fig. 1: over 1/5 of the attributes have fewer than 100 positive labels, and on average there are only 3.3 positive tags per image. Moreover, the diversity of fashion items makes the problem even worse. Take "party" as an example: countless diversified fashion images can be defined as "party" (Fig. 1), such that a specific minority "party" case may not be easy to learn from the 2,882 tagged samples. So the problem is at both the attribute and the sample level. A big difficulty in training with such a dataset is that majority data are generally well trained while minority data are either under-trained or prone to over-fitting with too few samples. Many efforts have been devoted to tackling this problem [22]. A common solution is re-sampling [23,24,25]. Though widely used, over-sampling has its limitations, such as the tendency to over-fit, whereas under-sampling suffers from the risk of missing valuable information. Moreover, it is not trivial to extend re-sampling to multi-label datasets [26,27,28], and few of them focused on imbalanced multi-label computer vision problems [29]. Another popular family takes into account the misclassification errors, known as cost-sensitive learning [23,30,31,32,33]. Broadly speaking, it covers a wide range of methods that use algorithms or strategies based on cost.
Within this scope, hard-aware methods have been actively studied in recent years with deep neural networks, such as focal loss [34], hard example mining [35,36,29], etc. In this work, we develop an approach leveraging both cost-sensitive and re-sampling strategies to make full use of "hard" data. The key idea is to focus on the minority data as much as possible while not affecting the majority, since the majority data are usually already well trained. Minority data are often strongly correlated with high classification error, as suggested by [24,34]. We also verified this by comparing the average predicted probability for positive attributes vs. the number of positive labels in Fig. 2 (more details will be discussed in Section 4.1). In the figure, the blue crosses are predicted probabilities for positive samples, from a well-trained model using cross entropy loss. Based on this, we use the error probability estimated by the model [34] as a metric to identify "hard" data. To make the best of this key metric in training, two techniques are developed. We first present a solution from the view of cost-sensitive learning that backpropagates losses on each sample and each attribute, weighted by the estimated errors. We refer to this method as Hard-Aware BackPropagation (HABP). From the perspective of re-sampling, we further suggest sampling synthetic complementary images, which are samples that lie around but do not overlap with real samples in feature space, to train hard/minority attributes with generative adversarial networks (GANs) [37]. The proposed method is similar to semi-supervised GAN [38] but is much easier to train and implement. A possible reason that GANs are not widely used in practical problems is the difficulty of training at high resolutions such as 224 × 224, induced by problems including mode collapse [39] and vanishing gradients [40]. In order to generate diversified high-resolution complementary images, we introduce a decorrelation regularization loss to deal with mode collapse. It successfully relieves mode collapse when training the multi-resolution GAN (MR-GAN) architecture used in this work. Evaluations on DeepFashion-C demonstrate that our approach outperforms the state-of-the-art without using additional supervision. Our main contribution is proposing to take full advantage of "hard" samples with two techniques, from the view of cost-sensitive learning and of re-sampling respectively: 1) We propose Hard-Aware BackPropagation (HABP), which effectively reduces the impact of strong imbalance in multi-label image datasets. 2) Based on the identified hard labels, we present a method to train the model with synthetic complementary samples, together with a decorrelation loss for stably generating high-resolution synthetic samples.

Fashion Attribute Classification

Fashion attribute classification has already become a prevalent topic in the research area [8]. However, in the early stage, most published datasets were either small-scale or annotated with a small number of attributes [41,42,10]. Based on DeepFashion-C, FashionNet [6] proposed to jointly learn clothing attributes and landmarks. Corbière et al. [43] collected noisy data from shopping websites to perform weakly supervised image tagging. In the recent work [44], the authors grounded human knowledge in landmark detection; attribute classification was then improved with landmark-enhanced visual attention. Most existing works incorporated other supervision (such as landmarks or low-level features [41]) to improve attribute classification.
A few of them [29] used attribute annotations only, but the method is not strongly tied to vision problems. In contrast, our method only uses attribute annotations and makes full use of the training images in a semi-supervised manner.

Hard-Aware Learning

Hard example mining [45] has seen success with deep neural networks in areas including face recognition [46], object detection [35], person Re-ID [47], and metric learning [36]. Based on the same idea that hard samples are usually more informative, variants have been proposed. Among them, focal loss (FL) [34] is closely related to our work in sharing the idea of modeling the estimated probability of classification error and taking it as a weight in the loss function. Variants of FL have been applied to attribute classification [48]. A key difference between HABP and FL is that HABP introduces an output-dependent normalization term for better stability and performance. OHEM [35] is also related to our method in the idea of sampling "hard" data. More details will be discussed in Section 3.

GAN & Semi-Supervised GAN

GAN [37,39] has enjoyed a resurgence of interest in recent years for its ability to generate high-fidelity images. A number of efforts have been made toward synthesizing higher-resolution images. Denton et al. [49] employed a Laplacian pyramid with multiple discriminators to generate images at multiple resolutions. The idea of multi-resolution was further developed in [50] with the progressive growing of GANs. Based on the idea of weight sharing across multiple resolutions, Karnewar [51] published the multi-scale gradients GAN (MSG-GAN), which trains on multi-resolution images simultaneously. To make use of GANs for discriminative tasks, semi-supervised GANs [38,52,53] jointly train a generator and a discriminator that classifies true labels for real samples and an auxiliary label for fake samples. The scheme is good at learning a better decision boundary with only a few samples. In this work, we introduce deactivation-based training with synthetic complementary samples (Deact), which is similar to semi-supervised GAN but is easier to train and implement. To make the proposed method stable, we further propose a decorrelation regularization to alleviate the mode collapse [39,54] problem in GANs.

HABP

The key idea of HABP is to emulate sampling losses from the output nodes. Consider a batch with M samples and N attributes as illustrated in Fig. 3. After a forward pass there will be M × N output nodes, for each of which a cross entropy (CE) loss can be calculated with its label (Eq. (1)), where P_ij is the model predicted probability of the target label for the j-th attribute of the i-th sample in the batch. As an example, in binary classification a commonly used formula expresses P_ij in terms of σ(·), y, and ŷ, i.e., the sigmoid function, the ground truth label, and the model output, respectively. CE assumes that individual samples and attributes are equally important. When we apply the CE loss to training on an extremely imbalanced dataset, minority attributes are always much less trained than majority attributes, resulting in much higher prediction errors. A natural idea is to only backpropagate losses on more informative nodes. For example, a solution is to simply sample "hard" nodes to backpropagate losses. We borrow the idea from FL and model the sampling probability as the probability of a wrong prediction (Eq. (3)), where γ is a tuning parameter. We then use Eq. (3) to calculate a weighted average of losses (Eq.
(1)) in a batch to emulate sampling nodes for backpropagation, which we call HABP (Eq. (4)). Note that this is equivalent to sampling nodes with the error probabilities, while it is more efficient because direct sampling suffers from the risk of missing information in unsampled nodes. Compared to FL, HABP makes hard losses more prominent and stable, because in multi-label training losses on hard nodes may be averaged out by the large number of attributes, particularly in the late training stage. For example, in the beginning, most attributes and samples tend to be "hard". As the training goes on, the ratio of "hard" samples will be lower than in the beginning. If the number of attributes is large, the total losses by FL at different training stages may differ by orders of magnitude, which can result in either instability at the beginning stage or too slow learning at the late stage. More discussion with experiments will be presented in Section 4.2.

Deactivation Training with Synthetic Complementary Samples

As a popular re-sampling technique, semi-supervised GAN has two drawbacks: 1) Training a GAN is a tricky task. There are some differences between training a GAN and training a discriminative model. For example, a GAN usually requires more iterations and a larger batch size [50,55] to achieve better image quality, which may not be optimal or necessary for training a classification model. 2) Dai et al. stated and proved that a good semi-supervised GAN requires a "bad" generator. Ideally, the generator should synthesize samples around, but not overlapping with, real samples in feature space. This is again a tricky task. For these reasons, we present an alternative scheme which is easier to implement and more stable in training. We first train a generator with MR-GAN for enough epochs to synthesize recognizable images. Then, to make sure the generator is "bad" enough for semi-supervised training, we degrade the generator by adding an element-wise perturbation to the most semantically meaningful feature maps (Fig. 4), which are the feature maps directly projected from the latent space. Empirically, the perturbation should be strong enough to synthesize images that are visually different from real samples, as in Fig. 4. To make it easier for both implementation and extension to the binary attribute case, we propose an alternative to auxiliary-classifier-based semi-supervised GAN. Since activating the auxiliary output for fake samples is largely equivalent to deactivating outputs for real classes, we simply pose a deactivation loss that minimizes the activations of the real classifier outputs when training with synthetic complementary images, where C is the number of classes and T is an activation threshold. We use T = −4.6 ≈ log(0.01) for all the experiments in our paper. For binary attribute classification, we want the output to activate for neither the positive nor the negative label, so the formula is simplified accordingly.
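For concreteness, the following is one plausible way to write the per-node loss, the hard-aware weight, the HABP average, and the deactivation loss described above. The symbols L_ij, w_ij, L_HABP, and L_deact, as well as the exact functional forms, are our own assumptions and may differ from the displayed equations in the original paper.

```latex
% Per-node cross-entropy loss (Eq. (1) as described): P_{ij} is the predicted
% probability of the target label for attribute j of sample i.
\[ L_{ij} = -\log P_{ij}, \qquad
   P_{ij} = y_{ij}\,\sigma(\hat{y}_{ij}) + (1 - y_{ij})\bigl(1 - \sigma(\hat{y}_{ij})\bigr) \]

% Hard-aware weight (Eq. (3) as described), with tuning parameter gamma.
\[ w_{ij} = (1 - P_{ij})^{\gamma} \]

% HABP (Eq. (4) as described): an error-weighted average over the M x N output
% nodes of a batch; the denominator is the output-dependent normalization term.
\[ L_{\mathrm{HABP}} =
   \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} w_{ij}\, L_{ij}}
        {\sum_{i=1}^{M}\sum_{j=1}^{N} w_{ij}} \]

% Deactivation loss for a synthetic complementary sample with C real-class
% logits \hat{y}_c and threshold T = -4.6 (about log 0.01): penalize logits above T.
\[ L_{\mathrm{deact}} = \frac{1}{C}\sum_{c=1}^{C} \max\bigl(\hat{y}_{c} - T,\, 0\bigr) \]
```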
Decorrelation Regularization for MR-GAN

Aiming to synthesize high-resolution images with GANs, we employ a conditional [56] multi-resolution architecture as illustrated in Fig. 5. Both the generator and the discriminator deal with images at different resolutions simultaneously. In Fig. 5, z is the latent noise and c is the conditional input vector of attribute/category annotations. Each dimension corresponds to an attribute. If a positive label is sampled, the value of the corresponding dimension is set to 1. In such a structure, the higher-resolution images are refined versions of the lower-resolution images. Thus the training is much more stable than in a single-resolution scheme. As training to convergence is no longer a problem in such an architecture, we put our focus on mode collapse. Notice that the generated high-resolution images strongly depend on the low-resolution images, so if we can have diversified low-resolution images, the high-resolution images are not likely to fall into strong mode collapse. So we simply use a decorrelation (DC) regularization loss to decrease the correlation between latent dimensions (Fig. 5). For a transposed convolution projecting the N_Z-dimensional noise to N_F feature maps, we denote w_ij as the filter weight from the j-th dimension of the noise to the i-th channel of the feature maps. We then define the decorrelation regularization loss over these filters, where r(·) measures correlation as the square of the cosine similarity, ranging from 0 to 1. Together with the multi-resolution architecture, we call our method MR-GAN.

Overall Training Pipeline

With the key components above, we present our overall pipeline in Fig. 3. The underlying idea is that semi-supervised GAN usually does not help on data with sufficient labels. So we want to train with synthetic samples only for the minority or hard attribute labels, whilst not affecting the majority or easy attributes. We implement this idea by simply sampling synthetic samples from them.

[Table 1: top-3 and top-5 recall per attribute group (Texture, Fabric, Shape, Part, Style, All) and category accuracy for FashionNet [6] and the other compared methods.]

As illustrated in Fig. 3, in each iteration we first train a batch of real samples (green dashed line box) with HABP and get the model-estimated error probabilities for all labels. For each label, we update the error probability for the j-th attribute with an exponential moving average, where y_j ∈ {0, 1} is the label for the j-th attribute, S_{j,t}(y_j) is the error of label y_j being updated, S_{j,t−1}(y_j) is the error from the last time y_j appeared, and |y_j − P_ij|^γ is the average error probability of the samples within a training batch whose j-th attribute is labeled y_j. We normalize the recorded errors along each category/attribute by dividing by the sum. They are then used as the probability mass function of a categorical distribution to sample hard labels. The sampled labels are used as inputs to generate the synthetic complementary samples (red dashed line box) with MR-GAN. To make the deactivation-based part more focused on hard labels, we again use an error-weighted average of the deactivation losses over all M × N nodes. The overall objective to minimize is then the HABP loss combined with the deactivation loss, weighted by a tuning parameter λ.

Experiments

We first evaluate the proposed method on DeepFashion-C. Then more experiments on each module are explored to verify the efficacy of the proposed method.

Experiments on DeepFashion-C

Dataset. The 1000 attributes of DeepFashion-C [6] are divided into 5 groups by the authors, characterizing texture, fabric, shape, part, and style. We follow the official split of DeepFashion-C, more specifically, 209,222 training samples, 40,000 validation samples, and 40,000 test samples. The validation set is only used to make sure there is no overfitting. Evaluation Metrics. Two evaluation metrics and the corresponding settings are used: 1) top-k recall/accuracy.
For binary attribute prediction, we calculate top-k recall following [57], which is obtained by ranking the classification scores and determining how many attributes have been matched in the top-k list for each group. For category classification, top-k classification accuracy is calculated; 2) To further prove the effectiveness and flexibility of our approach, we also conduct experiments evaluating the class-balanced accuracy for attributes, which is calculated by averaging the accuracy for both positive and negative labels attribute-wise [29].

[Table 2: Class-balanced accuracy on DeepFashion-C (%).]

Comparisons. For attribute and category classification, we compared our method with recently published results [6,43,44], and with our reproduced results of popular hard-aware methods including OHEM [35], focal loss (FL) [34], and weighted FL for multi-label datasets [48]. Weighted FL weights the loss of each attribute with w_c = e^{−a}, where a is the prior attribute distribution. For OHEM we tried different ratios (0.5, 0.33, 0.17, 0.1) of hard nodes, and selected the best result, obtained with a ratio of 0.17. For FL-based methods, we use the same γ = 1.2 as we used for HABP. As we discussed in Section 3, without an output-dependent normalization term, FL may result in either instability at the beginning stage or too slow learning in the late stage. To avoid a low performance of FL in either case and make the comparison more sensible, we tried different base learning rates for FL. In our experiments, we found that for top-k recall/accuracy, lr = 0.2 gives the best result using FL. Similarly, we ran experiments for weighted FL and report the best results with lr = 0.15. We also tried a commonly used strategy of weighting the binary cross entropy loss by the positive/negative ratio, where w_n is a weight that depends on the positive/negative ratio of a given attribute. Denoting n_pos,j and n_neg,j as the numbers of positive and negative labels for the j-th attribute among N attributes, we tried two ways to set the weight: A: weight each attribute adaptively, w_j = log(n_neg,j / n_pos,j); B: one weight for all attributes, w_j = (Σ_{j=1}^{N} n_neg,j) / (Σ_{j=1}^{N} n_pos,j).

Implementation Details. 1) For top-k recall/accuracy, the base model we used is an ImageNet pre-trained VGG-16 [58]. We replace the fully-connected layers by two 3 × 3 convolutions without padding. The first convolution outputs 2048 channels, and the second outputs 4096 channels. Each convolution is followed by a ReLU activation. Then the 4096 channels are reduced to a vector by average pooling. A dropout with probability 0.5 follows to avoid over-fitting. The final outputs for category and binary attributes are fully-connected layers with 50 and 1000 outputs, respectively. We view attribute classification as 1000 binary classification tasks and category classification as a multi-class task, such that the loss weight we use for category classification is the same as for every single task in attribute classification. We train the network for 15 epochs with mini-batches of 16 images in all experiments. Each image is cropped with the ground truth bounding box and resized to 224 × 224. For the first 6 epochs, the learning rate is 0.01; it is then decreased by a factor of 10 every 3 epochs. γ for HABP is set to 1.2 for all experiments. The loss from synthetic complementary samples is added with a weight of 1e-4, and σ_p for the semantic feature perturbation is set to 1.5. For training efficiency, the deactivation loss is computed every 20 iterations. 2) For class-balanced accuracy, we follow [29] by using ResNet50 [59] as the network. We found that the best result is achieved by using the weighted CE-B described in the last paragraph as the base loss term L_ij in Eq. (4). For these experiments, the weight for the deactivation loss is 0.001 and γ = 0.1. Other settings remain the same as in the experiments for top-k recall/accuracy.
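As an illustration of how these pieces can fit together, below is a minimal PyTorch-style sketch of an HABP-weighted binary cross-entropy and a deactivation term for synthetic complementary samples. The function names, tensor shapes, and the detachment of the weights are our own choices; the hyperparameters γ = 1.2, λ = 1e-4, and T = −4.6 follow the settings quoted in the text.

```python
import torch
import torch.nn.functional as F

def habp_loss(logits, targets, gamma=1.2, eps=1e-8):
    """Hard-Aware BackPropagation loss (sketch): error-weighted average of
    per-node binary cross-entropy over an M x N batch of attribute logits."""
    # Per-node CE loss; P_ij (probability of the target label) is exp(-CE_ij).
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_target = torch.exp(-ce)
    # Hard-aware weights: larger for nodes the model currently gets wrong.
    w = (1.0 - p_target).pow(gamma).detach()
    # Output-dependent normalization keeps the loss scale stable regardless of
    # how many "hard" nodes a batch happens to contain.
    return (w * ce).sum() / (w.sum() + eps)

def deactivation_loss(logits, threshold=-4.6):
    """Deactivation loss (sketch) for synthetic complementary samples: push
    every real-class logit below the activation threshold T."""
    return F.relu(logits - threshold).mean()

# Hypothetical usage: a batch of M = 16 samples and N = 1000 binary attributes.
if __name__ == "__main__":
    logits_real = torch.randn(16, 1000)
    targets = torch.randint(0, 2, (16, 1000)).float()
    logits_fake = torch.randn(16, 1000)   # outputs for generated complementary images
    lam = 1e-4                            # weight on the deactivation term, as in the text
    total = habp_loss(logits_real, targets) + lam * deactivation_loss(logits_fake)
    print(float(total))
```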
Results. As mentioned before, the error probability strongly depends on the number of positive labels. We first verified this and the effectiveness of our approach in reducing the prediction errors of minority data. For the convenience of visualization, we compute the average predicted positive probability σ(ŷ) for all positive labels instead of the error probability. Comparisons between two well-trained models, one with the CE loss and one with our method, on both the train and test sets are illustrated in Fig. 2. From the figure, we can see that our method significantly reduced the errors of positive labels, particularly for minority labels. The evaluation results using top-k recall/accuracy on the test set are summarized in Table 1. From the table, our overall performance on attribute classification outperforms all others, including the current state-of-the-art [44]. The category classification result is also better than most of the others. We also observed that with only HABP, the result surpasses FL, weighted FL, and OHEM, which are methods with a similar spirit. This demonstrates the better stability of HABP. To understand this, consider that if there are only a few hard nodes in a batch, both FL and OHEM will result in a small loss that does not contribute much to the gradients, while HABP constantly backpropagates a stable total loss from hard nodes no matter how many hard nodes are in a batch. Together with HABP, deactivation-based training with synthetic complementary samples further improves the final result, as demonstrated in the lower part of Table 1. Note that our method only used attribute annotations, while in both [6] and [44] landmark annotations are used to enhance the attribute classification. An ablation study is also presented in the lower part of Table 1. By independently activating HABP and synthetic complementary samples, we found that the two techniques both improve over the baseline. We also tried replacing HABP with FL in the pipeline. It achieves a better result than both the baseline and deactivation-based training with synthetic complementary samples alone, yet it is still lower than our proposed pipeline, which further proves the advantage of HABP over FL. The experimental results with class-balanced accuracy in Table 2 further show the flexibility and superiority of the proposed method. We observed that both HABP and the deactivation loss improve the performance by some margin. Unlike the baseline with the CE loss, when using the settings of weighted CE-B, training with synthetic complementary samples contributes more than HABP. We think a topic worth future study is how to optimally combine HABP and Deact for a given task.

HABP vs. FL

In Table 1 we already verified the better performance of our method over other popular choices. In this section, we focus on the comparison between HABP and FL with more experiments. As already mentioned in Section 4.1, we tune the base learning rate for FL to avoid either too slow learning or numerical instability. With the same experimental settings as for top-k recall/accuracy in Table 1, we show the top-3 recall for attributes by both HABP and FL under different base learning rates in Fig. 6.
Compared to FL, HABP demonstrates not only better performance (as shown by the blue dashed line), but also lower sensitivity to the base learning rate. We also plotted the training loss for attributes in the first epoch with the experimental setting of Table 1. From Fig. 7 we can see that the loss calculated by HABP stays prominent as training goes on, while the loss by FL at the beginning is almost two orders of magnitude larger than the loss at the end of the first epoch. This sensitive behavior of FL limits its performance, because too large a learning rate may result in convergence issues, while a smaller learning rate may not be able to learn the parameters in the late stage.

More Experiments of Deact

We further independently verified the validity of the proposed deactivation-based training on MNIST classification in this section. The network we used is a LeNet-5 [60] with ReLU activations. We train on subsets of randomly sampled training data with sizes ranging from 25 to 1000. The total number of training samples is set to 500k.

[Table 4: FID with and without decorrelation regularization on DeepFashion-C and CelebA-HQ; lower is better.]

Effectiveness of Decorrelation Regularization and MR-GAN

We validate the effectiveness of MR-GAN on DeepFashion-C and CelebA-HQ [50]. In training, images at different resolutions are generated by one forward pass, whilst multiple forward passes of the discriminator are needed to generate the corresponding outputs. Due to the strong stability of MR-GAN, we simply use the vanilla GAN loss [37] for training. The proposed decorrelation regularization is added to the loss of G with a weight of 2e-6 for all experiments. For the discriminator, we calculate the mean over the losses at multiple resolutions as the final loss. Implementation Details. We crop each image with the ground truth bounding box provided by DeepFashion-C, and resize it to 224 × 224. The Adam [61] optimizer is used for both G and D with a learning rate of 1e-4. We use 32 as the number of channels for generating the highest-resolution images, and the maximum number of channels is set to 512. The network is trained for 30 epochs with a batch size of 128, on the train set only. To further validate MR-GAN's ability to synthesize higher-resolution images, we also experimented with the CelebA-HQ dataset, which contains 30k face images at 1024 × 1024. We build the network for images from 4 × 4 to 512 × 512. The number of channels for the largest image is set to 12, and the number of channels is limited to at most 384. For CelebA-HQ we train without conditional inputs for 50 epochs, with mini-batches of 64 images. We computed the Fréchet Inception Distance (FID) [62] from 30k images over the last 10 epochs for both datasets, and picked the smallest FID. For DeepFashion-C, the labels are sampled from the prior distribution of the train set. The results are summarized in Table 4, showing that on both datasets the image samples generated with decorrelation regularization are better than those without it. The cosine similarities between the transposed convolution kernels that project the latent noise are calculated and shown in Fig. 8. The correlations between weights are clearly reduced by the decorrelation regularization loss. Sample images generated with MR-GAN are illustrated in Fig. 9(a). In Fig. 9(b), the left shows random samples without the decorrelation loss, where very similar faces are labeled with red boxes, while this is not observed in the right image generated with the decorrelation regularization loss.
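To make the decorrelation regularization concrete, the sketch below computes a squared-cosine-similarity penalty between the filter vectors that project each latent dimension, following the description above. The assumed weight shape (N_Z, N_F, k, k), the averaging over distinct pairs, and the example layer sizes are our own assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def decorrelation_loss(proj_weight):
    """Decorrelation regularization (sketch): penalize the squared cosine
    similarity between the filter vectors that project each latent noise
    dimension into the generator's first feature maps.

    proj_weight: weight of the transposed convolution mapping the N_Z-dim
    latent vector to N_F feature maps, assumed shape (N_Z, N_F, k, k)."""
    n_z = proj_weight.shape[0]
    w = proj_weight.reshape(n_z, -1)          # one flattened filter vector per latent dim
    w = F.normalize(w, dim=1)                 # unit-normalize each vector
    r = (w @ w.t()).pow(2)                    # r[j, j'] = squared cosine similarity in [0, 1]
    off_diag = r - torch.diag(torch.diag(r))  # ignore self-similarity on the diagonal
    return off_diag.sum() / (n_z * (n_z - 1)) # mean over distinct ordered pairs

# Hypothetical usage: a 512-dim latent projected to 512 feature maps with 4x4 kernels.
if __name__ == "__main__":
    proj = torch.nn.ConvTranspose2d(512, 512, kernel_size=4)
    loss = 2e-6 * decorrelation_loss(proj.weight)   # 2e-6 weight taken from the text
    print(float(loss))
```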
Conclusion

We have proposed a pipeline to make use of "hard" data with two techniques, from the view of cost-sensitive learning and the view of re-sampling respectively. It consists of HABP, which effectively and adaptively learns from hard data, and deactivation-based training with synthetic complementary samples, which is more stable to train and easier to implement. HABP focuses on positive minority data, whilst deactivation-based training helps to learn a better decision boundary by deactivating complementary samples for minority data. The two components can either be combined or used separately, depending on the specific metric. Along with the pipeline, we also presented a decorrelation regularization loss for training a multi-resolution GAN. Evaluations are performed on a large-scale fashion dataset and related datasets. Overall, our method achieves state-of-the-art results for attribute classification. At the same time, from the observations in our experiments, we believe that how to optimally combine the proposed components will be a topic worth future study.
The Tristetraprolin Family of RNA-Binding Proteins in Cancer: Progress and Future Prospects Post-transcriptional regulation of gene expression plays a key role in cellular proliferation, differentiation, migration, and apoptosis. Increasing evidence suggests dysregulated post-transcriptional gene expression as an important mechanism in the pathogenesis of cancer. The tristetraprolin family of RNA-binding proteins (RBPs), which include Zinc Finger Protein 36 (ZFP36; commonly referred to as tristetraprolin (TTP)), Zinc Finger Protein 36 like 1 (ZFP36L1), and Zinc Finger Protein 36 like 2 (ZFP36L2), play key roles in the post-transcriptional regulation of gene expression. Mechanistically, these proteins function by binding to the AU-rich elements within the 3′-untranslated regions of their target mRNAs and, in turn, increasing mRNA turnover. The TTP family RBPs are emerging as key regulators of multiple biological processes relevant to cancer and are aberrantly expressed in numerous human cancers. The TTP family RBPs have tumor-suppressive properties and are also associated with cancer prognosis, metastasis, and resistance to chemotherapy. Herein, we summarize the various hallmark molecular traits of cancers that are reported to be regulated by the TTP family RBPs. We emphasize the role of the TTP family RBPs in the regulation of trait-associated mRNA targets in relevant cancer types/cell lines. Finally, we highlight the potential of the TTP family RBPs as prognostic indicators and discuss the possibility of targeting these TTP family RBPs for therapeutic benefits. Introduction In healthy cells, expression of mRNAs for genes associated with cell survival pathways is maintained at normal levels through tight transcriptional and post-transcriptional mechanisms. In contrast, tumor cells possess abnormally stable mRNAs for various categories of pro-survival genes, including protooncogenes, tumor suppressors, and cytokines. A large number of these tumor-associated pro-survival mRNAs possess AU-rich elements (AREs) in their 3 -untranslated regions (3 UTRs). Specific ARE-binding proteins, such as the tristetraprolin family of RNA-binding proteins, are known to post-transcriptionally regulate the expression of these mRNAs. The tristetraprolin family of RNA-binding proteins (TTP family RBPs) are characterized by the presence of one or more CCCH zinc finger domain(s) that contain three cysteine (C) and one histidine (H) residues. There are three human members in this family, including Zinc Finger Protein 36 (ZFP36) or TTP itself, encoded by the ZFP36 gene; Zinc Finger Protein 36 Like 1 (ZFP36L1), encoded by the ZFP36L1 gene; and Zinc Finger Protein 36 Like 2 (ZFP36L2), encoded by the ZFP36L2 gene [1] (Table 1). A fourth member, Zinc Finger Protein 36 Like 3 (ZFP36L3), is restricted to rodents. Orthologues of the TTP family RBPs have been found in many vertebrates with the only exception in birds [2]. Through a highly conserved zinc finger domain, the TTP family RBPs bind to AREs at the 3 UTRs of their target mRNAs in a sequence-and structure-specific manner and catalyze the removal of the poly (A) tail, thus resulting in their mRNA decay. The consensus sequence of AREs in the 3 UTRs of the target mRNAs is UUAUUUAUU, although some variations of this sequence still mediate high affinity binding [3]. All the mammalian TTP family members appear to act similarly in biochemical studies involving RNA binding and decay. 
Interestingly, germline gene knockouts of the three TTP family RBPs in mice resulted in vastly different phenotypes [4][5][6][7]. For instance, while germline deletion of TTP resulted in a systemic inflammatory syndrome [4], germline deletion of ZFP36L1 was embryonically lethal [5], and germline deletion of ZFP36L2 resulted in post-natal mortality within two weeks post-birth due to defects in hematopoiesis [6]. These experiments clearly demonstrated that the TTP family RBPs may have differential target and cell/tissue-type specificity under varying physiological and pathological conditions. Furthermore, the TTP family RBPs may be expressed at different times during pre-and post-natal life. Some studies, including our unpublished observations, have also reported on the redundant functions of the TTP family RBPs [8]. Although the TTP family proteins were discovered more than 20 years ago, most of the studies investigating their role in carcinogenesis have been reported within the last decade. The TTP family RBPs have tumor-suppressor properties, which are directly related to their ability to post-transcriptionally regulate oncogenic mRNAs. For instance, oncogenes, including NOTCH1, MYC, BCL-2, and COX-2, contain 3 UTR AREs, and have been identified as direct TTP family RBP targets [8][9][10][11]. Conversely, TTP expression is also directly suppressed by certain oncoproteins [9]. TTP has also been shown to complement the function of tumor suppressors, such as p53, through downregulation of the oncogenes [12]. In fact, TTP expression is induced by p53 in cancer cells [12]. In rare instances, the TTP family RBPs are also known to directly target tumor suppressors. For instance, TTP has been shown to target the mRNA encoding the tumor suppressor LATS2 [13]. In sum, while increasing evidence suggests a protective role of the TTP proteins in tumorigenesis, some mechanisms seem to exist that counter the beneficial aspects of the TTP family RBPs. Alterations in the expression/activity of the TTP family RBPs have been reported to be associated with multiple cancers [14] (Table 2). Numerous studies have specifically reported a loss of TTP family RBP expression in a variety of cancers [15][16][17][18]. Loss of expression/activity of the TTP family RBPs is expected to result in increased stability of their target mRNAs. Three different mechanisms for loss of expression/activity of the TTP family RBPs have been reported: (1) MicroRNA-mediated regulation; (2) epigenetic silencing via DNA methylation; and (3) modulation of protein activity through post-translational modifications, particularly phosphorylation. Regardless of the mechanisms involved, owing to the regulation of a broad range of target mRNAs concurrently, the TTP family RBPs loss can result in significant changes in gene expression and can have dramatic consequences for the development and progression of cancer. In this review, we will discuss the key molecular traits of cancer that the TTP family RBPs regulate; the molecular mechanisms of the regulation; and the TTP family RBP mRNA targets that have been identified in various cancer cells and tissues. Specifically, the molecular traits of cancers, including uncontrolled cellular proliferation in the absence of external growth signals, resistance to apoptosis, sustained angiogenesis, as well as tissue invasion and metastasis, will be discussed. 
We will also discuss the potential of the TTP family RBPs as prognostic biomarkers and the possibility of targeting the TTP family RBPs for therapeutic purposes. The outstanding questions that remain will also be highlighted. ZFP36L2 ZFP36L2, among other 7 genes, was identified as a prognostic indicator in muscle-invasive bladder cancer [20]. uPA, uPAR, MMP1 TTP is downregulated in advanced breast and prostate cancers and is a negative prognostic indicator in breast cancer patients. VEGF TTP expression inversely correlates with breast cancer aggressiveness and metastatic potential. A synonymous polymorphism in TTP gene in Hs578T cells is significantly associated with lack of response to Herceptin in HER2-positive breast cancer patients [22]. TTP inhibits AHRR expression [23]. AHRR TTP is significantly downregulated in invasive breast carcinomas. TTP expression positively correlates with differentiation in normal and tumor cells [17]. Low TTP-expressing breast cancer and lung adenocarcinoma patients show reduced survival and more aggressive tumors. The TTP-low gene signature is characterized by 20 underexpressed CREB targets [19]. TTP binds to ERα and represses ERα transactivation in breast cancer cells, resulting in reduced proliferation and reduced ability of cells to form tumors in a mouse model [24]. MicroRNA-29a overexpression suppresses TTP and promotes EMT and metastasis in breast cancer cells. miR-29a is upregulated and TTP is downregulated in breast cancer patient samples [25]. TTP inhibits c-Jun transcription by impairing NF-κB p65 nuclear translocation resulting in S-phase cell cycle arrest in breast cancer cells [26]. TTP suppresses mitosis by downregulating a cluster of mitosis ARE mRNAs. Poor breast cancer patient survival is significantly associated with low TTP and high mitotic ARE-mRNAs [27]. Metformin induces TTP expression in breast cancer cells in a Myc-dependent manner and impairs cell proliferation [28]. Chemotherapy induced activation of p38 resulted in phosphorylation/inactivation of ZFP36L1 thus stabilizing Nanog and Klf4 mRNA that resulted in chemotherapy-resistant breast cancer stem cell phenotype [31]. KLF4, NANOG ZFP36L1 is downregulated in human breast tumor samples and three breast cancer cell lines [32]. ZFP36L2 Expression of ZFP36L2, among other genes, significantly associated with the development of bone metastasis in breast cancer [33]. ZEB1, MACC1, SOX9 TTP negatively regulates PD-L1 expression, an immunosuppressive protein that plays a role in evasion of the host immune system [37]. Resveratrol suppressed the proliferation and invasion/metastasis of colorectal cancer cells by activating TTP [39]. Epithelial Ovarian Cancer ZFP36L1 ZFP36L1 was identified as mucinous-type epithelial ovarian cancer risk gene in a GWAS study [40]. Esophageal Squamous cell Carcinoma ZFP36L2 ZFP36L2 was identified as a significantly mutated gene in esophageal squamous cell carcinoma and was validated as a tumor suppressor in this cancer type [41]. Follicular Thyroid Carcinoma ZFP36L2 ZFP36L2 was identified as a metastasis suppressor, NME1 regulated gene in human follicular thyroid carcinoma cell lines [42]. TTP/ZFP36 TTP is significantly reduced in gastric cancer tissues and is associated with invasion, lymph node metastasis, and survival. TTP suppresses IL-33 and inhibits the progression of gastric cancer [43]. IL-33 ZFP36L2 ZFP36L2 is upregulated in gastric cancer samples. Overexpressed ZFP36L2 in gastric epithelial cells promoted cell growth and colony formation. 
Silencing ZFP36L2 reduces NCI-N87 growth in vivo. A tandem duplication hotspot in the super-enhancer region of ZFP36L2 was associated with an increase in ZFP36L2 expression [44]. PIM-1, PIM-2, XIAP Glioma TTP/ZFP36 Hyperphosphorylation/inactivation of TTP by p38-MAPK promoted progression of malignant gliomas by inhibiting its RNA destabilizing function. Induced expression of TTP blocked glioma cell proliferation and survival through rapid decay of IL-8 and VEGF [46]. IL-8, VEGF Resveratrol suppressed cell growth and induced apoptosis in human glioma cells by inducing TTP [47]. ZFP36L1 ZFP36L1 is required for oligodendrocyte-astrocyte lineage transition and thus is an important regulator of gliomagenesis [49]. TNFα ZFP36L1 ZFP36L1 is downregulated in acute myeloid leukemia patient samples [52]. NOTCH1 Thymocyte-specific ZFP36L1 and ZFP36L2 deficient mice develop T cell acute lymphoblastic leukemia by upregulating Notch 1 [8]. TTP was reduced in HCC cells and tissues. Methylation of a single CpG site within the TGF-beta1-responsive region of the TTP promoter was significantly associated with TTP downregulation in both HCC cells and tissues [53]. TTP is downregulated in HCC tumors and hepatic TTP has a tumor suppressive role during tumor progression [54]. Lung Adenocarcinoma TTP/ZFP36 Patients with low-TTP expressing lung adenocarcinoma had decreased survival rates and more aggressive tumors [19]. TTP is significantly downregulated in human lung tumor samples [37]. Human melanoma cell lines express very low TTP. TTP regulates the expression of CXCL8 in melanoma cells [57]. CXCL8 Anti-tumor activity of DM-1, a curcumin analogue in melanoma cells is potentially mediated by TTP, ZFP36L1, and ZFP36L2 among others [58]. Myelofibrosis ZFP36L1 ZFP36L1 is a novel candidate tumor suppressor gene in myelofibrosis. Aberrant enhancer hypermethylation of ZFP36L1 reduced its expression in a myelofibrosis cohort [59]. miR-29a was up-regulated and TTP downregulated in pancreatic cancer tissues and cell lines. miR-29a overexpression correlated with increased metastasis. miR-29a enhanced the expression of pro-inflammatory and EMT markers by suppressing TTP [60]. TTP was markedly reduced in pancreatic cancer samples. Low TTP was associated with age, tumor size, tumor differentiation, post-operative T, N, and TNM stage. Low TTP predicted poor prognosis in pancreatic cancer patients. Over-expression of TTP in pancreatic cancer cells increased apoptosis, decreased cellular proliferation, and reduced expression of PIM-1 and IL-6 [61]. Low-TTP in prostate cancer correlated with increased recurrence. Induced TTP expression reduced cell proliferation, clonogenic growth, and tumorigenic potential of prostate cancer cells [64]. TTP protein was significantly lower in human prostate cancer tissues [65]. Low TTP levels in prostate cancer shorten time to recurrence or metastasis compared with TTP-high tumors [66]. Rectal Cancer TTP/ZFP36 TTP levels in the peripheral blood mononuclear cells were higher in patients with locally advanced rectal cancer that responded to chemoradiation [67]. TTP Family Proteins and Cell Cycle Control Dysregulation of the cell cycle is a characteristic feature of all cancers. The cell cycle involves four distinct phases, i.e., G1 (Gap 1 or first growth phase), S (DNA replication phase), G2 (Gap 2 or second growth phase), and M (mitosis phase). 
Regulatory mechanisms are in place to ensure that cells in the G1 phase that acquire DNA damage are prohibited from entering the S phase, and that errors during DNA replication in the S phase are repaired in the G2 phase before the cells enter the M phase. Several oncogenic processes function by dysregulating the normal controls and checkpoints and forcing the cells into cell cycle progression in a mitogen-independent manner. TTP's role in regulating the cell cycle is linked to its ability to bind and destabilize critical cell cycle regulators. For instance, critical cell cycle regulators, namely c-Myc and cyclin D1, possess 3′UTR AU-rich elements and have been shown to be regulated by TTP [73]. C-Myc is a member of the Myc oncogene family of transcription factors that regulate cellular proliferation, differentiation, metabolism, and apoptosis, and are frequently dysregulated in human cancers [74]. Cyclin D1 is a proto-oncogene that regulates G1-S phase progression and is frequently overexpressed in cancer. Another example of a cell cycle checkpoint protein regulated by TTP is E2F transcription factor 1 (E2F1). E2F1 regulates G1-S phase progression and is frequently overexpressed in many types of human cancers. Aberrant expression of E2F1 is associated with high-grade tumors, metastases, and unfavorable patient prognosis. TTP was shown to post-transcriptionally regulate E2F1, suggesting that TTP controls cellular proliferation through the regulation of E2F1 mRNA stability [75]. Along similar lines, Xu et al. reported that TTP inhibits cellular proliferation in breast tumor cells in vitro and breast tumor growth in vivo by inducing cell cycle arrest at the S phase. TTP was found to inhibit c-Jun expression through blocking the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) p65 nuclear translocation, which resulted in increased expression of Wee1, a regulatory molecule that controls cell cycle transition from the S to the G2 phase [26]. Interestingly, resveratrol (3,5,4′-trihydroxystilbene), a naturally occurring polyphenol compound found in natural sources, including grape skin and red wine, was shown to activate TTP, resulting in downregulation of E2F1, inhibitor of apoptosis 2 (cIAP2), large tumor suppressor kinase 2 (LATS2), and lin-28 homolog A (Lin28), all downstream targets of TTP, thus suppressing the proliferation and invasion/metastasis of colon cancer cells [39]. Hitti et al. performed a systematic analysis of ARE-mRNA expression across multiple cancer types, including invasive breast cancer, and showed that ARE-mRNAs were overrepresented and correlated with TTP expression. A cluster of 11 overexpressed ARE-mRNAs involved in the mitotic (M) phase of the cell cycle was found to negatively correlate with TTP expression. These ARE-mRNAs also physically interacted with TTP, indicating direct regulation. Furthermore, breast cancer patients with a high mean expression of this cluster showed poor survival [27]. These studies suggested an anti-mitotic role of TTP. TTP was also shown to directly bind NEDD9, a protein that has a potential role in prostate cancer cell growth regulation [63]. Chen et al. showed that cyclin-dependent kinase 6 (CDK6) is post-transcriptionally regulated by ZFP36L1. These authors demonstrated that ZFP36L1 functions as a positive regulator of monocyte/macrophage differentiation by regulating CDK6. Accordingly, levels of ZFP36L1 were found to be significantly reduced in acute myeloid leukemia patients [52].
ZFP36L1 has also been suggested to be a post-transcriptional regulator of the cell cycle signaling genes, including E2F1 and CCND1 (cyclin D1). In a recent study, ZFP36L1 was particularly shown to regulate hypoxia signaling through direct binding and degradation of HIF1A. This study indicated a tumor-suppressor role for ZFP36L1 through regulation of hypoxia, cell cycle, and angiogenesis [15]. Furthermore, these authors found that ZFP36L1 is epigenetically silenced through hypermethylation of the second exon and is downregulated in several patient cohorts of bladder and breast cancers. Functionally, silencing ZFP36L1 enhanced tumor cell growth while overexpression of ZFP36L1 suppressed cell proliferation and migration in bladder and breast cancer cell lines [15]. Finally, both ZFP36L1 and ZFP36L2 were shown to inhibit cellular proliferation through downregulation of cyclin D expression, resulting in cell cycle arrest at the G1 phase [76]. All these studies indicate that TTP, ZFP36L1, and ZFP36L2 are critical regulators of the cell cycle. TTP Family Proteins and Control of Apoptosis One of the hallmark characteristics of cancer is to evade apoptosis or resist cell death [77]. Apoptosis occurs through two distinct pathways: the intrinsic (mitochondrial) pathway and the extrinsic (death receptor) pathway. While the two pathways are distinct, both involve activating caspases in the final steps. The TTP family RBPs modulate tumor cell apoptosis by directly regulating the apoptotic mediators within both pathways. Johnson et al. demonstrated that TTP results in apoptotic cell death in various cell types, likely through direct regulation of the TTP targets [78]. This was one of the earliest studies that indicated a role of TTP proteins in cell survival and apoptosis. The authors suggested that TTP was unique and somewhat different from the two other family members, ZFP36L1 and ZFP36L2, because TTP, but not ZFP36L1 and ZFP36L2, could also sensitize the cells to apoptosis by inducing TNFα. However, it remained unclear whether TTP was inactive upon ectopic expression in these studies. Hydroquinone, an aromatic organic compound, induces apoptosis in human leukemia U937, human leukemia HL-60, and Jurkat cells through a TTP-dependent mechanism. Mechanistically, this study showed that TTP phosphorylation and inactivation through the p38 MAPK pathway resulted in increased TNFα-induced apoptosis [50]. Along similar lines, albendazole, a microtube-targeting anthelmintic, was demonstrated to induce apoptosis in human leukemia cells through the p38-TTP-TNFα axis [51]. Conversely, resveratrol was able to induce TTP expression in human glioma cells that resulted in apoptosis and suppression of cell growth through destabilization of the urokinase plasminogen activator (uPA) and urokinase plasminogen activator receptor (uPAR). Both, uPA and uPAR are overexpressed in glioblastomas and play a role in invasion [47]. PIM1, an oncogenic serine-threonine kinase, functions by repressing apoptosis and is a direct target of TTP [61,79]. In fact, ectopic TTP expression impaired the viability and invasiveness of glioblastoma multiforme cancer cells by destabilizing the PIM1, PIM2, and X-linked inhibitor of apoptosis proteins (XIAP) in these cells [45]. Park et al. recently reported that TTP enhances cisplatin sensitivity in head and neck squamous cell carcinoma (SCCHN) cells by reducing the levels of BCL-2, an anti-apoptotic protein, which is overexpressed in cancer and confers resistance to cisplatin [70]. 
While, earlier, Lee et al. had showed that ZFP36L1 enhanced cisplatin sensitivity in SCCHN cells by inhibiting the human inhibitor of apoptosis protein-2 (cIAP2) and resulting in increased caspase-3 activity [71]. ZFP36L1 has also been shown to mediate its pro-apoptotic effects on malignant B-cells by regulating BCL-2 [10]. The BCL-2 family of proteins control the permeabilization of the mitochondrial outer membrane, thus regulating commitment to apoptosis. The role of ZFP36L2 in modulating apoptosis remains unexplored. Together, all these studies indicate an important role of the TTP family RBPs in regulating apoptosis. TTP Family Proteins and Regulation of Pro-Tumorigenic Inflammatory Mediators TTP is a known regulator of inflammation. This critical function of TTP first became evident when germline TTP knockout mice were generated [4]. TTP knockout mice appeared normal at birth; however, within 2-3 weeks, they developed a systemic inflammatory syndrome characterized by cachexia, arthritis, dermatitis, conjunctivitis, myeloid hyperplasia, and autoimmunity [4]. This phenotype was largely attributed to overexpression of TNFα, a potent pro-inflammatory cytokine, as evidenced by the prevention of the development of the syndrome by anti-TNFα antibody injections [4]. Further, it was demonstrated that TTP binding to AREs within the 3 UTR of TNFα mRNA results in an increased turnover, an effect that was abrogated in TTP deficiency [80]. Subsequent studies showed that a number of other pro-inflammatory cytokines and chemokines, including IL-23, IL-17, IL-1β, CXCL1, and CXCL2, are also directly regulated by TTP [81,82]. These and other mRNA targets of the TTP family RBPs are reviewed elsewhere [83]. Furthermore, germline overexpression of TTP using its endogenous promoter resulted in protection against immune-mediated inflammatory diseases, including arthritis, psoriasis, and autoimmune encephalomyelitis [84]. These critical studies indicate that TTP directly regulates inflammation and that loss of TTP expression/activity results in enhanced inflammation. Inflammation is also a critical component of the tumor progression and tumor microenvironment, and many tumors are known to arise at the site of chronic inflammation. TTP is a well-established post-transcriptional regulator of pro-inflammatory cytokines and chemokines and, due to this function, TTP is an important modulator of tumor development and progression. Twizere et al. showed that TTP physically interacts with viral protein Tax, thus reverting the inhibition of pro-inflammatory cytokine TNFα [48]. Sawaoka et al. showed an interesting mechanism of regulation of cyclooxygenase-2 (COX-2) by TTP in colon adenocarcinoma cells. These cells express two distinct transcript variants of COX-2: a full length, 4465nt mRNA; and a truncated 2577nt polyadenylation variant, in which the terminal 1888 nt 3 untranslated region is absent. During cellular growth, the levels of the full-length transcript reduced, whereas the levels of the truncated variant increased. Most importantly, TTP levels were inversely correlated with the levels of the full-length transcript, and TTP transfection resulted in a reduction in the levels of the full-length transcript, indicating TTP regulation of COX-2 in these cells [11]. COX-2 is a product of an immediate early gene that is induced by growth factors and cytokines and plays a role in cellular proliferation [85][86][87]. 
COX-2 expression is increased in a number of cancers, including human colorectal [88], esophageal [89], pancreatic [90], lung [91], prostate [92], and mammary [93]. Similarly, TTP was shown to destabilize interleukin 8 (IL-8) and vascular endothelial growth factor (VEGF) mRNAs in malignant glioma cells, resulting in a dose-dependent decrease in cellular proliferation, loss of cell viability, and apoptosis. TTP was, in fact, ubiquitously expressed in primary gliomas and benign astrogliotic tissues; however, hyperphosphorylated/inactive TTP was present in malignant glioma tumors. It is generally accepted that the TTP activity is repressed upon phosphorylation [46]. Al-Souhibani et al. showed that TTP expression is significantly lower in invasive breast cancer cells compared to normal breast cells, and that the genes involved in cellular growth, invasion, and metastasis, namely matrix metalloproteinase 1 (MMP1), urokinase-type plasminogen activator (uPA), and urokinase plasminogen activator receptor (uPAR), were directly regulated by TTP in breast cancer cells [21]. Along the same lines, TTP was found to be weakly expressed in melanoma cells. These cells express high levels of the C-X-C motif chemokine ligand 8 (CXCL8), which plays a role in cellular growth and angiogenesis. These authors further demonstrated that extracellular signal-regulated kinase (ERK) inhibition restored TTP, which destabilized and inhibited CXCL8, suppressed cellular proliferation, and induced apoptosis [57]. TTP was also shown to directly regulate hypoxia-inducible factor 1 (HIF-1), a factor critically required for survival in hypoxic conditions, indicating that a low TTP poses a significant advantage to cancer cells by increasing HIF-1 and allowing adaptation to hypoxia [94]. Interestingly, latent membrane protein 1, a viral oncoprotein, was found to significantly enhance HIF-1A expression in nasopharyngeal carcinoma cells by inhibiting TTP [95]. In yet another study, TTP was shown to post-transcriptionally regulate interleukin 23 (IL-23) in mouse colon cancer cells [35]. IL-23 is highly expressed in many tumors and its levels correlate with tumor progression. Squamous cell carcinoma of the head and neck (SCCHN) patients with low interleukin 6 (IL-6) and high MMP9, or with high IL-6 and low MMP9, were found to have the poorest outcomes followed by patients with both high IL-6 and high MMP9. In comparison, patients with low IL-6 and low MMP9 had the best outcomes with respect to tumor recurrence, surgery, or death. Functionally, TTP suppression enhanced cellular invasiveness in vitro in an oral-cancer-equivalent 3D model and in vivo in chick chorioallantoic membrane models, resulting from increased secretion of IL-6, MMP2, and MMP9 [68]. TTP was found to be remarkably reduced in gastric cancer and inversely correlated with interleukin 33 (IL-33) expression. Furthermore, low TTP expression contributed to gastric cancer progression and was associated with depth of invasion, lymph node metastasis, advanced TNM stage, and poor survival. Conversely, elevated TTP expression was shown to inhibit the proliferation, migration, and invasion of gastric cancer cells through suppression of IL-33, a tumor promoting cytokine [43]. Similar results were found in human glioma tissues and cells where TTP was significantly downregulated and associated with reduced survival. 
In this particular study, TTP inversely correlated with IL-13 levels in glioma tissues, and TTP inhibited the growth, migration, and invasion of glioma cells through downregulation of IL-13 and attenuation of the PI3K/Akt/mTOR pathway [96]. TTP knockout mice have increased numbers of cytotoxic T cells due to direct regulation of interleukin 27 (IL-27), a CD8+ T-cell regulatory cytokine. Interestingly, in a mouse mammary gland tumor model, TTP knockout mice showed reduced tumor growth due to increased tumor-infiltrating CD8+ T cells [29]. Kratochvill et al. showed that TTP is constitutively highly expressed in tumor-associated macrophages. However, the effects of TTP on mRNA stability were blocked by the constitutively active p38 in the tumor microenvironment, which drove the production of inflammatory cytokines [97]. A very elegant study by Coelho et al. showed that innately immunoresistant RAS-mutant tumors are characterized by the upregulation of the immunosuppressive protein programmed death-ligand 1 (PD-L1) through RAS-MEK-MK2-induced TTP phosphorylation/inactivation, resulting in increased PD-L1 mRNA stability. RAS activation was associated with PD-L1 upregulation in human lung and colon adenocarcinomas [37]. TTP has also been identified as one of the eight genes functionally related to the NF-κB pathway that were highly downregulated in lethal prostate cancer [63]. Gambogic acid, a polyprenylated xanthone, was demonstrated to significantly inhibit cancer stem cells in colorectal carcinoma, both in vitro and in vivo, by inhibiting EGFR-ERK signaling, resulting in upregulation of TTP [38]. TTP has also been demonstrated to be a post-transcriptional regulator of aryl hydrocarbon receptor repressor (AHRR) in breast cancer cells [23]. Similar to TTP, ZFP36L1 phosphorylation and inactivation by the p38-MK2 axis has been shown to stabilize Nanog and Klf4 in triple-negative breast cancer cells, resulting in a breast cancer stem cell phenotype, a feature of chemotherapy resistance in triple-negative breast cancer [31].
TTP Family Proteins and Cellular Senescence
Cellular senescence is characterized by cells undergoing growth arrest in response to a wide variety of extrinsic and intrinsic insults, including DNA damage, loss of telomeres, and oncogenic activation. Cells undergoing senescence secrete a collective set of proteins that includes cytokines, chemokines, and growth factors, among others. While cellular senescence may be beneficial in maintaining tissue homeostasis under basal conditions, it is potentially detrimental in aging. Cellular senescence has dynamic roles in cancer: beneficial in tumor cells by improving therapeutic outcomes, and detrimental in non-tumor cells by causing relapse and secondary tumors. Selected studies have shown a role for the TTP family proteins in regulating senescence. For instance, human papilloma virus-18 (HPV-18)-positive HeLa cells were used to show that TTP promoted cellular senescence through rapid decay of E6-associated protein (E6-AP) mRNA, resulting in p53 stabilization and inhibition of human telomerase reverse transcriptase gene (hTERT) transcription. E6 is a viral protein that HPV uses for cellular transformation. Association of E6 with E6-AP facilitates cell transformation by p53 degradation and activation of hTERT. This study linked TTP-mediated post-transcriptional regulation to HPV-associated cervical carcinogenesis [34].
In fact, TTP was found to be consistently absent in cervical carcinomas compared to normal human cervices. These studies also suggested a tumor-suppressive role of TTP in cervical cancer. In a recent study, ZFP36L1 was demonstrated to be a key regulator of cellular senescence by directly regulating components of the senescence-associated secretory phenotype (SASP) through post-transcriptional regulation. In this study, ZFP36L1 was found to signal downstream of mTOR in regulating SASP mRNAs, and phosphorylation inhibited ZFP36L1 activity [98]. The role of ZFP36L2 in cellular senescence remains undetermined.
TTP Family Proteins and Regulation of Angiogenesis
One of the hallmarks of cancer is angiogenesis, the formation of new blood vessels, which are required for supplying nutrients and overcoming a hypoxic microenvironment in rapidly growing tumor masses. VEGF is an angiogenic cytokine that plays a key role in tumor angiogenesis. VEGF and IL-6 are markedly increased in squamous cell carcinoma of the head and neck (SCCHN) and are associated with poor survival. GALR2, a pro-survival G-protein-coupled receptor, promoted angiogenesis via p38-mediated phosphorylation/inactivation of TTP, resulting in increased VEGF and IL-6 levels both in vitro in SCCHN cancer cells and in vivo in murine tumor xenografts and chorioallantoic membrane models [69]. Importantly, ZFP36L1 has been specifically shown to post-transcriptionally regulate VEGF [99]. Interestingly, a single intratumoral injection of a ZFP36L1 fusion protein was shown to be effective at decreasing VEGF, acidic FGF, TNFα, IL-1α, and IL-6, as well as at reducing tumor growth [100]. Both TTP and ZFP36L1 have been demonstrated to regulate HIF1α, a member of the family of transcription factors that are the primary effectors of the adaptive response of tumor cells to hypoxia [15,101]. The role of ZFP36L2 in regulating modulators of angiogenesis has not been explored yet.
TTP Family Proteins and Epithelial Mesenchymal Transition
Epithelial-mesenchymal transition (EMT) is a reversible process whereby epithelial cells transition to the mesenchymal phenotype by repressing epithelial-specific traits, i.e., intercellular adhesion and proliferation, and acquiring mesenchymal traits, i.e., migration and invasion. EMT is a crucial step in metastasis and drug resistance, and epithelial tumors are well known to undergo EMT. TTP has been shown to directly regulate EMT regulators, including ZEB1 (zinc finger E-box binding homeobox 1), SOX9 (sex-determining region Y box 9), and MACC1 (metastasis associated in colon cancer 1), all of which are known to be downregulated in colorectal carcinomas (CRC). Re-expressing TTP reverted the EMT phenotype in this study [36]. Two other EMT regulators, TWIST1 (twist-related protein 1) and SNAIL1 (zinc finger protein snail 1), are also known targets of TTP [36]. Interestingly, TTP was identified as a target of a microRNA, miR-29a, and miR-29a-mediated downregulation of TTP was associated with EMT and metastasis in breast cancer [25]. Rataj et al. demonstrated that ZFP36L1 was markedly suppressed in breast cancer cells and patient tissues and that a derivative of ZFP36L1 fused to a cell-penetrating peptide inhibited proliferation, migration, invasion, and anchorage-independent growth in vitro and impaired tumor growth and EMT markers, including Snail, Vimentin, and N-cadherin, in vivo [32].
Interestingly, ZFP36L1 was identified as a key regulator of the neural progenitor cell-fate transition from oligodendrocytes to astrocytes, and through this process it is a key regulator of processes such as myelination and gliomagenesis [49]. This study showed that while the loss of ZFP36L1 in the neural lineage resulted in myelination deficits due to the oligodendrocyte-astrocyte switch, in tumorigenesis this process was in fact beneficial by preventing gliomagenesis, thus enhancing survival. The role of ZFP36L2 in EMT has not been explored yet.
TTP Family Proteins and Tumor Suppressor and Oncogenic Roles
Tumorigenesis is a consequence of mutations in oncogenes and tumor-suppressor genes that frequently result in either an overexpression of oncogenes or loss of tumor suppressors. A key study in 2012 discovered TTP's important role as a tumor suppressor. The MYC oncoprotein was found to directly suppress TTP transcription, and TTP repression appeared to be a hallmark of malignancies with MYC involvement. Furthermore, enforced expression of TTP impaired the development of lymphoma and abolished the maintenance of the malignant state. ZFP36L1 was also repressed by MYC; however, it was not suggested to be a tumor suppressor in this model [9]. TTP has also been shown to function as a tumor suppressor through downregulation of estrogen receptor alpha (ER-α) transactivation, resulting in reduced cellular proliferation and reduced potential of the cells to form tumors in a mouse model. In this study, TTP was shown to be associated with ER-α and was recruited to the promoter region, indicating that TTP may be a bona fide nuclear receptor corepressor [24]. TTP has also been shown to be downregulated in hepatocellular carcinoma (HCC) cells and tumors through an epigenetic mechanism that involves hypermethylation of a single CpG site within the TGFβ1-responsive region of the TTP promoter. The epigenetic inactivation of TTP resulted in an increased half-life of c-Myc, causing cancer cells to undergo selective resistance to TGFβ1 antiproliferative signaling [53]. TTP is significantly downregulated in liver tumors. During tumor progression, TTP functions as a tumor suppressor and inhibits proliferation and migration, reduces expression of several oncogenes, and increases chemosensitivity [54]. The anti-proliferative properties of metformin, an anti-diabetic drug, in breast cancer cells were mediated by induction of TTP through c-Myc downregulation [28]. Interestingly, ZFP36L1 was found to be downregulated in myelofibrosis due to enhancer hypermethylation within its second exon, which conversely led to an increased expression of its target mRNAs. Functionally, ZFP36L1 expression induced apoptosis in leukemia cells, indicating a tumor-suppressor role of ZFP36L1 in myelofibrosis [59]. Similar to TTP and ZFP36L1, ZFP36L2 also functions as a tumor suppressor. Hypermethylation of a super-enhancer site in ZFP36L2 resulted in epigenetic silencing in a large dataset of whole-exome-sequenced esophageal squamous cell carcinoma (SCC) tissues. This phenomenon was also found in other SCCs analyzed from the cancer genome atlas (TCGA) and resulted in reduced mRNA expression in all SCCs [41]. The strongest evidence for the role of ZFP36L1 and ZFP36L2 as tumor suppressors came from studies done by Hodson et al. [8].
These authors demonstrated that loss of both ZFP36L1 and ZFP36L2 in mouse thymocytes resulted in the development of T cell acute lymphoblastic leukemia (T-ALL) due to stabilization of an oncogenic transcriptional regulator, Notch1 [8]. Interestingly, both ZFP36L1 and ZFP36L2 were found to function in a redundant manner in this study. Recently, a genomic mutation in ZFP36L1 was identified as a potential driver of tumorigenesis in patients with concomitant diffuse large B-cell lymphoma and hepatitis B virus (HBV) infection [55]. One study in particular suggested an oncogenic role for ZFP36L2. The authors showed that tandem-duplication-induced amplification of super-enhancers was associated with an increase in ZFP36L2 expression in ~10% of gastric cancers. Functionally, ZFP36L2 promoted the growth of gastric cancer cells in this study [44]. Together, these studies indicate that all three members of the TTP family of RBPs function as tumor suppressors in various types of cancers.
TTP Family Proteins and Regulation of Tumor Metastasis
Tumor metastasis is defined by cancer cells acquiring features of motility, invasion, plasticity, and the ability to colonize secondary organs/tissues, and is the primary cause of cancer morbidity and mortality. Interestingly, microRNA-29a (miR-29a) was found to promote tumor progression and invasion by downregulating TTP both in vitro and in vivo in pancreatic cancer. miR-29a was upregulated and TTP was downregulated in pancreatic cancer cells and tissues [60]. In another study, two main clusters of breast cancers that differed in their lymph node status were identified from the breast cancer serial analysis of gene expression. Interestingly, ZFP36L1 was upregulated only in lymph-node-positive primary breast cancer, indicating that patterns of gene expression in primary tumors at the time of surgical removal could discriminate those that have lymph node metastasis [30]. Finally, ZFP36L2 was identified as a gene regulated by the metastasis suppressor NME1 in a screen of two metastatic cancer cell lines: melanoma and follicular thyroid carcinoma [42].
TTP Family Proteins as Potential Biomarkers
A very elegant study in 2010 showed that TTP is widely suppressed in a number of human cancers, including those of the thyroid, lung, ovary, uterus, and breast, as well as in a number of cancer cell lines, including those of lung and cervical cancer. Here, suppressed TTP was a negative prognostic indicator in breast cancer, where more advanced tumors exhibited the weakest TTP expression [16]. Moreover, restoring TTP expression in cancer cells resulted in suppression of the tumorigenic phenotypes, while reducing TTP levels promoted the neoplastic phenotype [16]. Another similar study showed that among breast cancer types, higher-grade tumors showed the weakest TTP expression at the protein level compared to low-grade tumors, suggesting that TTP protein levels correlate with prognosis [17]. TTP has also been suggested as a promising biomarker for prostate cancer risk assessment. TTP expression was markedly reduced in metastatic prostate cancer compared to primary tumors [64]. Men with low-TTP-expressing primary prostate cancer had significantly increased chances of biochemical recurrence in this study. Induction of TTP inhibited the growth, proliferation, and tumorigenic potential of prostate cancer cells in a mouse xenograft model of prostate cancer [64].
Another study that also investigated prostate cancer showed that low-TTP tumors had faster recurrence or metastasis than high-TTP tumors [66]. Additionally, the shorter time to recurrence in low-TTP tumors was more pronounced in low-grade tumors. This study suggested that TTP is a promising prostate cancer biomarker for predicting which low-grade radical prostatectomy prostate cancer patients will have poor outcomes [66]. Another study similarly showed low TTP expression in prostate cancer compared to non-cancerous tissues [65]. Low TTP expression in tumors versus adjacent normal tissues was also shown in pancreatic cancer [61]. Here, TTP expression was almost negative in poorly differentiated cancers, weakly positive in moderately differentiated cancers, and highly positive in well-differentiated pancreatic cancers. Low TTP expression was associated with age, tumor size, tumor differentiation, postoperative T stage, postoperative N stage, and TNM stage. Low TTP expression correlated with low patient survival rates and poor prognosis, suggesting that TTP could act as a prognostic indicator in pancreatic cancer [61]. Components of the AP-1 transcription factor, including JUN, JUNB, FOS, and FOSB, were enriched in association with TTP as a conserved co-regulated group of genes and were significantly downregulated in breast, liver, lung, kidney, and thyroid carcinomas. Patients with low expression of these genes displayed poor prognosis [18]. Furthermore, TCGA datasets for breast cancer, lung adenocarcinoma, lung squamous cell carcinoma, and colon adenocarcinoma revealed a shared signature of 50 genes that were differentially expressed between low- and high-TTP-expressing tumors [19]. The TTP-low gene signature was also a feature of several other cancers, including pancreatic, bladder, and prostate cancers from non-TCGA datasets. Low TTP expression was a poor prognostic indicator in breast cancer and lung adenocarcinoma patients and was associated with decreased survival and more aggressive, necrotic tumors. A TTP-low signature was characterized by perturbation of several inflammatory pathways in this study [19]. ZFP36L1 was found to be one of the genes with variants that were associated with an increased risk of subtype-specific epithelial ovarian cancers [40]. ZFP36L2 was overexpressed in pancreatic ductal adenocarcinoma (PDAC) tissues and cells as a result of suppression of microRNA-375, indicating the involvement of ZFP36L2-regulated pathways in PDAC pathogenesis. This was further supported by silencing ZFP36L2 in vitro, which inhibited cancer cell aggressiveness in PDAC cells. High ZFP36L2 expression also predicted shorter survival in PDAC, indicating that ZFP36L2 expression could be used as a prognostic marker in PDAC [62]. ZFP36L2 was also identified as a potential candidate for prediction of bone metastasis of breast cancer [33]. Finally, ZFP36L2 has been shown to be a recurrence-associated gene in bladder cancer [20].
TTP Family Proteins and Response to Treatment
TTP family proteins have also been associated with response to treatment in cancer in a few selective studies. Griseri et al. showed the presence of a synonymous polymorphism (rs3746083) in the TTP gene in an aggressive TTP-negative breast cancer cell line [22]. Interestingly, this polymorphism did not change the corresponding amino acid but affected protein translation and was significantly associated with a lack of response to Herceptin treatment in HER2-positive breast cancer patients [22].
Whole-genome microarray profiling of peripheral blood mononuclear cells of locally advanced rectal cancer patients revealed that, among other genes, TTP was differentially expressed between responders and non-responders to chemoradiotherapy [67]. Finally, the antitumor activity of the curcumin analogue DM-1 in melanoma was shown to be mediated by multiple targets, which included TTP, ZFP36L1, and ZFP36L2, among others [58].
Outstanding Questions
A few outstanding questions remain regarding the role of the TTP family RBPs in carcinogenesis. For instance, much of our current knowledge regarding the role of TTP family RBPs in human cancer comes from gene and protein expression data on patient tumor samples. Therefore, it is not clear whether the loss of TTP family RBPs is an early event that initiates tumor development or is a consequence of tumor development. Hence, there is a pressing need to develop transgenic animal models to understand the mechanisms by which the TTP family RBPs modulate the initiation and progression of cancer.
Conclusions
Together, the studies discussed in this review indicate that the TTP family RBPs are critical regulators of multiple cancer traits (Figure 1). Given the increasing number of cancers in which TTP family proteins have been reported to be dysregulated, it appears that these proteins are common and important regulators of many, if not all, cancer types. It also appears that the TTP family RBPs are frequently silenced, with loss of function, in a majority of cancers, indicating their role as tumor suppressors. Silencing of TTP family RBP expression occurs at the epigenetic, post-transcriptional, and post-translational levels. Moreover, TTP family proteins appear to regulate multiple pathways involved in cancer development and progression, and their loss is associated with poor prognostic outcomes for patients. While conventional cancer therapies target single genes or pathways at one time, therapeutically targeting TTP family RBPs, the master regulators of multiple cancer-relevant genes, would target multiple cancer-relevant pathways simultaneously. Therefore, targeting these proteins and restoring their function may represent an effective and novel therapeutic approach.
Figure 1 (caption, in part): For example, it is possible that normal cells undergo loss of TTP family RBP expression under the influence of an unknown stressor, thus driving tumor initiation or complementing the effects of the driver genes. Conversely, it is possible that loss of TTP family RBP expression occurs at subsequent stages after initial tumor formation, thus accelerating tumor progression and promoting tumor aggressiveness. Broken bidirectional arrows with red question marks represent this possibility.
Author Contributions: S.P. conceived the article; S.P. and Y.S. wrote the article; J.C. prepared Table 2 and edited the article; Y.S., J.C., and S.P. proofread and approved the submitted version. All authors have read and agreed to the published version of the manuscript.
Funding: This work was supported by grants from the National Institutes of Health and Louisiana Board of Regents.
Conflicts of Interest: The authors have no conflict of interest to report.
Photodynamic Therapy for Atherosclerosis
Atherosclerosis, which currently contributes to 31% of deaths globally, is of critical cardiovascular concern. Current diagnostic tools and biomarkers are limited, emphasizing the need for early detection. Lifestyle modifications and medications form the basis of treatment, and emerging therapies such as photodynamic therapy are being developed. Photodynamic therapy involves a photosensitizer selectively targeting components of atherosclerotic plaques. When activated by specific light wavelengths, it induces localized oxidative stress aiming to stabilize plaques and reduce inflammation. The key advantage lies in its selective targeting, sparing healthy tissues. While preclinical studies are encouraging, ongoing research and clinical trials are crucial for optimizing protocols and ensuring long-term safety and efficacy. The potential combination with other therapies makes photodynamic therapy a versatile and promising avenue for addressing atherosclerosis and associated cardiovascular disease. The investigations underscore the possibility of utilizing photodynamic therapy as a valuable treatment choice for atherosclerosis. As advancements in research continue, photodynamic therapy might become more seamlessly incorporated into clinical approaches for managing atherosclerosis, providing a blend of efficacy and limited invasiveness.
Introduction
In 2016, the World Health Organization (WHO) reported that cardiovascular disease accounts for 31% of all deaths worldwide, representing 17.9 million deaths. The main factor in cardiovascular disease is atherosclerosis, whose incidence is trending upwards. Therefore, atherosclerosis is a major life-threatening cardiovascular condition causing death from myocardial infarction and stroke [1,2]. Generally, the mechanisms of lipid metabolism contribute to atherosclerosis. The first study of atherosclerotic plaque took place in 1908 by Ignatowski, who studied plaques in the aortic walls of rabbits. Prior to this, in the 19th century, Rudolf Virchow described arterial inflammation [3][4][5]. Atherosclerosis is defined as a progressive inflammatory disease characterized by the accumulation of lipids in the walls of arterial vessels, in which atherosclerotic plaques are deposited in the inner walls of the arteries. Atherosclerotic plaques lead to progressive lipid deposition, subsequent accumulation of T cells and macrophages in the arterial endothelium, and narrowing of the vessel lumen [6][7][8][9][10][11]. Current therapeutic strategies aim to modify risk factors, control inflammation, and stabilize vulnerable plaques. Statins, antiplatelet agents, and lifestyle interventions constitute the cornerstones of atherosclerosis management. Emerging research focuses on innovative approaches, including targeted drug delivery and immunomodulation, offering glimpses of a future where personalized and precise interventions may redefine atherosclerosis treatment [12,13].
The vascular endothelial dysfunction of arteries is a serious factor in atherosclerotic symptoms. Atherosclerotic plaques containing apolipoprotein B-induced lipoprotein deposits from plasma can break off and cause thrombi. Apolipoprotein B triggers inflammatory processes. Some cytokines can cause atherosclerotic plaques to rupture and lead to the complete closure of the vessel lumen. Atherosclerotic plaques usually form asymptomatically at a young age and progress with age, giving a variety of symptoms. The early detection of atherosclerotic plaques is a priority [14,15]. Currently, biomarkers are used to verify atherosclerotic lesions. Most of them are limited to the myocardium only. The main biomarkers under consideration are myoglobin, troponin T, troponin I, and the muscle fraction of creatine kinase. Researchers also emphasize the importance and significant influence of myeloperoxidase, pregnancy plasma protein A, albumin, lipoprotein-associated phospholipase A2, interleukin 18, and CD40 receptor ligand [16][17][18]. Imaging diagnostics, primarily angiography and Doppler ultrasound, are also used. Many genetic and environmental factors are considered to be causes of atherosclerosis [19]. The primary risk factors are hypertension, diabetes, obesity, smoking, hypercholesterolemia, physical inactivity, chronic stress, age, and unhealthy diets. Scientists agree that all risk factors contribute to the pathogenesis of atherosclerotic lesions [20][21][22][23][24][25].
Pathogenesis of Atherosclerosis
Atherosclerotic plaque formation (Figure 1) emerges as a complex and dynamic process, weaving together molecular, cellular, and systemic influences. Understanding the intricacies of plaque development is paramount in the pursuit of effective prevention and intervention strategies [26]. Atherosclerosis initiates with endothelial dysfunction, a pivotal event often triggered by risk factors such as hyperlipidemia, hypertension, and smoking. The disruption of the endothelial layer leads to the exposure of the underlying vascular smooth muscle cells to circulating blood constituents, marking the commencement of plaque formation [27]. Reduced bioavailability of nitric oxide (NO), a vasodilator, and increased expression of adhesion molecules like vascular cell adhesion molecule-1 (VCAM-1) and intercellular adhesion molecule-1 (ICAM-1) create a proinflammatory and proatherogenic endothelial environment [28]. Central to atherosclerosis is the accumulation of low-density lipoprotein (LDL) within the subendothelial space [29]. Oxidized LDL becomes a beacon for monocytes, attracting them to the site of injury, where they traverse the endothelium. Hypercholesterolemia is one of the disorders that stimulate the production of monocytes from the bone marrow [30]. Monocytes arise from myeloid progenitor cells in the bone marrow [31]. The accumulation of LDL in the bloodstream leads to its infiltration across the endothelium into the intima, primarily facilitated by transcytosis involving the receptors SR-B1 and ALK1. This process occurs in conjunction with caveolae, emphasizing the significance of caveolae-dependent LDL uptake in LDL transcytosis [32]. Trapped LDL particles undergo oxidation in the subendothelial space, promoting atherosclerotic plaque development. Inflammation, oxidative stress, and enzymatic activity contribute to LDL oxidation, resulting in minimally modified (mmLDL) or extensively oxidized (oxLDL) forms with distinct biological activities. Extensively oxidized LDLs, unrecognizable by LDL receptors, contribute to inflammation and
atherosclerosis through scavenger receptor uptake [33][34][35][36]. Endothelial activation, triggered by proinflammatory stimuli, leads to phenotypic modulation, termed type II activation. This sustained activation induces a complex inflammatory response involving increased NF-kB production, upregulation of adhesion molecules, chemokines, and prothrombotic mediators, fostering monocyte recruitment into the intima [37]. Upon infiltration, monocytes differentiate into macrophages and engulf oxidized LDL, forming foam cells rich in cholesterol esters. The accumulation of cholesterol crystals activates the NLRP3 inflammasome, intensifying inflammation [38][39][40][41]. Macrophages can stimulate or suppress inflammation. This mechanism is essential in the functioning of the body, especially in fighting infections. The engorged macrophages, known as foam cells, constitute a hallmark of early atherosclerotic lesions. Foam cells release proinflammatory cytokines, including tumor necrosis factor-alpha (TNF-α) and interleukin-1 (IL-1) [42]. These cytokines propagate inflammation, attracting additional immune cells and perpetuating the atherogenic process. This cascade of events recruits additional immune cells (Figure 2), further promoting inflammation within the arterial wall [43][44][45]. The inflammatory milieu contributes to the progression of atherosclerosis by fostering oxidative stress and perpetuating endothelial dysfunction. Activated macrophages release growth factors such as platelet-derived growth factor (PDGF), stimulating the migration of vascular smooth muscle cells (VSMCs) from the media to the intima [46]. VSMCs undergo proliferation, contributing to the formation of a fibrous cap overlying the atherosclerotic plaque. The cap, composed of VSMCs and extracellular matrix (ECM), acts as a subendothelial barrier, preventing the exposure of the necrotic core to circulating coagulation factors. Persistent mitogen production hinders VSMC transition back to the contractile phenotype, facilitating lesion development. Oxidative stress is a hallmark of atherosclerosis, fueled by the activation of NADPH oxidase and the oxidative modification of LDL [47,48]. This results in the generation of ROS, including superoxide radicals and hydrogen peroxide, contributing to endothelial dysfunction, lipid peroxidation, and amplification of the inflammatory response. The fibrous cap's characteristics determine plaque stability [49][50][51]. Plaque vulnerability arises when the delicate equilibrium between inflammation, cell proliferation, and matrix deposition is disrupted. As the plaque matures, it undergoes structural alterations. The fibrous cap may become thin and vulnerable, increasing the risk of rupture. Continued infiltration of lipids, immune cells, and sustained inflammatory signaling contribute to the formation of advanced atherosclerotic lesions [52,53]. Atheroma plaques form in areas of low Wall Shear
Stress, causing endothelial dysfunction and eccentric plaque growth. Outward vessel remodeling initially occurs but perpetuates low-WSS conditions, making plaques rupture-prone [54]. Vulnerable plaques have a large necrotic core, a thin fibrous cap, and increased inflammation due to continuous exposure to a pro-atherogenic environment. The fibrous cap's integrity, influenced by VSMC death and inflammation, determines plaque vulnerability [55]. The scavenger-receptor-mediated uptake of lipoproteins by macrophages leads to the formation of foam cells, characterized by the intracellular accumulation of lipids. Cholesterol crystals may form within these foam cells, triggering the release of IL-1β. IL-1β, in turn, stimulates smooth muscle cells to produce IL-6. Both IL-1β and IL-6 exert proinflammatory effects, contributing to the inflammatory milieu within the atherosclerotic lesion. Additionally, circulating IL-6 may signal to the liver, prompting the production of C-reactive protein (CRP), which serves as a marker of inflammation. In summary, the cascade of events initiated by LDL retention involves endothelial activation, monocyte recruitment and differentiation, foam cell formation, and the release of proinflammatory cytokines. This inflammatory environment plays a crucial role in the progression of atherosclerosis, a condition characterized by the buildup of plaque within arterial walls.
The rupture of a vulnerable plaque exposes its thrombogenic core, triggering platelet activation and aggregation. Thrombus formation within coronary or cerebral vessels can result in acute clinical events such as myocardial infarction or stroke, marking critical points in the progression of atherosclerosis [56,57]. Efforts to resolve the plaque involve macrophage phagocytosis of cellular debris. However, chronic inflammation may lead to unresolved plaque components and calcium deposition, contributing to plaque stability but compromising its flexibility [58,59]. It is also worth noting that atherosclerosis susceptibility is influenced by genetic factors, with polymorphisms in genes related to lipid metabolism (e.g., APOE), inflammation (e.g., IL-6), and vascular function (e.g., eNOS) playing significant roles. Epigenetic modifications, including DNA methylation and histone acetylation, further regulate the expression of genes associated with atherosclerosis [60][61][62].
Biomarkers in Atherosclerosis
Myoglobin (Figure 3) is a protein found in muscle tissues that plays a crucial role in oxygen storage and transport within muscle cells. Myoglobin has been implicated in oxidative stress and inflammation. Oxidative stress occurs when there is an imbalance between the production of reactive oxygen species (ROS) and the body's capability to neutralize them. Reactive oxygen species can contribute to the inflammation and damage of arterial walls, a key feature of atherosclerosis [63]. Additionally, myoglobin may be involved in the recruitment and activation of immune cells, such as macrophages, which play a role in the inflammatory response associated with atherosclerosis. The exact mechanisms by which myoglobin influences atherosclerosis are not fully understood, and research in this area is ongoing [64][65][66].
Troponin T (Figure 4) is a protein primarily known for its role in muscle contraction, particularly in cardiac muscle. While troponin T is traditionally associated with cardiac muscle and is a well-established biomarker for diagnosing myocardial infarction, its direct involvement in atherosclerosis, a process of plaque buildup in arteries, is not as extensively studied [67]. However, there is some emerging research suggesting potential links between troponin T and cardiovascular diseases, including atherosclerosis. Troponin T may be implicated in the inflammatory and pathological processes that contribute to plaque formation in atherosclerosis. Inflammation plays a significant role in the development and progression of atherosclerosis, and there is ongoing research to understand the specific roles of various proteins, including troponin T, in these inflammatory pathways [68][69][70]. Troponin I (Figure 4) may be implicated in the inflammatory pathways associated with atherosclerosis, possibly contributing to the vascular damage that occurs during the development of plaques [71].
Creatine kinase (CK), also known as creatine phosphokinase (CPK), is an enzyme found in various tissues, including skeletal muscles, heart muscles, and the brain. Creatine kinase catalyzes the reversible conversion of creatine and adenosine triphosphate (ATP) into phosphocreatine and adenosine diphosphate (ADP), playing a crucial role in energy metabolism. There is a particular interest in the muscle fraction of creatine kinase, known as CK-MB (creatine kinase-MB) [72]. Creatine kinase MB is often associated with cardiac muscle damage and is a well-established marker for diagnosing myocardial infarction. The elevated levels of CK-MB in the blood are indicative of damage to heart muscle cells. While CK-MB is primarily associated with cardiac muscle, atherosclerosis is a condition that involves the buildup of plaques in the arteries, and its direct involvement in atherosclerosis is not as prominent. The release of CK-MB into the bloodstream can occur when there is damage to the heart muscle, which may happen in the context of ischemia or infarction associated with advanced atherosclerosis [73][74][75][76].
Myeloperoxidase (MPO) is an enzyme released by white blood cells, particularly neutrophils, as part of the immune response. It plays a role in the body's defense against infections by generating ROS and promoting the formation of hypochlorous acid [77]. While MPO is a crucial component of the immune system, it has also been associated with inflammatory processes in various diseases, including atherosclerosis. In the context of atherosclerosis, MPO is thought to contribute to the oxidative modification of low-density lipoprotein (LDL) cholesterol, a key step in the development of atherosclerotic plaques [78]. The oxidative modification of LDL makes it more likely to be taken up by immune cells, leading to the formation of foam cells and the initiation of the inflammatory response within arterial walls. The elevated levels of MPO have been detected in atherosclerotic lesions, and studies suggest that it may serve as a marker for increased cardiovascular risk. MPO is considered a potential biomarker for assessing inflammation and oxidative stress associated with atherosclerosis, and its measurement in blood samples is being explored in clinical research [79,80]. Pregnancy-associated plasma protein A (PAPP-A) is an enzyme that plays a role in the regulation of insulin-like growth factor (IGF) activity. While its primary association is with pregnancy, where it is involved in the growth and development of the placenta, PAPP-A has also been studied in the context of atherosclerosis [81]. In atherosclerosis, increased levels of PAPP-A have been observed, and research suggests that it may be involved in the destabilization of atherosclerotic plaques. Atherosclerosis involves the buildup of plaques in the arterial walls, and the stability of these plaques is a critical factor in determining the risk of complications such as heart attacks or strokes [82]. PAPP-A may contribute to plaque instability by promoting the degradation of the fibrous cap that covers atherosclerotic lesions. A weakened fibrous cap is more prone to rupture, leading to the release of plaque contents into the bloodstream, which can trigger blood clot formation and result in cardiovascular events. As a result, PAPP-A is considered a potential biomarker for assessing the vulnerability of atherosclerotic plaques and predicting the risk of cardiovascular events. The elevated levels of PAPP-A in blood samples may be indicative of increased plaque instability and higher cardiovascular risk [83][84][85].
Albumin (Figure 5) is a protein commonly found in blood plasma and serves various physiological functions, including maintaining osmotic pressure and transporting substances such as hormones, fatty acids, and drugs [86]. While albumin itself is not directly implicated in the development of atherosclerosis, its levels and the presence of certain modifications to albumin can be relevant in assessing cardiovascular health. Researchers have explored associations between altered albumin levels and atherosclerotic risk factors [87]. For example, low serum albumin levels have been observed in individuals with cardiovascular disease, and hypoalbuminemia is considered a potential marker of systemic inflammation and malnutrition, which are factors that may contribute to the progression of atherosclerosis. Additionally, the oxidative modifications of albumin, such as carbonylation, have been implicated in the inflammatory processes associated with atherosclerosis. Oxidative stress plays a role in the initiation and progression of atherosclerotic plaques, and modified proteins, including albumin, may contribute to the overall inflammatory milieu [88][89][90]. Lipoprotein-associated phospholipase A2 (Lp-PLA2) is an enzyme that is associated with lipoproteins, particularly low-density lipoprotein (LDL) cholesterol. It plays a role in the inflammatory processes within arterial walls and has been studied in the context of atherosclerosis [91]. In atherosclerosis, Lp-PLA2 is thought to be involved in the modification of LDL particles, making them more prone to inflammation and contributing to the development of atherosclerotic plaques. The enzyme catalyzes the hydrolysis of oxidized phospholipids in LDL, releasing proinflammatory substances [92]. The elevated levels of Lp-PLA2 have been associated with increased cardiovascular risks. It is considered a potential biomarker for assessing the vulnerability of atherosclerotic plaques, as well as a marker of systemic inflammation. The measurements of Lp-PLA2 in blood samples are being explored in clinical research as a tool for predicting the risk of cardiovascular events, such as heart attacks or strokes [93,94].
Interleukin-18 (IL-18) (Figure 6) is a proinflammatory cytokine involved in the inflammatory response. In the context of atherosclerosis, IL-18 has been implicated in the inflammatory processes that contribute to the development and progression of atherosclerotic plaques [95]. IL-18 is produced by various cells, including macrophages and endothelial cells, within the arterial walls. It can stimulate the release of other inflammatory molecules and contribute to the recruitment of immune cells, such as T lymphocytes, into atherosclerotic lesions [96]. This inflammatory response promotes the formation of atherosclerotic plaques, characterized by the buildup of cholesterol, immune cells, and other substances within the arterial walls. The elevated levels of IL-18 have been observed in individuals with atherosclerosis, and it is considered a potential biomarker for assessing the inflammatory status associated with cardiovascular disease [97]. Research suggests that IL-18 may play a role in plaque instability, making it a subject of interest in understanding the mechanisms underlying the complications of atherosclerosis,
such as plaque rupture and thrombosis [98]. The CD40 (Figure 7) receptor ligand, often referred to as CD40L or CD154, is a molecule expressed on the surface of activated immune cells, particularly T cells. Its interaction with the CD40 receptor on various cell types, including endothelial cells and immune cells, plays a critical role in immune responses and inflammation [99]. In the context of atherosclerosis, CD40L has been implicated in the inflammatory processes that contribute to the development and progression of atherosclerotic plaques. CD40L is involved in the activation of endothelial cells, which line the inner surface of blood vessels [100]. This activation leads to the increased expression of adhesion molecules and the recruitment of immune cells, such as monocytes and T cells, into the arterial walls. Within the atherosclerotic lesions, CD40L further stimulates the release of pro-inflammatory molecules, contributing to the formation of atherosclerotic plaques [101]. The elevated levels of CD40L have been observed in individuals with atherosclerosis, and research suggests that it may serve as a biomarker for assessing the inflammatory status associated with cardiovascular disease. Additionally, the CD40L/CD40 interaction is considered a potential target for therapeutic interventions aiming to modulate the inflammatory response in atherosclerosis [102][103][104][105].
Atherosclerosis Treatment and Photodynamic Therapy
Atherosclerosis treatment involves lifestyle changes, medications to control risk factors, and in some cases, interventional procedures or surgery. Lifestyle modifications include a heart-healthy diet, exercise, and smoking cessation. Medications such as statins, antiplatelet agents, and antihypertensives are commonly prescribed [106]. Interventional procedures like angioplasty and stenting can open narrowed arteries, while surgeries like bypass surgery or endarterectomy may be considered in severe cases. Emerging therapies include immunotherapy, gene therapy, and innovative approaches like photodynamic therapy. Treatment is tailored based on individual factors, emphasizing the importance of regular monitoring and adherence to minimize complications [107]. Cardiovascular diseases, particularly atherosclerosis, stand as formidable challenges to global health. In the ongoing quest for effective interventions, photodynamic therapy (PDT) has emerged as a beacon of hope, leveraging light-sensitive compounds to selectively target and stabilize atherosclerotic plaques [108]. The journey commences with the strategic administration of a photosensitizer, a molecular envoy designed with precision to selectively accumulate within atherosclerotic plaques. The photosensitizer's choice hinges on its ability to distinguish between diseased and healthy tissues, marking the first step in PDT's tailored approach. The photosensitizer's trajectory is guided by the distinct characteristics of atherosclerotic plaques, such as increased vascular permeability [109]. Exploiting these unique features, the photosensitizer hones in on the target site, ensuring a concentrated payload within the very heart of the pathology. A crucial inflection point arises when the accumulated photosensitizer encounters light of a specific wavelength (Figure 8).
The activation of the photosensitizer by light propels it into an excited state, setting the stage for the cascade of events that define PDT's therapeutic prowess [110]. In this excited state, the photosensitizer transforms into a molecular alchemist, engaging in a dance with molecular oxygen (Figure 9). This intricate interplay results in the generation of reactive oxygen species (ROS), notably singlet oxygen [111]. These ROS are the molecular architects of PDT, orchestrating the targeted cellular destruction that defines its mechanism. The unleashed ROS embark on a mission of selective cellular devastation within the atherosclerotic plaque. Endothelial cells and smooth muscle cells, which are integral components of the plaque architecture, bear the brunt of this orchestrated assault [112]. The high reactivity of ROS induces oxidative stress, precipitating a cascade of events that culminate in cellular ablation. As ROS emerge, they usher in a state of oxidative stress within the targeted cells and tissues [113]. This oxidative stress, characterized by an imbalance between ROS production and cellular defense mechanisms, becomes the driving force behind PDT's therapeutic effects. One of the key movements in this process is lipid peroxidation. Singlet oxygen initiates a chain reaction leading to the oxidative degradation of lipids in cell membranes [114]. The resulting cacophony of reactive aldehyde species, notably malondialdehyde, contributes to cellular damage and sets the tone for further cellular responses. As the process progresses, ROS extend their influence on the intricate world of proteins. Oxidative stress can modify proteins, altering their structure and function. The formation of carbonyl groups in proteins becomes equal to functional impairment and cellular dysfunction [115]. The process reaches a crescendo with the profound impact of ROS on DNA. Single-strand and double-strand breaks in the DNA sequence become a poignant movement, activating cellular repair mechanisms. However, excessive damage may lead to overwhelming responses, tipping the balance toward programmed cell death [116]. Mitochondrial proximity to ROS generation sites renders them susceptible to oxidative stress. Reactive oxygen species-induced damage to mitochondrial components becomes a pivotal movement, impacting energy production and triggering apoptotic pathways [117]. This cellular stage responds to ROS-induced oxidative stress in multifaceted ways. Low levels of oxidative stress may activate survival pathways and cellular repair mechanisms, embodying resilience. However, the culmination of oxidative stress can lead to irreversible damage, propelling the cellular changes toward programmed cell death [118]. Beyond its direct cellular impact, PDT introduces an intriguing dimension with its immunomodulatory effects. The ROS-induced cellular damage triggers an inflammatory
response, a controlled conflagration within the treated area. This interplay between inflammation and healing mechanisms introduces complexity to the therapeutic landscape, offering potential avenues for immune-mediated plaque resolution [119]. As the ablated cells undergo programmed cell death (Figure 10), macrophages and other phagocytic cells enter, engaging in the clearance of cellular debris. This orchestrated removal process contributes to the resolution of the atherosclerotic plaque [120]. Simultaneously, the potential for tissue healing and restructuring unfolds as the body's regenerative mechanisms come into play. PDT may exert anti-proliferative effects on vascular smooth muscle cells, inhibiting their excessive proliferation and migration. This property can contribute to stabilizing the plaque and preventing restenosis [121,122].
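The light-activation step described above is, in practical terms, a dosimetry question: the photosensitizer is excited by photons of a defined wavelength, and the delivered light dose is the product of irradiance and exposure time. The short Python sketch below illustrates only that arithmetic; the 635 nm wavelength mirrors the red light mentioned for ALA-based PDT later in this review, while the irradiance and exposure time are hypothetical example values rather than a recommended protocol.

```python
# Illustrative only: photon energy and delivered light dose for a PDT protocol.
# Wavelength matches the 635 nm red light cited later in the text; irradiance
# and exposure time are hypothetical example numbers, not clinical guidance.

PLANCK = 6.626e-34       # J*s
LIGHT_SPEED = 2.998e8    # m/s
EV_PER_JOULE = 6.242e18  # eV per joule

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon at the given wavelength, in electronvolts."""
    energy_j = PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9)
    return energy_j * EV_PER_JOULE

def light_dose_j_per_cm2(irradiance_mw_cm2: float, exposure_s: float) -> float:
    """Fluence (J/cm^2) = irradiance (W/cm^2) x exposure time (s)."""
    return irradiance_mw_cm2 / 1000.0 * exposure_s

if __name__ == "__main__":
    print(f"Photon energy at 635 nm: {photon_energy_ev(635):.2f} eV")   # ~1.95 eV
    # e.g. 100 mW/cm^2 delivered for 500 s corresponds to 50 J/cm^2
    print(f"Light dose: {light_dose_j_per_cm2(100, 500):.0f} J/cm^2")
```

The same two quantities (wavelength-dependent photon energy and total fluence) are what determine how deeply the light penetrates tissue and how much singlet oxygen can be generated at the plaque.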
The mechanism of PDT in atherosclerosis unfolds as a symphony of precision and selectivity. From the strategic deployment of photosensitizers to the choreography of ROS-induced cellular ablation and controlled inflammation, PDT navigates the complexities of atherosclerosis with finesse. As research progresses, the unraveling of these intricate processes promises to illuminate the path forward in the quest for effective and targeted atherosclerosis treatment [123]. The key advantage of PDT lies in its ability to selectively target atherosclerotic lesions while sparing healthy tissues. This makes it a promising strategy for plaque-specific intervention. While preclinical studies have demonstrated encouraging results, ongoing research and clinical trials are necessary to further refine the technique, optimize treatment protocols, and evaluate long-term safety and efficacy [124]. The potential of PDT in combination with other therapeutic modalities adds to its versatility, offering a novel avenue for addressing atherosclerosis and its associated cardiovascular risks. PDT not only stands as a treatment modality but also as a beacon of hope, guiding us toward a future where cardiovascular health is illuminated by the light of innovative therapeutic strategies. While the mechanism of PDT in atherosclerosis paints an optimistic picture, challenges persist on the road to clinical translation [125]. Optimizing the properties of photosensitizers for enhanced plaque specificity, refining light delivery systems, and navigating the delicate balance between controlled inflammation and collateral damage are ongoing priorities for researchers.
Statin therapy, a cornerstone in cardiovascular care, focuses on reducing blood cholesterol levels, particularly low-density lipoprotein cholesterol (LDL-C).
By inhibiting HMG-CoA reductase, the enzyme responsible for cholesterol synthesis, statins not only lower LDL-C but also exert anti-inflammatory and antioxidant effects [126,127]. These properties make statins valuable in stabilizing atherosclerotic plaques and preventing cardiovascular events. Complementing statin therapy, photodynamic therapy introduces a targeted and selective dimension to atherosclerosis treatment [128]. The synergy between statin therapy and PDT is evident in their complementary mechanisms of action. Statins address the systemic factors contributing to atherosclerosis, reducing cholesterol levels and stabilizing plaques [129]. Meanwhile, PDT provides a localized intervention, directly targeting the atherosclerotic lesion and promoting plaque regression [130]. Moreover, the anti-inflammatory effects of statins may synergize with PDT, potentially enhancing the therapeutic response. The dual approach addresses not only the lipid-related aspects of atherosclerosis but also the cellular and inflammatory components, offering a more comprehensive solution to the multifaceted nature of the disease [131]. Integrating PDT with statin therapy presents a holistic and nuanced approach to atherosclerosis management. By combining the systemic benefits of statins with the targeted precision of PDT, this comprehensive strategy aims to address the diverse facets of atherosclerosis, paving the way for enhanced patient outcomes and improved cardiovascular health. As research advances, this integrative approach may herald a new era in the treatment of cardiovascular diseases, offering a beacon of hope for those grappling with the complexities of atherosclerosis. Despite the promise of combining PDT and statin therapy, challenges persist. Optimizing the choice of photosensitizers for enhanced plaque specificity, refining light delivery systems, and conducting thorough safety assessments are ongoing priorities. Additionally, understanding the long-term effects of combined therapy and its impact on cardiovascular outcomes requires further investigation.
Nanoparticles
Nanoparticles are minuscule particles with dimensions typically ranging from 1 to 100 nanometers and have emerged as promising tools in the field of medicine, offering unique opportunities for targeted drug delivery and diagnostic applications [132]. In the context of atherosclerosis, nanoparticles have garnered attention for their potential to address key challenges in treatment. Atherosclerosis poses a significant global health burden, contributing to conditions such as heart attacks and strokes [133]. Traditional therapeutic approaches often face limitations, including systemic side effects and insufficient targeting of affected areas. Nanoparticles, due to their size and tunable properties, present an innovative avenue for enhancing the precision and efficacy of treatments for atherosclerosis [134]. The role of nanoparticles in combating atherosclerosis includes examining their ability to navigate through the circulatory system, selectively targeting atherosclerotic lesions, and delivering therapeutic payloads with improved efficiency [135]. By harnessing the unique properties of nanoparticles, researchers aim to revolutionize the treatment landscape of atherosclerosis, offering new hope for more effective and targeted interventions in the battle against cardiovascular disease [136]. Table 1 shows common nanoparticles in atherosclerosis.
Table 1. Common nanoparticles in atherosclerosis.
High-Density Lipoprotein (HDL) Mimicking Nanoparticles: designed to mimic the structure and function of HDL, often loaded with anti-inflammatory or antioxidant agents. They can target atherosclerotic plaques to deliver therapeutic payloads [137].
Superparamagnetic Iron Oxide Nanoparticles (SPIONs): used for imaging purposes in magnetic resonance imaging (MRI). SPIONs can be functionalized with targeting ligands for specific binding to atherosclerotic plaques, enabling non-invasive imaging [138].
Gold Nanorods: utilized for both imaging and therapy. Gold nanorods can absorb near-infrared light, enabling photothermal therapy to target and treat atherosclerotic plaques [139].
PLGA (Poly(lactic-co-glycolic acid)) Nanoparticles: biodegradable polymeric nanoparticles that can encapsulate drugs for sustained release. PLGA nanoparticles have been investigated for the targeted delivery of anti-inflammatory drugs to atherosclerotic lesions [140].
Perfluorocarbon Nanobubbles: used as contrast agents for imaging; nanobubbles can be designed to target atherosclerotic plaques. They have been explored for ultrasound imaging to detect and monitor plaque progression [141].
Mesoporous Silica Nanoparticles: designed to carry therapeutic agents and deliver them to specific locations. Mesoporous silica nanoparticles can be functionalized for targeted drug delivery to atherosclerotic lesions [142].
Nanoparticle-Mediated Gene Therapy: nanoparticles can be used to deliver therapeutic genes to cells within atherosclerotic plaques. This approach aims to modulate the expression of specific genes to mitigate inflammation or promote plaque stabilization [143].
Targeting Peptide-Modified Nanoparticles: nanoparticles functionalized with peptides that have an affinity for molecules overexpressed in atherosclerotic plaques. This enhances the nanoparticles' ability to target and accumulate at specific sites [144].
In Vitro
An investigation into optimal conditions for PDT using Upconversion Nanoparticles (UCNPs)-Ce6 yielded significant insights into treating atherosclerosis. The research identified a specific dosage of UCNPs-Ce6, combined with precise laser parameters, as effective in reducing lipid storage in THP-1 macrophage-derived foam cells. This reduction was linked to the promotion of cholesterol efflux, with the study revealing autophagy induction as a mechanism mediated by ABCA1 and involving the suppression of the PI3K/Akt/mTOR pathway [145]. The application extended to peritoneal macrophage-derived foam cells, emphasizing the potential of UCNPs-Ce6-mediated PDT in atherosclerosis treatment.
In a novel approach, bTiO2-HA-p nanoprobes demonstrated synergistic effects on lipid metabolism within atherosclerotic foam cells. These nanoprobes exhibited promising photothermal and photodynamic properties, along with excellent biocompatibility, offering enhanced anti-atherosclerosis therapeutic effects without inducing excessive cell apoptosis or necrosis [146].
Spyropoulos-Antonakakis et al. [147] introduced G0 PAMAM dendrimers and G0 PAMAM/ZnPc conjugates to carotid tissues, revealing significant modifications in nanoscale texture characteristics. The study emphasized the need for future investigations on various factors influencing dendrimer penetration across the endothelial barrier [147].
Macrophage-targeted PDT using chlorin e6 (ce6) and maleylated bovine serum albumin (BSA-mal) conjugates demonstrated selective targeting of macrophages. The study highlighted differences in macrophage activation and scavenger receptor class A (SRA) expression between murine macrophage tumor cell lines, indicating the influence of macrophage activation status on PDT efficacy [148].
A comprehensive examination of curcumin-mediated photodynamic therapy (CUR-PDT) on Vascular Smooth Muscle Cells (VSMCs) revealed optimal treatment conditions and identified autophagy as a key mechanism underlying therapeutic effects, suggesting CUR-PDT's potential for treating atherosclerosis [149].
UCNPs-Ce6-mediated PDT targeting mitochondria showed promising outcomes in reducing macrophage infiltration in atherosclerotic plaques, indicating its potential as a treatment for atherosclerosis [150].
The use of a novel photosensitizer, ATMPyP, in combination with PDT demonstrated selective accumulation in atherosclerotic plaques, enhancing therapeutic effects [151].
The intravascular delivery of photosensitizers using oxidatively modified low-density lipoprotein (OxLDL) as a carrier showcased improved selectivity for atherosclerotic lesions, emphasizing the potential of targeted delivery systems in optimizing PDT outcomes [152].
A study by Spyropoulos-Antonakakis et al. focused on a dextran-bovine serum albumin conjugate (DB) used to create a nanoemulsion loaded with UCNPs and Ce6 (UCNPs-Ce6@DB). This approach facilitated recognition and binding to scavenger receptor class A (SR-A) on macrophages, promoting uptake by macrophage-derived foam cells in atherosclerotic lesions. UCNPs-Ce6@DB-mediated PDT enhanced ROS generation, induced autophagy, upregulated ABCA1 expression for cholesterol efflux, and suppressed proinflammatory cytokine secretion, showing promise in inhibiting plaque formation in AS mouse models [153].
The impact of pyropheophorbide-α methyl ester (MPPa)-mediated PDT on apoptosis and inflammation in murine macrophage RAW264.7 cells demonstrated the potential to induce apoptosis and alleviate inflammation, presenting a promising therapeutic approach for atherosclerosis [154].
The capability of Photodynamic Diagnosis (PDD) and PDT using chlorin e6 for detecting atherosclerotic plaque was explored in a study with human aorta and coronary artery specimens. Chlorin e6 accumulation in atheromatous plaque suggested specificity for discriminating between atheromatous and normal/calcified segments, providing a potential tool for real-time imaging and targeted therapy in various forms and stages of atherosclerosis [155].
In Vivo in Mouse Models
In a comprehensive exploration of advanced therapeutic approaches for atherosclerosis, researchers have embarked on multiple studies with a focus on precision and targeted intervention. One notable investigation centered on stable iron-based nanoparticles (FeCNPs) modified with Ce6 and 3,4-DA for targeted atherosclerosis treatment [156]. These FeCNPs demonstrated effective plaque accumulation, showcasing significant reductions at both low and high doses, with superior outcomes observed at lower doses. Biodistribution tracking confirmed sustained FeCNP retention in plaques, accompanied by positive changes in plaque composition, including improvements in lipid-rich areas and reduced necrotic cores. Moreover, the FeCNPs exhibited a targeted approach towards MCP1, resulting in the reduction in macrophages and key markers associated with atherosclerosis progression. The study's nuanced findings suggested potential avenues for improved targeting through molecule modifications, emphasizing the promising therapeutic potential of FeCNPs in atherosclerosis treatment. Additionally, the investigation explored the efficacy of chemiexcited Photodynamic Therapy (PDT) using Fe3+-catechol cross-linked CPPO-loaded polymeric nanoparticles, presenting promising prospects for treating atherosclerosis and underscoring the importance of refining plaque-targeting ability for optimal therapeutic effectiveness [156].
In a distinct approach, researchers developed theranostic nanoparticles targeting atherosclerotic plaques using osteopontin (OPN). By encapsulating a photosensitizer (IR780) and a chemo-drug (TPZ), these OPN-targeted nanoparticles selectively accumulated in foamy macrophages within vulnerable plaques. Photodynamic therapy (PDT) using these nanoparticles demonstrated reduced plaque area, increased stability, and improved blood perfusion, indicating a promising approach for precise imaging and effective treatment of atherosclerosis [157].
Han X. et al. [158] delved into the impact of upconversion nanoparticles encapsulating chlorin e6 (UCNPs-Ce6)-mediated PDT on M1 peritoneal macrophages. The research demonstrated that ROS generated by UCNPs-Ce6-mediated PDT induced autophagy, inhibiting the expression of pro-inflammatory factors in these macrophages. The study established the activation of autophagy through the PI3K/AKT/mTOR signaling pathway, leading to a reduction in inflammatory cytokines. The findings suggest a potential therapeutic avenue for mitigating inflammation associated with atherosclerosis [158].
In a nuanced exploration of photosensitizer distribution in atherosclerotic plaques, a study revealed variations based on lesion characteristics. Plaques in ApoE−/− mice exhibited high lipid content, necrotic cores, and endothelial lining. PDT resulted in reduced vasoconstriction and relaxation, indicating an impact on arterial function [159].
A study involving the photosensitizer mTHPC loaded into Ben-PCL-mPEG micelles addressed the challenge of premature release from micelles, focusing on improving stability for enhanced macrophage selectivity. The successful achievement of this goal, coupled with the effective targeting of atherosclerotic plaques, suggested the potential of mTHPC-loaded micelles for selective macrophage photocytotoxicity [160].
In Vivo on Rabbits
Waksman R. et al. [161] investigated the impact of PDT on regional atherosclerosis using cholesterol-fed rabbits. Their study compared PhotoPoint PDT treatment, photosensitizer alone, and light alone, revealing a substantial reduction in plaque progression by 35% at 7 days and 53% at 28 days. Importantly, PDT demonstrated the ability to decrease macrophages by 98% at 7 days and sustained a reduction by 92% at 28 days, without causing adverse vascular effects. This research highlighted PDT's potential for treating acute coronary syndromes by reducing inflammation and promoting plaque stabilization [161].
Another study focused on the accumulation of the photosensitizer hematoporphyrin derivative (HPD) in atherosclerosis. The results demonstrated uniform HPD accumulation in injured media, suggesting PDT as a promising avenue for inhibiting smooth muscle cell growth in atherosclerotic lesions and potentially preventing restenosis following angioplasty [162].
Further exploration into various photosensitizers such as CASPc [163], NPe6 [164], and Lu-Tex [165] showcased their selective accumulation in atheromatous plaques. These findings underscored the versatility of PDT in utilizing different photosensitizers for the targeted treatment of atherosclerosis.
An investigation into texaphyrins, specifically Lutetium texaphyrin (PCI-0123), demonstrated selective localization in both cancer and atheromatous plaques. The ability of texaphyrins to selectively target diseased tissues suggested their potential for inducing tissue-specific damage, opening avenues for further exploration in the treatment of atherosclerosis [166].
In a recent study, an innovative approach was demonstrated for enhanced targeting in atherosclerotic plaques using a scavenger-receptor-directed photosensitizer (PS) conjugate named MA-ce6 [167]. This approach showed increased accumulation of MA-ce6 in macrophage-rich plaques, indicating potential benefits for targeted PDT. Future research aims to correlate PS conjugate accumulation with macrophage localization and investigate the therapeutic benefits of PDT through the intravascular delivery of red light in living rabbits [168].
Another noteworthy study focused on enhancing PDT efficacy for cardiovascular diseases by delivering photosensitizers specifically to atherosclerotic lesions. Utilizing oxidatively modified low-density lipoprotein (OxLDL) as a targeted carrier for the photosensitizer aluminum phthalocyanine chloride (AlPc), the study demonstrated light-dependent cytotoxicity in macrophages incubated with OxLDL-AlPc. This targeted delivery system holds promise for improving the therapeutic benefits of PDT in cardiovascular diseases [168].
Exploring PDT for treating inflamed atherosclerotic plaque in a rabbit model, a study utilized 5-aminolevulinic acid (ALA) as a photosensitizer. Positive correlations between ALA-derived protoporphyrin IX (PpIX) fluorescence and macrophage content in the plaque were observed. Photodynamic therapy resulted in a significant reduction in the plaque area, macrophage content, and the depletion of smooth muscle cells, indicating the potential for inhibiting plaque progression [169].
The potential of PDT for preventing in-stent restenosis after angioplasty was investigated in normal rabbits. Photodynamic therapy before stent deployment led to almost complete medial cell ablation at 3 days and significantly inhibited in-stent restenosis at 28 days, suggesting PDT as a promising adjuvant therapy to percutaneous intervention in patients with vascular disease [170].
In the realm of pharmacokinetics, a study compared the pre-photosensitizers ALA and ALA-ethyl ester in a rabbit model with post-balloon injured arteries. ALA demonstrated a higher selective build-up in atheromatous plaque compared to ALA-ethyl, suggesting potential effectiveness for endovascular PDT of atheromatous plaque. However, further optimization is needed to prevent potential restenosis [171].
The importance of the treatment field in PDT for preventing intimal hyperplasia (IH) in the rat carotid artery was emphasized in a study. While PDT initially prevented IH by depleting medial smooth muscle cells, the study concluded that including the entire injured artery or a section of an uninjured margin in the treatment field is crucial for the effective prevention of IH with PDT [172].
Finally, Motexafin lutetium (Lu-Tex), a PDT agent that localizes in atheromatous plaque, demonstrated significant potential in treating accelerated atherosclerosis associated with transplantation in a rodent allograft model of graft coronary artery disease (GCAD). Lu-Tex-mediated PDT significantly reduced affected vessels and intimal proliferation compared to control groups, suggesting its efficacy in managing transplantation-related atherosclerosis [173].
In Vivo on Pigs
In a study conducted on a large animal model, the objective was to develop a percutaneous method for applying PDT to iliac and coronary arteries and assess its impact on arterial remodeling and the intimal hyperplastic response to balloon injury. The results indicated that PDT treatment exerted a favorable influence on the arterial response to balloon injury in both coronary and peripheral circulations. This outcome presents a promising approach to address restenosis following endovascular procedures, providing potential insights for enhancing the effectiveness of percutaneous interventions [174].
Furthermore, in an exploration of intravascular light delivery for arterial PDT, a study employed a balloon catheter in a pig non-injury model. The study demonstrated that PDT delivered via a standard percutaneous transluminal angioplasty (PTA) balloon effectively depleted the vascular smooth muscle cell population within the arterial wall without complications. This suggests the potential of intravascular PDT to reduce the incidence of restenosis post-angioplasty, showcasing its viability as a minimally invasive intervention for addressing arterial complications [175].
In Vivo on Humans
Photodynamic therapy has been explored as an adjuvant to femoral percutaneous transluminal angioplasty (PTA) in a clinical study involving seven patients. These individuals, who had previously undergone conventional angioplasty at the same site resulting in restenosis or occlusion, received PDT sensitized with oral 5-aminolaevulinic acid. Red light (635 nm) was delivered to the angioplasty point via a ray fiber within the angioplasty balloon. The study demonstrated that all patients tolerated the procedure well without adverse complications, maintained sustained asymptomatic status, and had patent vessels without restenosis during the study interval. These findings suggest that endovascular PDT might be a safe and potentially effective method to reduce restenosis following angioplasty. Further investigation through a randomized controlled trial is warranted to validate these promising results [176].
Motexafin lutetium, an investigational photosensitizer, underwent evaluation in a clinical trial involving 47 patients with arterial stenosis. The trial suggested the safety and potential benefits of motexafin lutetium therapy in the context of atherosclerosis, paving the way for further clinical exploration [177].
In another research endeavor, the applicability of Benzoporphyrin derivative (BPD) as a potent photosensitizer for treating atherosclerosis through PDT was investigated. The study delved into the uptake of BPD in atherosclerotic plaque, examining both in vitro and in vivo scenarios. Atherosclerotic human arteries and induced atherosclerotic miniswine were exposed to different concentrations of BPD, revealing substantial uptake in atherosclerotic vessels. These results suggest that BPD-MA could be a promising candidate for photodynamic therapy in the treatment of atherosclerosis [178].
A clinical study aimed to assess the safety and efficacy of local delivery of a photosensitizer followed by PDT for reducing in-stent restenosis. The study utilized Porfimer sodium delivered via a local delivery catheter, followed by pulse laser irradiation. The 18-month clinical follow-up revealed no adverse events, indicating that PDT with local delivery of Porfimer sodium is safe and may be a feasible technique for preventing in-stent restenosis [179].
In conclusion, the collective body of research presented here highlights a diverse array of strategies and promising outcomes in the utilization of photodynamic therapy for the treatment of atherosclerosis. The emphasis on targeted photosensitizers, nanoparticle formulations, and optimal treatment conditions underscores the evolving landscape of therapeutic interventions for atherosclerosis. While the results are indeed encouraging, further research and clinical trials are imperative to validate and translate these findings into effective and safe clinical interventions for patients with atherosclerosis. The interdisciplinary nature of these investigations, integrating aspects of nanotechnology, chemistry, and clinical medicine, holds significant promise for the future development of innovative therapeutic approaches in the ongoing battle against atherosclerosis.
Comparative Analysis of Costs and Side Effects: Photodynamic Therapy vs.
Statin Therapy in Atherosclerosis
Atherosclerosis, a chronic and complex vascular disease, necessitates a nuanced approach to treatment. Photodynamic therapy and statin therapy represent two distinct modalities with different mechanisms of action. This comparative analysis aims to explore and contrast the costs and side effects associated with these therapeutic options in the management of atherosclerosis. In comparing the costs and side effects of PDT and statin therapy in atherosclerosis, several factors come into play. PDT offers a targeted, localized approach with potential economic implications and specific skin-related side effects. On the other hand, statin therapy, with its well-established safety profile, presents a cost-effective and systemic strategy for managing atherosclerosis. The decision between these modalities should be guided by a comprehensive assessment of individual patient needs, disease characteristics, and overall therapeutic goals. Further research and clinical experience will continue to refine our understanding of the comparative aspects of these treatment options. Table 2 shows the differences between PDT and statin therapy.
Costs
Photodynamic therapy involves the administration of photosensitizers and the use of specialized light sources for activation. The costs associated with PDT can be relatively high due to the need for specific equipment, skilled personnel, and the development or acquisition of photosensitizing agents. Additionally, repeated sessions may be required for optimal efficacy, contributing to the overall economic burden.
Statin therapy is a well-established and widely prescribed approach for managing atherosclerosis. The costs associated with statins are generally lower compared to PDT. Statins are available in generic forms, contributing to cost-effectiveness. However, the overall economic impact may vary depending on the specific statin prescribed, patient adherence, and the need for additional cardiovascular medications.
Side effects
While PDT is generally considered a localized and targeted therapy, side effects can occur.
Effectiveness
PDT may be more suitable for localized and specific interventions, aimed at the ablation of localized plaques.
Statins provide a broader systemic approach suitable for chronic management, primarily addressing systemic factors, including cholesterol levels and inflammation.
Long-term considerations
The long-term safety and efficacy of PDT, especially concerning repeated treatments over extended periods, require further investigation.
Statin therapy has an extensive track record of long-term safety and efficacy, supported by numerous clinical trials and real-world evidence.
Conclusions
The application of PDT in the context of atherosclerosis holds significant promise as a novel and potentially effective treatment approach. Atherosclerosis, a complex and chronic inflammatory condition, poses a substantial global health burden, necessitating innovative therapeutic strategies. PDT, leveraging the synergistic effects of light, photosensitizing agents, and molecular oxygen, offers a targeted and minimally invasive solution. The ability of PDT to selectively target atherosclerotic plaques, reduce inflammation, and induce localized cell death highlights its potential to mitigate the progression of atherosclerosis. The technique's non-invasiveness and specificity contribute to its appeal, minimizing collateral damage to surrounding healthy tissues. Additionally, PDT's ability to modulate the inflammatory response and promote plaque stabilization underscores its potential in preventing plaque rupture, a critical event in atherosclerosis-associated complications like myocardial infarction and stroke. However, while the preclinical and early clinical studies are promising, further research is essential to establish the long-term safety, efficacy, and optimal treatment parameters of PDT for atherosclerosis. Overcoming challenges such as the penetration depth of light in vascular tissues and refining the delivery systems will be crucial for translating PDT into routine clinical practice. The application of photodynamic therapy in atherosclerosis represents an exciting frontier in cardiovascular medicine. Continued research and clinical trials are imperative to validate its efficacy, optimize protocols, and address practical challenges. If successful, PDT could emerge as a valuable addition to the therapeutic arsenal against atherosclerosis, offering a targeted and minimally invasive approach to combat this prevalent cardiovascular disease.
Figure 1. Arteriosclerosis process. The individual stages of atherosclerosis are fatty streak, intermediate lesion, fibrous plaque, rupture, and thrombosis.
Figure 2. The immunology side of atherosclerosis. LDL retention serves as the initial trigger for the development of atherosclerosis. The subendothelial accumulation of lipoproteins prompts the upregulation of adhesion molecules on the endothelial surface, facilitating the recruitment of monocytes to the forming lesion. Subsequently, monocytes transmigrate into the subendothelial space, where they differentiate into macrophages in response to specific signals. Notably, smooth muscle cells have the capacity to undergo transdifferentiation into macrophage-like cells within the developing lesion. The scavenger-receptor-mediated uptake of lipoproteins by macrophages leads to the formation of foam cells, characterized by the intracellular accumulation of lipids. Cholesterol crystals may form within these foam cells, triggering the release of IL-1β. IL-1β, in turn, stimulates smooth muscle cells to produce IL-6. Both IL-1β and IL-6 exert proinflammatory effects, contributing to the inflammatory milieu within the atherosclerotic lesion. Additionally, circulating IL-6 may signal to the liver, prompting the production of C-reactive protein (CRP), which serves as a marker of inflammation. In summary, the cascade of events initiated by LDL retention involves endothelial activation, monocyte recruitment and differentiation, foam cell formation, and the release of proinflammatory cytokines. This inflammatory environment plays a crucial role in the progression of atherosclerosis, a condition characterized by the buildup of plaque within arterial walls.
Figure 8. Photodynamic therapy in atherosclerosis. Each wavelength of light has a different depth of tissue penetration.
Figure 9. PDT mechanism. The entire process of destroying cancer cells during PDT may occur according to two reactions. Both mechanisms can occur simultaneously. The PSs become active (PS*) when exposed to a certain wavelength of light, transferring their energy to molecules of oxygen, which then change oxygen from its initial state into a lethal singlet state, producing singlet oxygen (1O2) that kills tumor cells. As a result, ROS are crucial to PDT's ability to eliminate malignancies. 1O2 is formed by the excitation of the triplet oxygen molecule. It belongs to one of the ROS. ROS are products of successive stages of reduction in the oxygen molecule. Oxygen reduction and excitation products are more reactive than triplet oxygen. 1O2 is the major cytotoxic agent involved in PDT. The mechanism of action of 1O2 is based on its high chemical reactivity. The excited 1O2 molecule aims to reduce the energy state, which is achieved in the process of oxidation of various substances. It reacts readily with lipids, proteins, and nucleic acids, changing their structure.
Table 2. Costs and side effects: photodynamic therapy vs. statin therapy.
Photodynamic therapy: common side effects include photosensitivity reactions, skin irritation, and temporary discoloration of the treated area. The specificity of PDT for atherosclerotic plaques minimizes systemic side effects, but the potential for skin reactions remains a consideration.
Statin therapy: statins are generally well-tolerated, with a favorable safety profile. Common side effects include muscle pain or weakness, gastrointestinal disturbances, and liver enzyme abnormalities. Serious side effects, such as rhabdomyolysis, are rare but can occur. Regular monitoring of liver function and muscle health is recommended during statin therapy to mitigate potential adverse effects.
2024-02-08T16:06:28.582Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "d8fad6ee3527f1e158bb142b1ae0b76a181c2a48", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/25/4/1958/pdf?version=1707209245", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fd8d5a6d3e6bb4d66748be81c90e22894045cde4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13947140
pes2o/s2orc
v3-fos-license
2-Arachidonoylglycerol ameliorates inflammatory stress-induced insulin resistance in cardiomyocytes
Several studies have linked impaired glucose uptake and insulin resistance (IR) to functional impairment of the heart. Recently, endocannabinoids have been implicated in cardiovascular disease. However, the mechanisms involving endocannabinoid signaling, glucose uptake, and IR in cardiomyocytes are understudied. Here we report that the endocannabinoid 2-arachidonoylglycerol (2-AG), via stimulation of the cannabinoid type 1 (CB1) receptor and Ca2+/calmodulin-dependent protein kinase kinase β, activates AMP-activated kinase (AMPK), leading to increased glucose uptake. Interestingly, we have observed that the mRNA expression of CB1 and CB2 receptors was decreased in diabetic mice, indicating reduced endocannabinoid signaling in the diabetic heart. We further establish that TNFα induces IR in cardiomyocytes. Treatment with 2-AG suppresses TNFα-induced proinflammatory markers and improves IR and glucose uptake. Conversely, pharmacological inhibition or knockdown of AMPK attenuates the anti-inflammatory effect and reversal of IR elicited by 2-AG. Additionally, in human embryonic stem cell-derived cardiomyocytes challenged with TNFα or FFA, we demonstrate that 2-AG improves insulin sensitivity and glucose uptake. In conclusion, 2-AG abates inflammatory responses, increases glucose uptake, and overcomes IR in an AMPK-dependent manner in cardiomyocytes.
Cardiac insulin resistance (IR), at its most fundamental level, inhibits the myocardial uptake of glucose in response to insulin. Insulin-resistant cardiomyocytes therefore preferentially metabolize FFA rather than glucose. IR contributes to the development of myocardial dysfunction, ultimately leading to diabetic cardiomyopathy (DCM), a major cause of morbidity and mortality in developed nations (1). Chronic lipid overload leads to increased cardiac lipid uptake and, via processes remaining to be fully elucidated, causes IR (2). Furthermore, local and systemic inflammatory processes participate in the development of cardiac IR (3). Previous studies demonstrated the development of IR by application of the proinflammatory cytokine TNFα (4,5). TNFα, via induction and increased nuclear translocation of its downstream effector nuclear factor κ light chain enhancer of activated B cells p105 subunit (NFKB1) and IL-6 and IL-8 (CXCL8), along with chemokines like monocyte chemoattractant protein 1 (CCL2), participate in the activation and maintenance of intracellular pathways that promote the development of IR. However, irrespective of whether cardiac IR is induced by lipid overload or inflammatory stress, in both cases, cardiomyocytes down-regulate glucose utilization in the long term, precipitating DCM. Vice versa, improvement of glucose metabolism and anti-inflammatory actions could improve IR and therefore prevent DCM. Recent developments identify the endocannabinoid system (ECS) as a physiological signaling network and therapeutic target for the treatment of various pathological conditions, including cardiovascular disease (6). The endocannabinoids 2-arachidonoylglycerol (2-AG) and anandamide (AEA) are endogenous lipid mediators that predominantly act as a ligand for the G protein-coupled cannabinoid type 1 (CB1R) and type 2 (CB2R) receptors, respectively.
Before its recent withdrawal from the market because of its association with increased incidence of psychiatric adverse events, the CB1R antagonist rimonabant was shown to improve body weight and metabolic and inflammatory abnormalities in several trials in obese subjects. Conversely, evidence suggests that endocannabinoids have important protective roles in pathophysiological conditions such as ischemic shock and myocardial infarction (7,8). However, the underlying molecular and cellular mechanisms of endocannabinoid action remain elusive. In this study, we have demonstrated that 2-AG, in a CaMKKβ-dependent manner, activates the AMPK signaling pathway. In an inflammatory stress-induced model of IR, we have demonstrated that 2-AG exerts anti-inflammatory effects, restores the dysfunctional insulin signaling pathway, and stimulates glucose uptake in cardiomyocytes. Additionally, we have created a novel model mimicking human cardiomyocyte insulin resistance by differentiating human embryonic stem cells into functional cardiomyocytes and exposing them to inflammatory stress or lipid overload. In this unique model, we have demonstrated that 2-AG exerts strong anti-inflammatory effects and restores insulin sensitivity in differentiated cardiomyocytes. Overall, our study provides a novel and detailed analysis of the molecular mechanism linking 2-AG with cardiomyocyte insulin resistance and indicates a potential beneficial role in the treatment of cardiometabolic diseases.
TNFα induces IR in HL-1 cardiomyocytes
Initially, we investigated the effect of TNFα on insulin signaling and insulin-induced glucose uptake in HL-1 cardiomyocytes. TNFα exposure significantly induced gene expression of the key proinflammatory markers Nfkb1, Il6, Cxcl8, and Ccl2, suggesting that TNFα induces an inflammatory response in cardiomyocytes (Fig. 1A). Next, untreated and TNFα-treated cardiomyocytes were stimulated with insulin for a short time. In TNFα-challenged cardiomyocytes, insulin-induced phosphorylation of protein kinase B (AKT), AKT substrate 160 (AS160), and glycogen synthase kinase 3β (GSK-3β) was dramatically reduced, suggesting a deregulation of the insulin signaling pathway by TNFα (Fig. 1B). Furthermore, TNFα exposure led to a significant decrease in basal Glut4 mRNA level (Fig. 1C) and insulin-stimulated plasmalemmal GLUT4 translocation (Fig. 1D). Subsequently, as a functional consequence of perturbed insulin signaling, insulin-stimulated glucose uptake was significantly abrogated upon TNFα challenge compared with untreated cardiomyocytes (Fig. 1E). Additionally, in TNFα-challenged cardiomyocytes, periodic acid-Schiff staining indicated lowered glycogen content, visible as pink coloration (Fig. 1F), and quantitative analysis by glucose oxidase assay confirmed a decrease in glycogen (~25%) compared with untreated cells (Fig. 1G). Overall, these results demonstrate that TNFα induces an inflammatory response and perturbs the insulin signaling pathway, coupled with decreased GLUT4 translocation, glucose uptake, and glycogen content, ultimately leading to IR in cardiomyocytes.
2-AG-induced activation of the AMPK signaling pathway is mediated via the upstream kinase CaMKKβ
Previously, it was reported that the cannabinoid derivative trans-Δ9-tetrahydrocannabinol activates the AMPK signaling pathway (9). Therefore, we initially investigated the time- and dose-dependent effect of the endocannabinoid 2-AG on AMPK activation in HL-1 cardiomyocytes (supplemental Fig. 1).
(Figure 1 legend, in part.) Western-blotting analysis of the insulin (Ins) signaling pathway following insulin stimulation. p, phospho; t, total. C, relative mRNA level of Glut4. *, p < 0.01 by unpaired Student's t test. D, insulin-stimulated GLUT4-myc translocation. *, p = 0.047 by two-way ANOVA with Bonferroni's multiple comparisons post hoc test. E, insulin-stimulated glucose uptake. *, p = 0.041 by two-way ANOVA with Bonferroni's multiple comparisons post hoc test. ACTB, β-actin. F, representative periodic acid-Schiff staining for intracellular glycogen content. Scale bar = 100 μm, ×40 magnification. G, glucose oxidase assay for quantitation of intracellular glycogen content. *, p < 0.01 by unpaired Student's t test. Data are expressed as mean ± S.D. n = 4-5 independent experiments in HL-1 cardiomyocytes.
Based on previous reports (10, 11), a working concentration of 10 μM 2-AG was chosen for the experiments. We observed that 2-AG treatment led to rapid phosphorylation of AMPK within 1 h (supplemental Fig. 1A). Upon further investigation, we observed that 2-AG in the range of 1-20 μM dose-dependently increased AMPK phosphorylation in HL-1 cardiomyocytes (supplemental Fig. 1B). Additionally, 2-AG treatment also led to rapid phosphorylation (within minutes) of both AMPK and ACC in HL-1 cardiomyocytes and isolated primary rat cardiomyocytes (Fig. 2A). Furthermore, cells were pretreated with specific antagonists of CB1R (AM251, 5 μM) or CB2R (AM630, 5 μM), followed by treatment with 2-AG (Fig. 2B). In the presence of AM251, 2-AG failed to increase the phosphorylation of AMPK and ACC, but no significant inhibition was observed with AM630 treatment (Fig. 2B). Next, we explored the cardiac expression pattern of cannabinoid receptors in a physiological model of IR, viz. leptin receptor-deficient db/db mice. Interestingly, both Cb1r and Cb2r gene expression is significantly lowered in diabetic, insulin-resistant db/db mice in comparison with their heterozygous, non-diabetic (db/+) counterpart (Fig. 2C). This suggests that, in the context of cardiac tissue, the ECS is dysfunctional in type 2 diabetes. As an additional control for the receptor specificity, the well known 2-AG-induced gene expression of Cb1r was significantly inhibited in the presence of AM251 but not by AM630. 2-AG had no significant effect on Cb2r gene expression (Fig. 2D). Next, to understand the downstream mechanism involved in 2-AG-induced AMPK activation, we pretreated LKB1-deficient HeLa cells with pharmacological inhibitors of known upstream AMPK kinases, KN93 (KN, Ca2+/calmodulin-dependent protein kinase II, 10 μM), STO-609 (STO, a CaMKKβ inhibitor, 20 μM), or (5Z)-7-oxozeaenol (OZ, TGFβ-activated kinase 1, 5 μM), followed by 2-AG treatment. 2-AG-induced phosphorylation of AMPK was completely abrogated in the presence of STO but not KN or OZ (Fig. 2E). Similar results were obtained in primary rat cardiomyocytes, where pretreatment with STO or an AMPK inhibitor (compound C (CC), 10 μM) suppressed 2-AG-induced phosphorylation of AMPK, whereas KN or OZ did not show any significant effect (Fig. 2F).
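The 1-20 μM concentration series described above is the kind of data that is typically summarized with a concentration-response fit. Purely as an illustration, and not as the authors' analysis, the sketch below fits a Hill-type curve with SciPy to estimate an EC50; the densitometry values are invented placeholders rather than the data shown in supplemental Fig. 1B.

```python
# Hedged sketch: fit a Hill-type concentration-response curve to hypothetical
# p-AMPK/t-AMPK densitometry ratios measured at several 2-AG concentrations.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc_um, bottom, top, ec50, n):
    """Four-parameter logistic (Hill) model for a concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc_um) ** n)

# Hypothetical fold-change data (not from the paper).
conc = np.array([1.0, 2.5, 5.0, 10.0, 20.0])      # 2-AG concentration, microM
response = np.array([1.1, 1.4, 2.0, 2.6, 2.8])    # p-AMPK/t-AMPK fold change

params, _ = curve_fit(hill, conc, response, p0=[1.0, 3.0, 5.0, 1.0], maxfev=10000)
bottom, top, ec50, n = params
print(f"Estimated EC50 ~ {ec50:.1f} microM (Hill slope {n:.2f})")
```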
D, relative mRNA levels of Cb1r and Cb2r in HL-1 cardiomyocytes upon 2-AG treatment for 1 h, preceded by treatment with AM251 or AM630 for 1 h in HL-1 cardiomyocytes. *, p < 0.01 by one-way ANOVA with Tukey's multiple comparisons post hoc test. Ctl, control. E, phospho and total AMPK protein levels upon 2-AG treatment for 1 h, preceded by treatment with KN, STO, or OZ for 1 h in LKB1-deficient HeLa cells. F, phospho and total AMPK protein levels upon 2-AG treatment for 1 h, preceded by treatment with CC, KN, STO, or OZ for 1 h in primary rat cardiomyocytes. G, glucose uptake in primary rat cardiomyocytes upon 2-AG treatment for 1 h, preceded by treatment with CC, KN, STO, or OZ for 1 h. ACTB, β-actin. *, p < 0.01 by one-way ANOVA with Tukey's multiple comparisons post hoc test. Data are expressed as mean ± S.D. n = 4-5 independent experiments.
Additionally, 2-AG significantly induced glucose uptake in adult rat cardiomyocytes, whereas both STO and CC pretreatment markedly reduced this effect (Fig. 2G). Overall, these results indicate that 2-AG-induced AMPK activation and increased glucose uptake require the CB1 receptor and CaMKKβ in cardiomyocytes.
2-AG inhibits TNFα-induced IR
Next, to determine the effect of 2-AG on reversing IR, cardiomyocytes were challenged with TNFα in the presence or absence of 2-AG, followed by insulin stimulation (Fig. 3A). 2-AG co-treatment markedly reversed the inhibitory effect of TNFα on the insulin signaling pathway (i.e. it restored AKT, AS160, and GSK3β phosphorylation upon insulin stimulation) and increased both AMPK and ACC phosphorylation. Additionally, TNFα-induced activation of NF-κB (as represented by the phosphorylation of the p65 subunit of NF-κB) and the JNK/stress-activated protein kinase signaling pathway was significantly inhibited upon co-treatment with 2-AG (supplemental Fig. 2A). Furthermore, the inhibitory effect of TNFα on plasmalemmal GLUT4-myc translocation (Fig. 3B) and insulin-stimulated glucose uptake (Fig. 3C) was restored upon 2-AG co-treatment. Interestingly, AEA also partially restored insulin signaling and increased glucose uptake along with AMPK activation in TNFα-challenged cardiomyocytes (supplemental Fig. 2, B and C), indicating that both endocannabinoids ameliorate inflammatory stress-induced IR. However, in comparison with AEA, 2-AG exhibited stronger beneficial effects. Next, to determine the molecular mechanism involved in 2-AG-mediated reversal of IR, we pharmacologically inhibited the CaMKKβ and AMPK signaling pathways in primary cardiomyocytes. In the presence of either STO or CC, 2-AG failed to increase AMPK or ACC phosphorylation and failed to restore the TNFα-induced reduction in AKT, AS160, or GSK3β phosphorylation (supplemental Fig. 2D). Similarly, in the presence of STO or CC, 2-AG failed to enhance glucose uptake or restore insulin-stimulated glucose uptake in TNFα-challenged cardiomyocytes (supplemental Fig. 2E).
(Figure 3 legend, in part.) 2-AG reverses inflammatory stress-induced IR. A, representative Western-blotting analysis of the insulin (Ins) signaling pathway following insulin stimulation. p, phospho; t, total. B, insulin-stimulated GLUT4-myc translocation. *, p < 0.01; #, p = 0.043; two-way ANOVA with Bonferroni's multiple comparisons post hoc test. Ctl, control. C, insulin-stimulated glucose uptake. *, p < 0.01 by two-way ANOVA with Bonferroni's multiple comparisons post hoc test. ACTB, β-actin. D, relative mRNA levels of the proinflammatory markers, Glut4, and Ppargc1a. *, p < 0.01 by one-way ANOVA with Tukey's multiple comparisons post hoc test.
E, glucose oxidase assay for quantitative determination of intracellular glycogen content in cardiomyocytes subjected to 1-h pretreatment with STO or CC, followed by 16 h of TNFα and/or 2-AG treatment as indicated. *, p < 0.01 by one-way ANOVA with Tukey's multiple comparisons post hoc test. Data are expressed as mean ± S.D. n = 4-5 independent experiments in primary rat cardiomyocytes.

Furthermore, we determined the anti-inflammatory effect of 2-AG and the involvement of AMPK in the context of TNFα-induced IR. Gene expression analysis demonstrated that TNFα significantly induced the mRNA levels of the proinflammatory markers Nfkb1, Il6, Cxcl8, and Ccl2 (Fig. 3D). Conversely, TNFα exposure led to a decrease in gene expression of both Glut4 and Ppargc1a, the latter being a master cardiac transcriptional coactivator (12). 2-AG co-treatment decreased inflammatory stress, as evidenced by a decrease in the mRNA levels of Nfkb1, Il6, Cxcl8, and Ccl2, and increased cardiomyocyte energy efficiency by up-regulating Glut4 and Ppargc1a mRNA levels. However, pretreatment with either STO or CC blocked the beneficial effects of 2-AG in cardiomyocytes. Similarly, the 2-AG-mediated increase in intracellular glycogen content, either in the presence or absence of TNFα, was dramatically reduced in the presence of the inhibitors (Fig. 3E). Overall, these results strongly indicate that 2-AG, via CaMKKβ, activates AMPK, exerts anti-inflammatory effects, reverses IR, and restores insulin-stimulated glucose uptake in TNFα-challenged cardiomyocytes.

2-AG inhibits IR in hESC-CMs

To address the role of 2-AG in in vitro human pathological models of IR, we challenged hESC-CMs with either TNFα or FFA (the latter representing an overarching, well established cell model of diet-induced IR in vivo). Differentiation of hESC to functional cardiomyocytes was confirmed by gene expression analysis showing a robust induction of the key cardiac structural and functional markers α-myosin heavy chain (MYH6) and cardiac troponin T type 2 (TNNT2), respectively (Fig. 5A). Both TNFα (Fig. 5B) and FFA (supplemental Fig. 4A) increased gene transcription of the proinflammatory markers NFKB1, IL6, and CXCL8 and inhibited PPARGC1A gene expression. Conversely, co-treatment of 2-AG led to a marked decrease in the mRNA levels of all proinflammatory markers along with an increase in PPARGC1A gene expression. Pretreatment with the inhibitors AM251, STO, or CC, representing key checkpoints in the 2-AG-AMPK signaling pathway, abolished the anti-inflammatory effect of 2-AG on TNFα- or FFA-induced inflammation. Similarly, insulin-stimulated phosphorylation of AKT, AS160, and GSK3β was strongly inhibited by both TNFα (Fig. 5C) and FFA (supplemental Fig. 4B), and 2-AG co-treatment restored the phosphorylation levels of these key proteins involved in the insulin signaling pathway along with induction of AMPK and ACC phosphorylation. Finally, both TNFα- and FFA-mediated inhibition of insulin-stimulated glucose uptake were significantly reversed in the presence of 2-AG, and, as expected, pretreatment with inhibitors completely blocked the ameliorative effect of 2-AG in IR (Fig. 5D and supplemental Fig. 4C).
The results obtained in hESC-CMs suggest that 2-AG, via CB1R and CaMKKβ, activates the AMPK pathway to counteract FFA- and TNFα-induced inflammatory stress and IR and subsequently restores metabolic homeostasis of cardiomyocytes by potentiating insulin-stimulated glucose uptake (Fig. 6). Overall, these results clearly indicate that 2-AG ameliorates FFA- and TNFα-induced IR.

Discussion

Evidence points to inflammation-induced cytokine production and/or high plasma fatty acid levels as primary etiologic factors in the development of cardiac IR. Here we demonstrated that inflammation induces IR and perturbs glucose metabolism in cardiomyocytes. Similarly, exposure to FFA produced IR in cardiomyocytes, in accordance with earlier findings (2). We show that, irrespective of the cause, IR can be alleviated by 2-AG treatment. Mechanistically, 2-AG, via the CB1 receptor and CaMKKβ, activates AMPK. Treatment with 2-AG decreased the gene expression of proinflammatory cytokines, reversed IR, and enhanced glucose uptake, thus restoring metabolic homeostasis of cardiomyocytes. Pharmacological inhibition or genetic interference with the AMPK signaling pathway impaired these 2-AG actions, suggesting that AMPK is mediating the beneficial effects. These findings identify 2-AG as a key signaling molecule against IR and inflammation.

Previous reports have demonstrated the detailed mechanism of TNFα-mediated inhibition of insulin signaling to induce IR in various metabolically relevant tissues (4,5). Here we have delineated the effect of TNFα on insulin signaling and the functional consequences on cardiomyocytes, a topic that is remarkably understudied. Our data demonstrated that TNFα inhibits the insulin-stimulated phosphorylation of its downstream effectors AKT, AS160, and GSK3β to promote IR. The cardiac system normally responds to injury by altering substrate metabolism from the use of fatty acids toward glucose uptake and utilization. IR prevents this adaptive response and aggravates the injury by contributing to lipotoxicity and inflammation, among other stressors. These effects are in line with our observation of reduced insulin-stimulated GLUT4 translocation, glucose uptake, and intracellular glycogen content in TNFα-challenged cardiomyocytes, constituting a novel model to study metabolic derangements observed during cardiac inflammation.

A previous report (9) has indicated that the plant-derived cannabinoid trans-Δ9-tetrahydrocannabinol can activate AMPK, an important integrator of signals managing energy balance and acting as a protective response to energy stress during metabolic deregulations (13,14). Being highly expressed in the cardiac system, AMPK has been demonstrated to have cardioprotective effects (15). We observed that the endocannabinoid 2-AG activates the AMPK signaling pathway in various cell types in a CB1R-CaMKKβ-dependent manner. This observation is in line with previous reports suggesting that 2-AG induces a rapid, transient increase in intracellular Ca2+ via CB1R stimulation (16); on the other hand, increased intracellular Ca2+ leads to AMPK activation (17). Additionally, we observed that 2-AG, via AMPK activation, enhances basal glucose uptake in cardiomyocytes, in accordance with the contraction-stimulated rise in AMPK activity and consequent increase of substrate uptake (15).
Inflammation (induced by TNFα) and lipid oversupply (FFA) perturb insulin sensitivity and restrict glucose uptake in cardiomyocytes in the long term, leading to DCM. The endocannabinoid 2-AG, via CB1R and CaMKKβ, activates the AMPK signaling pathway to inhibit inflammation, restore insulin sensitivity, and facilitate glucose uptake in cardiomyocytes, implying that 2-AG treatment can be a viable therapeutic approach to restore cardiometabolic homeostasis via energy balance in IR.

However, conflicting results regarding the ECS under pathological conditions have emerged from recent studies, with experimental disease models demonstrating that pharmacological approaches that modulate the same specific targets of the ECS can have both positive and negative effects (18,19). In this context, it should be noted that a substantial amount of 2-AG has been detected in various tissues, including the liver, kidney, spleen, several regions of the brain, lung, and plasma, as well as in human milk (20,21). Furthermore, stimulus-induced generation of 2-AG in various tissues adds to the complexity in ascertaining the systemic effects of 2-AG. Therefore, the biosynthetic pathways for 2-AG appear to differ depending on the types of tissues and cells and the types of stimuli. Thus, on one hand, an elevated level of 2-AG upon chronic alcohol challenge has been implicated in the progression of alcoholic steatohepatitis (19), 2-AG-induced activation of the JNK signaling pathway has been shown to aberrantly trigger hepatic gluconeogenesis (11), and rodent models with CB1R deficiency or antagonism have been demonstrated to be resistant to diet-induced obesity and insulin resistance, specifically in liver (22)(23)(24). On the other hand, our results as well as several previous studies have strongly indicated potential beneficial effects of 2-AG in the context of the cardiovascular system, neurotransmission, and immunomodulation (7,8,18,20,24). Nevertheless, the precise physiological function of 2-AG in acute and chronic inflammation and/or immune responses remains relatively obscure. Several questions still remain to be answered. What does the serum level of 2-AG indicate? How does it change in the context of metabolic disease, such as type 2 diabetes? In this context, our results demonstrating diminished cardiac Cb1r and Cb2r gene expression in db/db mice indicate a potential correlation between a dysfunctional ECS and cardiac IR. However, further studies are necessary for a better understanding of the metabolism and mode of action of 2-AG in a cell type- and tissue-specific manner to decipher the role of this highly complex signal transduction system in distinct pathological conditions. Both in vitro and in vivo, the ECS has been shown to exhibit immunomodulatory properties (25). In our inflammation-induced IR model, 2-AG reverses IR by inhibiting the gene expression of proinflammatory cytokines, restoring the insulin signaling pathway, and enhancing glucose uptake. Furthermore, 2-AG treatment also led to an increase in Glut4 and Ppargc1a mRNA levels under these conditions. These findings are important in the context of DCM, as overexpression of human GLUT4 in db/db mice was demonstrated to normalize cardiac contractile dysfunction (26).
Several reports indicate that PGC1α, in response to physiologic stressors, initiates biological responses that equip cardiomyocytes to meet energy demands by augmenting mitochondrial biogenesis, cellular respiration rates, and substrate utilization (12). Interestingly, both Ca2+ and AMPK have been positively implicated in the regulation of PGC1α. Indeed, we observed that pharmacological inhibition or genetic interference of CaMKKβ and AMPK resulted in attenuation of 2-AG-mediated induction of Ppargc1a as well as Glut4. Hence, we speculate that they are key contributors in mediating the beneficial effects of 2-AG in TNFα-induced IR and, possibly, for treatment of DCM. Because the majority of ECS-related studies have been performed in mouse models, the relevance of 2-AG in humans, in the context of inflammation and IR, has remained largely elusive. In this study, we addressed this issue by utilizing hESC-CMs as a model to mimic adult human cardiomyocytes. The responsiveness of hESC-CMs to FFA- and TNFα-induced IR further indicates the possible value of this model as a novel tool for metabolic characterization of various human pathological conditions. Furthermore, in line with our previous results in rodent cardiomyocytes, the ameliorative effect of 2-AG on inflammatory stress, insulin signaling, and substrate utilization was observed to occur via the CB1R-CaMKKβ-AMPK signaling cascade, indicating that this novel signaling cascade is conserved in humans. Overall, our study unravels an unprecedented role of 2-AG in the regulation of inflammation-induced IR. Endocannabinoids are crucial to bioregulation, and, because of their hydrophobic nature, their main actions are limited to autocrine or paracrine effects rather than systemic. With scientific evidence suggesting their conflicting roles in inflammation, IR, and energy metabolism, e.g. in the heart as opposed to the liver, tissue-specific application of endocannabinoids may be a valuable tool to counter cardiac IR. Further investigation of this exciting field will provide insights into the mechanisms of health and disease and provide novel therapeutic avenues.

Isolation and culturing of primary rat cardiomyocytes

Cardiac myocytes were isolated from male Lewis rats (200-250 g) using a Langendorff perfusion system and a Krebs-Henseleit bicarbonate medium equilibrated with a 95% O2/5% CO2 gas phase at 37°C as described previously (27,28).

Animals

Male heterozygous non-diabetic (db/+) and diabetic (db/db) C57BL/Ks mice (10 weeks of age) were purchased from Charles River Laboratories. The animals were housed in the specific pathogen-free animal facility of Eindhoven University of Technology in individually ventilated cages and in the conventional facility of Muenster University under controlled temperature (23°C) and humidity (50%) with a 12:12-h dark-light cycle. The mice had ad libitum access to water and food (5K52 LabDiet, 22 kcal% protein, 16 kcal% fat, 62 kcal% carbohydrates). The animals were sacrificed by cervical dislocation under isoflurane anesthesia, and the heart was excised and frozen in liquid nitrogen for quantitative PCR analysis. All procedures conformed to Directive 2010/63/EU of the European Parliament and were approved by the Animal Experimental Committees of Maastricht University (The Netherlands).
RNA expression analysis

For quantitative PCR analysis, RNA was isolated using TRIzol (Amresco), and 1 μg was used to synthesize cDNA (high-capacity cDNA reverse transcription kit, Life Technologies). Amplification was performed using iTaq Universal SYBR Green Supermix (Bio-Rad) and the ABI 7900HT fast real-time PCR system (Life Technologies). The genes Cyclophilin A (for HL-1 and primary rat cardiomyocytes) and TBP (for hESC-CMs) were used for normalization, and the average of the control cell values was used for calculating relative expression using the ΔΔCT method.

Glucose uptake

Following the indicated treatments, the medium was replaced with Krebs-Ringer bicarbonate buffer with 0.5% BSA, and cells were treated with PBS and 100 nM insulin for 30 min and then incubated with [3H]2-deoxyglucose (1:20 in cold 10 mM 2-deoxyglucose) for 10 min. Cells were lysed in 1% Triton X-100, and radioactivity was determined by β-scintillation counting. Values were normalized to protein content (BCA).

Biochemical intracellular glycogen measurement

Extraction of intracellular glycogen was performed as described previously (29). Final hydrolysates were used for glucose determination using a glucose (GO) assay kit (Sigma) according to the instructions of the manufacturer and expressed as microgram of glucose per milligram of protein.

GLUT4-myc translocation assay (GLUT4-myc cell surface staining)

Plasmalemmal GLUT4 was detected using a GLUT4 variant carrying a myc tag on its first extracellular epitope, as described previously (30).

Cardiomyocyte differentiation from hES cells

All experiments were performed using the H7 (NIHhESC-10-0061) hESC lines, kindly provided by the WiCell Research Institute (Madison, WI). hESCs were maintained and differentiated according to the protocol of the manufacturer (Gibco) and as described previously (31). Briefly, undifferentiated hESCs were detached from the culture plate with 0.5 mM EDTA (UltraPure 0.5 M EDTA (pH 8.0), Invitrogen) and seeded onto Geltrex-coated plates (Geltrex LDEV-free hESC-qualified reduced growth factor basement membrane matrix, Gibco). Cells were refed daily with Essential 8 medium (Essential 8 basal medium containing 15 mM HEPES, L-glutamine, and sodium bicarbonate at 1.743 g/liter, supplemented with Essential 8 supplement (Gibco)) to expand the culture. When the cells reached a 30-50% confluent state (±4 days of culture), cardiac differentiation was induced by replacement of Essential 8 medium with cardiomyocyte differentiation medium (PSC cardiomyocyte differentiation kit (Gibco) containing differentiation medium A, differentiation medium B, and maintenance medium). Cells were kept in differentiation medium A for 2 days, which was replaced with differentiation medium B for the following 2 days, after which cells were further maintained on maintenance medium. Cultures were refed every other day and monitored daily for signs of contractile capacity. After 30 days of in vitro differentiation, cells were ready for experimental purposes. All treatments were performed as indicated for HL-1 and primary rat cardiomyocytes.

Statistical analysis

Student's unpaired t tests or ANOVA were performed in GraphPad Prism v.6.01. Differences were considered statistically significant at p < 0.05.
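As an illustration of the ΔΔCT normalisation described above, the short sketch below computes a relative expression value in Python. The Ct numbers and the helper function are illustrative placeholders, not data or code from the study.

```python
# Minimal sketch of the 2^-ddCt relative-expression calculation described above.
# All Ct values below are made-up placeholders.

import numpy as np

def relative_expression(ct_target, ct_reference, control_mean_dct):
    """Return the fold change (2^-ddCt) of one sample relative to the control group."""
    dct = ct_target - ct_reference          # normalise to the housekeeping gene
    ddct = dct - control_mean_dct           # compare with the average control dCt
    return 2.0 ** (-ddct)

# dCt values of hypothetical control replicates
control_dcts = np.array([6.1, 6.3, 5.9, 6.0])

# One hypothetical treated sample: target gene Ct and reference gene Ct
fold = relative_expression(ct_target=24.8, ct_reference=19.2,
                           control_mean_dct=control_dcts.mean())
print(f"relative expression vs. control: {fold:.2f}-fold")
```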
2018-04-03T05:25:16.805Z
2017-03-20T00:00:00.000
{ "year": 2017, "sha1": "74983b0267d273004408d189adb3abbaf7f884dd", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/292/17/7105.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "92eaad23b4fa188e2320a68ef4ccb2f1a4df11d1", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
72971773
pes2o/s2orc
v3-fos-license
Perceptions regarding tobacco usage among adolescents & young adults in a district of Western Uttar Pradesh (India): a qualitative study

INTRODUCTION

In India, around 194 million men and 45 million women use tobacco in smoked or smokeless forms as per WHO data. 1 Indians are the second biggest users of tobacco in many forms such as bidi, gutka, khaini, paan-masala, hukka, cigarettes etc. 2,3 The irony in India is that the most dangerous time for starting tobacco use is adolescence, from age 10 years onwards, and early adulthood below 30 years, as nearly 55,500 adolescents are currently using tobacco every day in India, 4 and that too because of the low socio-economic status of adolescents in the family. [1][2][3][4][5] A high prevalence of adolescent tobacco users across many states of India has been reported in studies conducted in Uttar Pradesh, Chennai and other North-Eastern states of India. [5][6][7][8] Moreover, these studies also indicate that more than 25% of adolescents aged 13 to 15 years in India had used tobacco in any form, and 17% were current users. [9][10] Tobacco use is therefore increasing among Indian school- and college-going adolescents as a persistent, alarming issue that continues into adulthood. 11 This alarming rise may be attributed to factors such as poor perceptions. Studies using qualitative techniques have revealed a few critical issues in perceptions: (a) parents and peers can strongly influence youth tobacco use; (b) tobacco chewing in the form of gutkha is considered less harmful and more accessible than smoking cigarettes; and (c) students are positive about the role government can play in tobacco control, but this is not the complete picture, which is actually different in the community. 12 Studies also reveal that not only are traditional forms, such as betel quid and tobacco with lime, very commonly used, but the use of new products is also increasing, not only among men but also among children, teenagers, women of reproductive age, and medical and dental students. 13 Previous studies in countries apart from India also indicate that male tobacco users believed smokeless tobacco to be less harmful to physical health than cigarette smoking, suggesting that adolescents and young adults often have poor knowledge regarding the harmful effects of tobacco. 14 The poor perceptions also indicate that inadequate parental monitoring and association with deviant peers are further factors operating in adolescent harmful tobacco use.
15 The need of the hour is therefore qualitative studies, which can explore assessments and comprehensive measures of tobacco use as well as focus on perceptions, so as to provide specific scientific evidence about how decisions on tobacco use are shaped for different tobacco products. 16-18 With this background in mind, the authors selected this research topic to study adolescents and young adults in a rural area of India.

METHODS

All the study subjects (adolescents [AD] and young adults [YA]) residing in the field practice area of the Rural Health Training Centre (RHTC) of the Department of Community Medicine, Muzaffarnagar Medical College, Muzaffarnagar (UP) were enrolled, and they were selected using a simple random sampling technique. A 50% prevalence was assumed as per WHO criteria for the calculation of sample size, as no clear-cut data were available for the specified age groups. Sampling was done in such a way that 50% of subjects were adolescents and 50% young adults, i.e. at least 200 in each group, giving a total sample size of 400 subjects aged 10-30 years.

Inclusion criteria of tobacco usage: The criteria for selection of cases were as per the working definitions of tobacco usage given by the WHO. Tobacco use in our study was defined as any habitual use of the tobacco plant leaf and its products, in any form, as per the responses of subjects.

Data collection technique: Field investigators were adequately trained for the collection of data using a semi-structured interview schedule. First, a pilot study on 40 subjects (a 10% sample) was undertaken, and the required corrections were incorporated.

Statistical analysis: The data were tabulated in Epi Info version 7.1.3.3 and analyzed using this software. Nominal data were analyzed using the Chi-square test to assess statistical associations (a brief worked sketch of these calculations is given after the conclusion below).

RESULTS

Some 52.4% of subjects in the AD group, as compared to 49.5% in the YA group, had a somewhat positive attitude towards banning smoking at home (Figure 1). Moreover, the majority of subjects had no knowledge regarding the harmful effects of tobacco usage in both the AD and YA groups (50% and 41.2%, respectively) (Figure 1). A maximum of 43.7% of subjects in the AD group wanted to quit tobacco products in order to save money, whereas a maximum of 40% of subjects in the YA group wanted to quit tobacco in order to remain healthy (Figure 2).

DISCUSSION

In the present study, 41% of adolescents (AD) and 54.5% of young adults (YA) were using some form of tobacco, similar to previous studies in related age groups by Awasthi S et al. (2010) 5 and close to the finding of tobacco use by 30% of the population aged 15 years or older (47% of men and 14% of women) by Rani M et al. (2003). 3 Previous studies have also reported prevalences of around 33% in Uttar Pradesh. 3 Many studies also indicate that more than 25% of adolescents aged 13 to 15 years in India had used tobacco, and 17% reported current use. [9][10] The higher prevalence obtained in our study may be due to the socio-demographic and low socio-economic profile of adolescents and young adults in district Muzaffarnagar (Table 1); related factors, such as the utilization of health services in their area by adolescents and youth, may also contribute, as found in previous studies. 17 Many studies have also found that smoking prevalence is higher among disadvantaged groups, and disadvantaged smokers often face higher exposure to tobacco's harms.
Thus there is also a strong socioeconomic element in tobacco use in India among both men and women; moreover, there is a clear inverse relationship between level of wealth and prevalence of tobacco use. [17,18] A study in Kerala, India, by Thankappan KR & Thresia CU (2007) also indicated that tobacco use was significantly more common among the low socio-economic (SE) groups compared to the high SE group, 19 and this finding is similar to that of our study. There is also considerable variation in tobacco use by members of different religions, with Sikhs and Jains reporting the lowest prevalence [NFHS-3 (2005-6)]. 20 However, in our study a predominantly Muslim population was taking tobacco, owing to their large numbers in our study area, Muzaffarnagar (Table 1). In our study, the age of initiation of tobacco use was most commonly in the age group 16-19 years (73.8%) in the AD group, whereas in young adults it was most commonly in the age group 26-30 years (53.2%). According to the WHO, the prevalence of tobacco use increases with age. In 2005-2006, the reported prevalence of tobacco use was 3.5% among 15- to 19-year-olds and 9.1% among 20- to 34-year-olds. 18 Results from the 2006 GYTS also show that 1.6% of 13- to 15-year-old students smoked cigarettes, and another 8.5% used other tobacco products. Further, among those who were ever-smokers, more than half began before the age of 10. 3,18 This may be due to the fact that existing factors such as reduced social support for quitting, low motivation to quit, stronger addiction to tobacco and lack of self-efficacy play an important role. 21 On the whole, the potential risk factors for tobacco use found in previous studies are age, psychosomatic status, a boring family atmosphere, not living with both father and mother, and health perceptions, 22 and these factors are also substantiated by our study. The study by Tyas SL & Pederson LL (1998) 23 also found that adolescent smoking was associated with age, ethnicity, family structure, parental socioeconomic status, personal income, parental smoking, parental attitudes, sibling smoking, peer smoking, peer attitudes and norms, family environment, attachment to family and friends, school factors, risk behaviours, lifestyle, stress, depression/distress, self-esteem, attitudes, and health concerns, and that other possible factors, such as social, personal, economic, environmental, biological, and physiological influences, may also influence smoking behaviour; all these factors are also well highlighted in our present study. In our present study, the main reason for tobacco use in the AD group was pressure from the peer group (79.2%), whereas in the YA group the main reason was feeling stressed/anxious (61.4%), and both these responses were statistically significant (P < 0.001 and P < 0.05, respectively). In both the AD and YA groups (59.7% and 56.8%, respectively), the families of most subjects were not using tobacco in any form, but this was not statistically significant (P > 0.05). However, in both the AD and YA groups (52.5% and 72.5%, respectively) the harmful effects of tobacco products had been explained to subjects by either family members or educational institutions, and this was found to be statistically significant (P < 0.05). This finding of our study is also similar to a previous study by Song AV et al. (2009) in America, in which it was found that smoking initiation is directly related to smoking-related perceptions of risks and benefits.
24 The previous study also found associations between attitudinal factors and both protection against smoking and reduced smoking among youth, and it stressed that, rather than solely focusing on health risks as a way to deter adolescent smoking, the role of perceived social risks and benefits in adolescents' smoking may be an additional critical focus for intervention. 25,26 Moreover, in our present study it was also found that the majority of subjects had no knowledge regarding the harmful effects of tobacco usage in both the AD and YA groups (50% and 41.2%, respectively) (Figure 1). When asked about quitting tobacco, a maximum of 43.7% of subjects in the AD group responded that they would quit tobacco products in order to save money, whereas a maximum of 40% of subjects in the YA group wanted to quit tobacco in order to remain healthy (Figure 2). This signifies the poor knowledge and attitudes of the AD and YA group subjects. Studies in the past have also found that perceptions of risk decrease with time, that adolescents with personal smoking experience report decreasing perceptions of risk and increasing perceptions of benefits over time, and that changes in risk perceptions may also be influenced by personal and vicarious experience with smoking. 27 The study of Bhimarasetty DM et al. (2013) also revealed that, among the various anti-tobacco measures, the most effective measure for adolescents can be "teaching the harmful effects of smoking in schools", 28 as is also signified by our study. In our study we found that in both age groups (AD and YA) the usage of tobacco was mainly in smoked form (62.1% in AD and 54.1% in YA), mostly in the form of cigarettes in AD (56.8%) and bidi in YA (69.4%). The study of Nichter M et al. (2004) likewise revealed that, among male college students in India, cigarettes were considered one of the most addictive products because they are made of better quality tobacco and are milder and smoother to smoke. 29 Social susceptibility to chewing tobacco and social susceptibility to smoking have been found to be strong correlates of current tobacco use among government school students. 30 The study of Park HK et al. (2012) on middle school students in Saudi Arabia also found risk factors affecting tobacco use similar to those in our study, such as perceptions; pressure from family, friends, and teachers to use tobacco; and easy access to tobacco for adolescents. 31 Social influences, attitudes towards tobacco use, religious beliefs, and access to tobacco products were significantly associated with attitudes towards tobacco use and future intention to use in the study of Park HK et al. (2012), 31 similar to the findings of our study. We also suggest protective factors such as parents' help; support from family, friends, and teachers; accessibility to tobacco; school and college performance; family income; father's education; and district of residence, similar to the study of Park HK et al. (2012). 31

CONCLUSION

The tobacco usage scenario of adolescents and young people is not on the healthier side in India, as clouds of health risks constantly hang over them, despite the presence of solutions in the form of many health and nutrition programmes targeted towards them.
Tobacco usage is emerging as the rising tip of an iceberg of problems among adolescents and young adults in rural areas, as their perceptions regarding tobacco usage are not healthy; serious efforts from the government, medical colleges, the primary health care system and also at the family level are constantly required to safeguard the health of our future generations. More in-depth qualitative and interventional studies in this area also need to be done in the future.
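As a brief worked sketch of the sample-size calculation and chi-square analysis described in the methods above: the 50% assumed prevalence is taken from the paper, while the 7% margin of error, the 95% confidence level and the 2×2 cell counts are assumptions introduced here only to make the example runnable, and scipy stands in for the Epi Info software actually used.

```python
# Prevalence-based sample size, n = Z^2 * p * (1 - p) / d^2, and a chi-square
# test of association; all specific numbers besides p = 0.5 are illustrative.

import math
from scipy.stats import norm, chi2_contingency

def sample_size(prevalence=0.5, margin=0.07, confidence=0.95):
    z = norm.ppf(1 - (1 - confidence) / 2)        # 1.96 for 95% confidence
    n = (z ** 2) * prevalence * (1 - prevalence) / margin ** 2
    return math.ceil(n)

print("required n per group:", sample_size())      # ~196 with these assumptions

# Chi-square test on a hypothetical 2x2 table: tobacco use (yes/no) by age group
table = [[82, 118],    # adolescents: users, non-users (made-up counts)
         [109, 91]]    # young adults: users, non-users (made-up counts)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```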
2019-03-11T13:03:32.175Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "19f48e18dfad376faf49956ea53f7ddfae5b1dcf", "oa_license": null, "oa_url": "https://ijcmph.com/index.php/ijcmph/article/download/929/800", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "fd942529c76acc7e84d5c558851fb6edeb98c869", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
246705297
pes2o/s2orc
v3-fos-license
Central nervous system tumors in children under 5 years of age: a report on treatment burden, survival and long-term outcomes Purpose The challenges of treating central nervous system (CNS) tumors in young children are many. These include age-specific tumor characteristics, limited treatment options, and susceptibility of the developing CNS to cytotoxic therapy. The aim of this study was to analyze the long-term survival, health-related, and educational/occupational outcomes of this vulnerable patient population. Methods Retrospective study of 128 children diagnosed with a CNS tumor under 5 years of age at a single center in Switzerland between 1990 and 2019. Results Median age at diagnosis was 1.81 years [IQR, 0.98–3.17]. Median follow-up time of surviving patients was 8.39 years [range, 0.74–23.65]. The main tumor subtypes were pediatric low-grade glioma (36%), pediatric high-grade glioma (11%), ependymoma (16%), medulloblastoma (11%), other embryonal tumors (7%), germ cell tumors (3%), choroid plexus tumors (6%), and others (9%). The 5-year overall survival (OS) was 78.8% (95% CI, 71.8–86.4%) for the whole cohort. Eighty-seven percent of survivors > 5 years had any tumor- or treatment-related sequelae with 61% neurological complications, 30% endocrine sequelae, 17% hearing impairment, and 56% visual impairment at last follow-up. Most patients (72%) attended regular school or worked in a skilled job at last follow-up. Conclusion Young children diagnosed with a CNS tumor experience a range of complications after treatment, many of which are long-lasting and potentially debilitating. Our findings highlight the vulnerabilities of this population, the need for long-term support and strategies for rehabilitation, specifically tailored for young children. Supplementary Information The online version contains supplementary material available at 10.1007/s11060-022-03963-3. Introduction CNS tumors are the most common pediatric solid cancers. The average annual incidence rate is 6.18 per 100,000 in 0-4-year-olds in the U.S. [1]. This is higher than what is reported in age groups 5-9 years (5.49 per 100,000) and 10-14 years (5.83 per 100,000) [1]. In recent years, molecular profiling studies led to major advances in the understanding and classification of pediatric CNS tumors [2][3][4]. Development of new therapies is ongoing and expected to increase further patient survival, mitigate long-term treatment toxicities, and improve the quality of life of survivors. Notwithstanding, CNS tumors remain the most common cause of cancer-related death in children and adolescents. Young children (< 5 years of age) are particularly prone to long-term sequelae of cytotoxic therapies [5]. Radiotherapy (RT) is associated with neurocognitive and psychological impairment, increased risk of stroke, secondary malignancies, hearing loss, and neuroendocrine deficiencies [6]. Due to its impact on the developing CNS, RT is often entirely omitted or delayed in the management of brain tumors in young children, risking suboptimal disease control. Radiation-sparing regimens used as an alternative in this population often comprise high-dose chemotherapy with stem cell rescue as a consolidation therapy or intensified intrathecal chemotherapy. Increasing the intensity of chemotherapy has led to improved disease control, however several studies report serious events including toxic deaths, mostly due to myelosuppression, sepsis and/or organ dysfunction [7][8][9][10][11]. 
Health care professionals often face the dilemma between augmenting treatment intensity for optimal disease control and minimizing acute toxicity as well as the risk of longterm sequelae [12]. Clinical outcome and quality of life of CNS tumor survivors pose a serious concern to treating physicians as well as parents, but literature focusing on young children is scarce. Here, we describe the long-term survival, health, and academic outcomes of young children at the time of CNS tumor diagnosis and treated at our institution over the last 3 decades. Patient population In this retrospective study, young children aged 0-5 years with a newly diagnosed primary CNS tumor and treated at the University Children's Hospital of Zurich between January 1990 and December 2019 were identified. Date of diagnosis was defined as either date of histological confirmation of tissue sample or, if not available, date of diagnostic imaging. The design of the study was approved by the Ethics Committee of the Canton of Zurich. A general research consent was implemented at our institution in 2015. The need for informed consent was waived for deceased patients, patients diagnosed prior to 2015 and lost to follow-up. Patients with documented refusal to participate in research were excluded. Clinical characteristics and long-term outcome measures The baseline clinical characteristics included age at diagnosis, sex, tumor characteristics (histology, location, dissemination status), hydrocephalus at diagnosis, underlying genetic predisposition and treatment details. Extent of resection was determined based on neurosurgical reports and postsurgical MRI when available; if a residual tumor was described in the neurosurgical report and/or postsurgical MRI, the tumor was considered partially resected. Progression-free survival (PFS) was calculated from date of diagnosis to disease progression leading to change in treatment or death in patients without such a progression, and overall survival (OS) from date of diagnosis to death. In the absence of progression and for patients alive at last follow-up, PFS and OS were censored at the last documented date that the patient was seen by a physician (last follow-up). The long-term health-related outcome information was extracted from the medical charts of all patients and included neurologic status, endocrine function, hearing loss, visual acuity, secondary malignancies, and cerebral vasculopathy. Ototoxicity was graded according to Chang after review of available audiograms [13]. Neurologic status was assessed during exams at regular physician's visits and changes present in the most recent neurological examination were summarized. Neurologic deficits collected included cranial nerve deficit, motor and sensory deficits, coordination, gait, reflexes, and tonus. At our pediatric institution, patients are followed up until 20 years of age and information on schooling and employment is regularly documented at long-term follow-up clinic visits. Academic achievement was categorized into two groups. The first group contains patients who attend regular school or work in a skilled job and the second group includes patients who attend an assisted or modified school program e.g. with smaller student numbers per class and additional assistance or who work in an unskilled or assisted job. A skilled job was defined as student with graduation after vocational training with a Federal Diploma (Fig. S3). An unskilled job includes only training on-site without graduation. 
Preschool-aged children were excluded from this analysis.

Statistical analysis

Descriptive analyses were used to summarize the study population. Kaplan-Meier survival curves were generated to estimate OS probability and progression-free survival probability. The log-rank test was performed for comparison between different subgroups. To quantify the association between different treatment modalities and long-term outcomes, Fisher's exact test was used (an illustrative sketch of these analyses with open-source tools is given at the end of this article).

Patient cohort characteristics

We identified 164 children under 5 years of age and diagnosed with a primary CNS tumor between 1990 and 2019 at the University Children's Hospital of Zurich, the largest pediatric oncology center in Switzerland. Thirty-six patients were excluded: 28 patients due to lack of sufficient information and 8 patients due to refusal to participate in research. Thus, 128 patients were included in the final analysis (Table S1). Sixty-three percent of patients received chemotherapy, most of them enrolled on or treated as per tumor-specific protocols (Fig. S2; Table S2) [14,15]. Most of the deceased patients died within the first 5 years after diagnosis. Of those patients who died, the majority (25 of 28, 92.8%) died of tumor progression/relapse. One patient initially diagnosed with medulloblastoma died after diagnosis of radiation-induced high-grade glioma. The cause of death was undocumented or unclear for two patients. Fourteen patients were over 18 years of age at last follow-up. As far as documented, none of them had conceived a child. Four patients (4 of 8 females) had ovarian insufficiency and two required estradiol replacement. For the male patients (n = 6), spermiograms were not available.

Visual outcomes

Visual impairment is multifactorial and prevalent also in the general population. Focusing on the subset of patients with optic pathway glioma and tumor-associated visual impairment, all patients with optic pathway glioma (n = 13; 2/13 patients with NF1) showed some degree of visual impairment at last follow-up, and four patients were blind in at least one eye.

Secondary malignancies

Three patients were diagnosed with a secondary malignancy during follow-up (Table 2). All of them had been irradiated by either proton or photon radiation, received chemotherapy, and underwent surgery. None of them had a known underlying genetic predisposition, including NF1. Importantly, the patient with ATRT and later MPNST had no evidence of germline alterations in SMARCB1, SMARCA4, NF1 or TP53. One patient succumbed to his secondary malignancy.

Cerebrovascular disease

Two out of 51 irradiated patients (3.9%) presented with a moyamoya vasculopathy during follow-up. Both had been diagnosed with posterior fossa ependymoma and had received focal proton radiation at below 4 years of age. In summary, among survivors followed for more than 5 years (n = 77), 87% presented with any tumor- or treatment-related sequelae, 61% had any neurological deficit, 30% presented with endocrine sequelae (81% of them needing hormone replacement), 17% with hearing impairment, and 56% with visual impairment at last follow-up.

Education and occupational outcomes

Thirty-seven patients had not yet reached school age at the time of last follow-up. Information on schooling was not available for 6 patients. Of the remaining 85 patients, 71.8% were able to attend regular school and/or work in a skilled job, whereas 28.2% were schooled in a modified program or working in an unskilled or assisted job (Fig. 3).
Of the patients over 18 years of age at last follow-up (n = 14), one patient attended university. Fifteen of 68 (22%) patients 1 year or older at diagnosis were schooled in a special school environment or working in an unskilled job, compared to 9 out of 17 (53%) patients under 1 year of age at diagnosis. Despite limited patient numbers, this trend suggests lower academic achievement in children diagnosed in the first year of life. Discussion Among survivors of childhood cancer, the cumulative burden of chronic health conditions is highest in patients diagnosed with CNS malignancies [16]. Providing treatments which are both efficient and of acceptable toxicity is thus a great challenge in pediatric neuro-oncology, especially when treating young children. A report from the Childhood Cancer Survivor Study (CCSS) suggested that treatment transformation over the past decades has already decreased the overall treatment burden [17]. Nevertheless, the long-term impact of therapy in the immature CNS and in the developing child remains a major concern. In addition, CNS tumors in young children often display distinct clinical and biological features when compared to those in older children and adolescents [18][19][20][21]. In this study we report the survival, health-related and educational outcomes of young children with CNS tumors treated at a tertiary pediatric oncology center over a period of three decades. Importantly, our study provides long follow-up and comprehensive clinical outcome data on a large cohort of sequentially diagnosed, unselected patients in this age group. We found that mortality was largely due to tumor progression/relapse and predominantly within the first 5 years after diagnosis. The high prevalence of tumorand treatment-related sequelae (87%) highlights the need for close monitoring and long-term, multidisciplinary support strategies. With a 5-year OS of 78.8%, the survival outcome of our cohort is comparable to previous studies, albeit differences regarding patient characteristics and therapy between studies [1,[22][23][24]. One study including 35 children under 1 year of age at diagnosis reported a considerably lower 5-year OS of 57% [25]. The OS of pLGG in our cohort seems to be better than reported in previous studies [20,21], which were multi-institutional and thus more heterogenous in treatment approach. PFS was defined in our study as time from diagnosis to a progression leading to change in treatment. This may overestimate PFS compared to other studies with other definitions and should be considered when interpreting our results, especially for patients with pLGG. Other studies reported a higher mortality rate in children diagnosed under 1 year of age when compared to older children [20,26,27]. Despite a trend in literature towards worse outcome in children diagnosed in the first year of life, we could not confirm this finding in our cohort, noting however a trend towards worse OS in children diagnosed in the first 6 months of life (Fig. S1) [28]. This may be explained by differences in patient characteristics (age at diagnosis limited to 5 years or younger in this study) and tumor distribution (e.g. lower numbers of patients with ATRT in our cohort). Survival rates of children diagnosed with a CNS tumor seem to have increased over the last years [23,24], while the prognosis for high grade subtypes remains poor. 
Similarly, despite encouraging survival rates of the whole cohort, patients with pHGG and medulloblastoma had an unfavorable outcome, with a 5-year OS of 35.7% and 55.4%, respectively. A study summarizing the outcomes of a cohort of 20 survivors, previously diagnosed with a brain tumor in the first year of life, reported 70% with neurological dysfunction, 25% with endocrine dysfunction, 15% with hearing impairment, and 45% with visual impairment [25]. These numbers are comparable to our findings, despite differences in age of inclusion. A report from the CCSS showed that patients diagnosed with a CNS malignancy had a significantly higher risk of developing a chronic health condition 5 years after diagnosis when compared to their siblings, including endocrine, neurologic, or sensory deficits [29]. Hearing impairment was associated with radiation therapy and platinum chemotherapy, both known risk factors for sensorineural hearing loss [30,31]. A previous study from the CCSS reported hearing impairment in 12% of childhood CNS tumors survivors [30]. A study conducted at St. Jude Children's Hospital compared survivors of CNS tumors to survivors of non-CNS tumors exposed to high-risk ototoxic cancer therapy and reported a prevalence of 36% of severe hearing loss in CNS tumor survivors. There was no association between age at diagnosis and hearing loss [32]. Hypopituitarism was associated with radiotherapy, as previously described [33][34][35]. A recent study identified younger age, tumor location, and radiotherapy as relevant risk factors for developing hypothalamic-pituitary disease [36]. Due to lack of sufficient information, an analysis on radiation dose-dependent outcomes was not possible in our cohort. A report from the CCSS could not find a dose-dependent risk elevation in radiotherapy for endocrine deficits [37]. The important issue of fertility could not be addressed in a uniform manner in our cohort and should be addressed in future studies. We found that most (71.8%) of our school-aged patients or older at the time of analysis attended regular school or were employed in a skilled job. This is higher than what a previous study reported including children diagnosed with a CNS tumor in the first year of life [25]. A recent study with children aged 4 years or younger at diagnosis and treated with proton radiotherapy described a rate of 90% of children functioning in regular schools, 46% of them followed a specialized educational plan, whereas 23% and 36% had a classroom aid or outside tutor, respectively [38]. Interestingly, recent analyses of childhood cancer survivors showed that especially CNS malignancy, younger age at diagnosis, and radiotherapy increased the risk of unemployment and lower educational attainment [39][40][41]. Another analysis showed that childhood survivors of CNS tumors were less likely to complete high school than their siblings but could lower that risk by using special education support [42]. These findings further highlight the need to develop toxicity-sparing treatments and provide long-term support for these patients. The size and heterogeneity of our study population, as well as limitations associated with the retrospective nature of the study, need to be considered when interpreting our findings. Assessment of long-term outcome has not followed a standard procedure in terms of frequency and tools used, limiting interpatient comparisons. The classification of CNS tumors has undergone several important revisions over the decades covered in this study. 
The main tumor entities have been reclassified and subdivided into new subgroups, reflecting the heterogeneity in tumor biology, especially in pediatric brain tumors in young children [3,43]. Though molecular profiling was beyond the scope of this study, future studies dissecting the correlation between tumor biology and health-related outcome will be critical to understand the impact of tumor biology on long-term functional outcomes. A further limitation of our study is the lack of a comparison cohort of older pediatric and adolescent patients diagnosed with CNS tumors. Nevertheless, our findings provide a comprehensive overview of an unselected large cohort of patients, diagnosed and followed by a multidisciplinary team in a tertiary center over a period spanning 3 decades.

Conclusions

Young children are at a high risk for long-term morbidity after diagnosis of a CNS tumor. Encouragingly, though a vast proportion of survivors experience health-related sequelae, most were integrated in regular schools. Our study highlights the importance of long-term support strategies, tailored to young children. These include early screening for visual and hearing impairment, as well as endocrinopathy and neuropsychology assessments, to offer appropriate support. Advances in treatment modalities, including targeted anti-tumor therapies and improvement in high-precision radiation techniques, will hopefully lead to a further reduction in treatment burden and better long-term outcomes in these children.

Data availability: The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Conflict of interest: The authors have no relevant financial or non-financial interests to disclose.

Ethical approval: The design of the study was approved by the Ethics Committee of the Canton of Zurich, Switzerland (Nr. 2020-00801).

Consent to participate: A general research consent was implemented at our institution in 2015. The need for informed consent was waived for deceased patients, patients diagnosed prior to 2015 and lost to follow-up. Patients who refused general consent were excluded from the study.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
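The survival and association analyses described in the statistical methods above (Kaplan-Meier estimation, log-rank comparison of subgroups, Fisher's exact test) can be sketched with open-source Python packages as follows. The times, event indicators and 2×2 counts below are synthetic placeholders rather than patient data, and lifelines/scipy stand in for the software actually used by the authors.

```python
# Minimal sketch of a Kaplan-Meier fit, a log-rank subgroup comparison and a
# Fisher's exact test, on synthetic data.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
time_a = rng.exponential(12.0, 60)      # years from diagnosis, hypothetical subgroup A
time_b = rng.exponential(7.0, 40)       # years from diagnosis, hypothetical subgroup B
event_a = rng.integers(0, 2, 60)        # 1 = event (death/progression), 0 = censored
event_b = rng.integers(0, 2, 40)

kmf = KaplanMeierFitter()
kmf.fit(time_a, event_observed=event_a, label="subgroup A")
print(kmf.survival_function_.head())    # estimated survival probability over time

res = logrank_test(time_a, time_b, event_observed_A=event_a, event_observed_B=event_b)
print("log-rank p =", res.p_value)

# Association between a treatment modality and a binary long-term outcome
odds, p = fisher_exact([[20, 31], [10, 67]])    # hypothetical 2x2 counts
print("Fisher's exact p =", p)
```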
2022-02-11T14:44:14.519Z
2022-02-11T00:00:00.000
{ "year": 2022, "sha1": "c4b5386f114dea85ddddff625157b88458799386", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11060-022-03963-3.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "c4b5386f114dea85ddddff625157b88458799386", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256839318
pes2o/s2orc
v3-fos-license
Organic materials repurposing, a data set for theoretical predictions of new applications for existing compounds

We present a data set of 48182 organic semiconductors, constituted of molecules that were prepared with a documented synthetic pathway and are stable in solid state. We based our search on the Cambridge Structural Database, from which we selected semiconductors with a computational funnel procedure. For each entry we provide a set of electronic properties relevant for organic materials research, and the electronic wavefunction for further calculations and/or analyses. This data set has low bias because it was not built from a set of materials designed for organic electronics, and thus it provides an excellent starting point in the search of new applications for known materials, with a great potential for novel physical insight. The data set contains molecules used as benchmarks in many fields of organic materials research, allowing to test the reliability of computational screenings for the desired application, "rediscovering" well-known molecules. This is demonstrated by a series of different applications in the field of organic materials, confirming the potential for the repurposing of known organic molecules.

Measurement(s): excited state energy. Technology Type(s): quantum chemistry computational method. Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.17076254

Background & Summary

High Throughput Virtual Screenings (HTVSs) 1,2 have recently been exploited to a great extent to identify promising materials in the domain of organic electronics. This powerful technique has often been used in combination with domain knowledge of the problem, carrying out screenings of modifications of known motifs or architectures known to work for a specific problem, e.g. functionalisation for dye-sensitized solar cells 3 , donor-acceptor motifs for thermally activated delayed fluorescence (TADF) 4 , singlet fission (SF) 5 , and for general photovoltaic architectures 6 . This strategy translates in computational terms the process of experimental discovery exploiting chemical intuition 7,8 , and allows the reduction of the chemical space to explore 9 .
However, the findings are bound to fall within the domain of what is already known and prevent the discovery of new motifs and design rules. Studies based on exploiting domain knowledge, like biradical character for SF 10,11 or donor-acceptor motifs for TADF 12,13 , will not find new design rules. Generative models also tend to find motifs similar to those already known 14 . Additionally, the identified candidates may not be easy to synthesise in the laboratory or be stable enough to be characterised, despite recent progress in introducing measures of synthetic accessibility in HTVSs 15 . In this study, we aim at providing a starting point for computational searches overcoming the mentioned limitations by presenting a data set of 48182 organic semiconductors (OSCs) constituted of molecules that were prepared with a documented synthetic pathway, and are stable in solid state, enabling their crystallographic characterisation. The data set is therefore an excellent starting point to identify OSCs for various applications that can guide experimental research. We based our search on the Cambridge Structural Database (CSD) 16 , from which we selected OSCs with a computational strategy described in the following sections. The CSD dates back to the 1960s-70s, and contains crystal structure data for >1 M samples prepared for various purposes. Excluding polymorphs [17][18][19] or samples measured in different experimental conditions 20 , the vast majority of molecules in the data set has characterisation data available. Although the CSD was not built with organic materials applications in mind (though it does, of course, contain entries related to this field), any data set derived from the CSD 21 is therefore largely unbiased with respect to the application; some bias is nevertheless present due to the choices of research groups in the study of a certain molecule and to experimental constraints on the ability to crystallise and characterise a sample. This low bias provides a great potential for novel physical insight: setting different criteria for the ideal candidates based on experimental benchmarks, the more stringent (i.e. rarer) ones can be used to translate results into design principles. Additionally, the fact that it contains molecules used as benchmarks in many fields of organic materials research allows testing the reliability of computational screenings for the desired application, "rediscovering" well-known molecules. Studies of OSCs for technological applications exploit the analysis of various electronic properties, ranging from frontier orbital energies to excited state energies and oscillator strengths. For instance, early searches of materials for organic photovoltaics exploited HOMO and LUMO energies [22][23][24][25] , high-performance non-fullerene acceptors are known to possess a low LUMO-LUMO+1 gap 26,27 , luminescent materials for new-generation organic light emitting diodes (OLEDs) based on TADF 28,29 as well as singlet fission candidates 30 have been identified by calculating the S1-T1 gap (ΔE_ST), and high-mobility semiconductors were discovered by looking at electronic couplings, reorganisation energies and electron-phonon couplings [31][32][33] .
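To make the kind of property-based screening mentioned above concrete, the short sketch below filters a toy property table for TADF-like candidates (small S1-T1 gap with an emissive S1). The column names and numbers are hypothetical placeholders and do not correspond to the actual headers or entries of the released data set.

```python
# Illustrative screening of a small, made-up property table for a target application.

import pandas as pd

df = pd.DataFrame({
    "refcode": ["AAA001", "BBB002", "CCC003"],
    "E_S1": [2.95, 2.10, 3.40],     # eV, first singlet excitation (placeholder values)
    "E_T1": [2.80, 1.02, 1.65],     # eV, first triplet excitation
    "f_S1": [0.45, 0.02, 0.30],     # oscillator strength of S1
})

df["dE_ST"] = df["E_S1"] - df["E_T1"]
# Keep entries with a small singlet-triplet gap and a reasonably emissive S1 state
tadf_like = df[(df["dE_ST"] < 0.3) & (df["f_S1"] > 0.1)]
print(tadf_like[["refcode", "dE_ST", "f_S1"]])
```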
Providing wavefunctions and basic excited state properties for the first few states will enable other researchers to carry out systematic investigations for applications that, to the best of our knowledge, are yet to be explored through computational screenings, such as aggregation-induced emitters (AIEgens) 34 , but also for highly innovative applications based on higher excited states, for which chemical intuition is still limited, e.g. designing anti-Kasha fluorophores 35 , possibly even displaying delayed fluorescence 36 . The data set presented in this work thus contains a collection of simulated spectroscopic properties computed on the X-ray geometries of existing organic molecules whose simulated HOMO-LUMO gap (E_gap) falls below 4 eV, which we therefore define as organic semiconductors; it can be searched for properties relevant to various technological applications. Some data sets offer interesting properties for OSCs relevant to specific applications, e.g. the HOPV for organic photovoltaics 37 , but they are limited to restricted regions of chemical space, i.e. they exploit domain knowledge about what is known to work. Other data sets offer spectroscopic properties of molecules, such as the QM8 38 or the OE62 21 data sets, but the former is limited in the number and type of heavy atoms and excited states considered, while the latter provides spectroscopic data only for a small fraction of the data set (≈5 K entries). The data set we present in this work is thus aimed at complementing the currently available ones in these respects, which we describe in more detail in the following sections.

Methods
The data set of OSCs we present here has been built starting from the python application programming interface provided within the CSD distribution. To identify OSCs, we started by removing polymeric molecules, disordered solids, and co-crystals from the entries containing X-ray structures. We further reduced the structures to be retained by: 1. including only the elements most commonly used in typical OSCs (H, B, C, N, O, F, Si, P, S, Cl, As, Se, Br, I); 2. removing entries with more than one molecule type in the unit cell; 3. removing duplicate entries. X-ray geometries include all heavy atoms, while hydrogen atoms are added and normalised (i.e. placed at a typical X-H distance using statistical surveys of neutron diffraction data) using the CSD library's built-in functions exploiting such literature data 39,40 . Because this procedure can fail, e.g. by missing hydrogens in diborane moieties, structurally erroneous entries are filtered out by comparing the heavy-atom connectivity layer of the InChI 41 strings of the CSD entry and of the extracted geometry, followed by a comparison of the chemical formulae of the CSD entry and the extracted geometry. The data set is up to date with the 2020 version of the CSD, thus updates starting from the 2021 version are possible. This procedure resulted in a reduction of the data set from ≈1 M to ≈265 K molecules.
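The consistency check described above can be reproduced with standard cheminformatics tooling. The sketch below uses RDKit rather than the CSD Python API and compares the InChI formula and heavy-atom connectivity layers of two structures; the file names and helper functions are illustrative assumptions, not the scripts actually used to build the data set.

```python
from rdkit import Chem

def inchi_layers(mol):
    """Return the (formula, heavy-atom connectivity) layers of the standard InChI."""
    inchi = Chem.MolToInchi(mol)  # e.g. 'InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H'
    layers = inchi.split('/')
    formula = layers[1] if len(layers) > 1 else ''
    connectivity = next((layer[1:] for layer in layers if layer.startswith('c')), '')
    return formula, connectivity

def geometry_is_consistent(deposited, extracted):
    """Keep an entry only if formula and heavy-atom connectivity agree between
    the deposited record and the extracted, hydrogen-normalised geometry."""
    return inchi_layers(deposited) == inchi_layers(extracted)

# Illustrative file names; in practice these would come from the CSD entry
# and from the geometry extracted for the quantum-chemical calculations.
deposited = Chem.MolFromMolFile('entry_deposited.mol', removeHs=False)
extracted = Chem.MolFromMolFile('entry_extracted.mol', removeHs=False)

if deposited is not None and extracted is not None and geometry_is_consistent(deposited, extracted):
    print('entry retained')
else:
    print('entry filtered out')
```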
To identify OSCs we adopted a three-step computational funnel strategy in combination with a calibration procedure, aimed at estimating the HOMO-LUMO gap (E_gap) with quantum mechanical (QM) methods of reasonable accuracy. First of all, we selected three methods of increasing accuracy for our computational funnel: PM7, B3LYP/3-21G*, and B3LYP/6-31G*. Second, we picked a subset of 550 molecules on which we performed single-point calculations on the X-ray geometries provided within the CSD, obtaining orbital energies with all three methods. This allowed us to compute calibration curves to estimate the B3LYP/6-31G* HOMO-LUMO gap from the lower-accuracy ones (see panels b, c, e, and f in Fig. 1), together with the associated error distribution. With the calibration curves available, we proceeded to compute HOMO-LUMO gaps for the entire data set of ≈265 K molecules (panel d in Fig. 1), estimating the gap that we would obtain if we ran a higher-level calculation. Considering the distribution of errors of the calibration curve, at the PM7 level we retained any molecule showing E_gap ≤ 5.5 eV as a potential OSC, reducing the data set from ≈265 K to ≈200 K molecules. On these molecules, we recomputed the gap at the B3LYP/3-21G* level (panel g in Fig. 1), considering any molecule showing E_gap ≤ 4 eV as an OSC, resulting in the ≈50 K molecules that constitute the data set presented here. 4 eV is a conventional upper limit for semiconductors 42 , and all the best performing molecules across various applications have a smaller gap. On these molecules, we computed excited state properties at the TD-DFT/M06-2X/def2-SVP level (see Fig. 2), releasing, as part of the data set, the converged ground state wavefunction and the results for the first three singlet (S1-S3) and triplet (T1-T3) states. A calibration of the TD-DFT method for S1 and T1 excitation energies against ≈100 data points with available experimental data is presented elsewhere 30 and supports the reliability of the method (RMSE ≈ 0.05 eV). All QM calculations were carried out with the Gaussian16 software 43 , and the data provided as part of this release were extracted from output and checkpoint files using the Multiwfn software 44 and the CClib python library 45 .
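As a minimal sketch of how such a calibration-based funnel can be implemented (with made-up numbers and an illustrative error margin; the exact statistical treatment behind the 5.5 eV and 4 eV cuts may differ), one could write:

```python
import numpy as np

# Calibration subset: HOMO-LUMO gaps (eV) from the cheap method (PM7) and the
# reference method (B3LYP/6-31G*). Values below are placeholders for illustration.
gap_pm7_cal = np.array([3.1, 4.2, 5.0, 6.3, 7.1, 8.0])
gap_ref_cal = np.array([2.4, 3.3, 4.0, 5.1, 5.9, 6.8])

# Linear calibration curve: gap_ref ~ slope * gap_pm7 + intercept
slope, intercept = np.polyfit(gap_pm7_cal, gap_ref_cal, 1)
residuals = gap_ref_cal - (slope * gap_pm7_cal + intercept)
sigma = residuals.std(ddof=1)  # spread of the calibration errors

def estimate_ref_gap(gap_cheap):
    """Estimate the reference-level gap from a cheap calculation."""
    return slope * np.asarray(gap_cheap) + intercept

# Funnel step: retain molecules whose estimated gap could still fall below the
# 4 eV semiconductor threshold once the calibration error is taken into account.
target_gap, margin = 4.0, 3.0 * sigma
gap_pm7_all = np.array([2.9, 4.8, 5.7, 7.9])  # cheap-level gaps, whole data set
passes_funnel = estimate_ref_gap(gap_pm7_all) <= target_gap + margin
print(passes_funnel)
```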
These calculations allow for interesting analyses regarding the time evolution of the CSD. For instance, since the deposition date of each entry is known, it is possible to follow how many OSCs were deposited over time, both in absolute and in fractional terms. From these analyses (see Fig. 3) we see that, while the absolute number naturally increases over time, the fractional number of OSCs within the CSD is constant until ≈2010 and has since then basically doubled, rising from ≈3-4% to ≈7%, in agreement with the evolution of research in the organic materials field.

Data Records
The curated data set is available from DataCat, the University of Liverpool repository 46 :
1. data extracted from the QM calculations are provided in comma-separated values (.csv) format, which can be easily read with common programs or programming languages; a description of the provided data is given in Table 1;
2. the wavefunctions for each entry are provided in a set of 31 sequential archives, allowing for sequential or partial download; geometries are also given to facilitate analyses, and data are made available in .wfn format;
3. Gaussian16 QM calculation output files are provided to allow for additional wavefunction analyses, with the aim of characterising electronic states or transitions, as mentioned in the following sections.
Geometries and wavefunctions are provided in .wfn format, the traditional AIM format. We chose this format to provide interested users with data for analyses or subsequent calculations that are independent of the software we used. In fact, .wfn files can be generated or processed with a multitude of tools, among which the popular software Multiwfn 44 , the python library IOData 47 , ORCA 48,49 and others 50,51 . Each .wfn file contains the molecular geometry, as well as the occupied molecular orbitals expressed in the atomic basis and their energies. These data can be used for visualisation of e.g. geometries and occupied orbitals, but also to run QM calculations from an initial guess to obtain refined properties for applications of interest. Gaussian16 output files are provided to allow for additional wavefunction analyses of electronic excitations, allowing interested users to avoid repeating calculations that we have already performed.

Technical Validation
The key idea is that new applications of existing molecules can be discovered by searching for useful properties computed for a large data set, thanks to a robust calibration between predicted and experimentally validated data. Crucially, the data set should be totally unbiased and not related to the property of interest: this way, discoveries are truly unexpected and have a large applicative and commercial value. We proved this concept through a range of demonstrations in recent works, covering various application areas. These demonstrations considered an earlier data set consisting of ≈40 K OSCs. The data set presented here is up to date with the 2020 version of the CSD and thus contains entries that were not the object of our previous studies; the same strategies can be used on the fraction of molecules not previously considered to discover more potential candidates, in line with our previous findings. The key applications demonstrated in our previous works are the following:
1. we showed that it is possible to identify completely new molecules that undergo singlet fission (a property of relevance for solar cells) by calibrating a computational method to yield accurate energies of singlet and triplet excited states, and found molecules with the ideal energy level alignment 30 . The method rediscovered known molecules for singlet fission (true positives) and identified several different families of known compounds with this desirable property;
2. we proposed a related screening protocol to identify molecules undergoing TADF 28 , a relevant property in the area of display technologies. The protocol indicated, without any adjustable parameter, that 0.3% of the ≈40 K molecules considered may undergo TADF. About half of them were known TADF emitters, providing great confidence in the quality of the prediction. The other half of the hits were totally unknown to the field, illustrating in parallel how this approach can lead to completely novel design rules;
3. we showed that a similar approach can be used to identify novel electron acceptors to be used in organic solar cells to replace expensive and inefficient fullerene derivatives 52 . Also in this case, about half of the "discovered" molecules were known, the other half being totally novel. This work showed that database searching is only the first step, and that it is possible to modify lead compounds to obtain other desirable properties, like solubility;
4. we showed that we can screen for luminescent crystals displaying superradiance or near-IR emission 53 , properties of interest in the areas of light-emitting diodes, organic lasers, and biological imaging. A common theme of all applications, particularly well exemplified by this one, is the ability of large screenings to identify plausible optima for a given property; in this case, the maximum red shift that can be observed when a particular molecule is studied in its crystal.
The basis of similar studies can be laid by analysing the properties provided in this database, as shown in Fig. 4. In the left panel, we report T1 vs S1 energies. Potential singlet fission materials fall to the left of the dashed black line, which represents the main singlet fission criterion, i.e. S1 = 2 T1. Similarly, potential TADF materials fall in the proximity of the dashed blue line, which represents the main TADF criterion, i.e. S1 = T1. Colours encode the S1 oscillator strength (f_S1) on a logarithmic scale, since one would be interested in materials able to absorb (singlet fission) or emit (TADF) light with good performance. These types of analyses led us to the work briefly described in points 1 and 2, where we "rediscovered" well-known singlet fission and TADF materials, proving that the starting point, i.e. a reduced version of the data set presented here, is reliable. The same, however, can be done for other properties yet to be studied: for instance, in the right panel of Fig. 4 we report S1 vs S2 energies, useful for identifying potential anti-Kasha materials, which fall in the proximity of the dashed black line representing S2 = 2 S1. This is a reasonable criterion according to domain knowledge regarding the role of kinetics in anti-Kasha photoreactions 54,55 . In this case, colours encode the S2 oscillator strength (f_S2) on a logarithmic scale, since in anti-Kasha materials the fluorescence is expected from a higher excited state.
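The kind of analysis shown in Fig. 4 can be reproduced directly from the released .csv files. The sketch below assumes hypothetical column names (E_S1, E_S2, E_T1, f_S1, f_S2, energies in eV) and an arbitrary 0.2 eV tolerance window; the actual column names are documented in Table 1 of the data records.

```python
import pandas as pd

df = pd.read_csv('osc_dataset.csv')  # hypothetical name of the released table
tol = 0.2  # eV, illustrative tolerance around each energetic criterion

# Singlet fission candidates: S1 at least ~2 T1, with non-negligible absorption.
sf = df[(df['E_S1'] - 2.0 * df['E_T1'] >= -tol) & (df['f_S1'] > 0.01)]

# TADF candidates: small S1-T1 gap and some oscillator strength for emission.
tadf = df[((df['E_S1'] - df['E_T1']).abs() <= tol) & (df['f_S1'] > 0.001)]

# Anti-Kasha candidates: S2 close to 2 S1, with a bright S2 state.
anti_kasha = df[((df['E_S2'] - 2.0 * df['E_S1']).abs() <= tol) & (df['f_S2'] > 0.01)]

print(len(sf), len(tadf), len(anti_kasha))
```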
Usage Notes
Above, we have listed some applications deriving from the data presented here. In general, the starting point for each of those applications was a calibration, against available experimental data, of the computational method used to carry out further analyses. Because we provide the ground state wavefunction for each entry, not only will these calibrations be faster, since we provide an initial guess for the QM calculations, but many more analyses also become accessible. For instance, electronic states or transitions can be thoroughly characterised with packages such as Multiwfn 44 or TheoDORE 56 , which can provide detailed information regarding the nature of an electronic transition (e.g. charge transfer metrics 57,58 , ghost states 59 , electronic density differences 60 , exciton delocalisation 61,62 , etc.). Additionally, this data set can form the basis of training sets for Machine Learning models aiming at reproducing the electronic density of molecules 63,64 , based on experimental X-ray geometries. The availability of CSD identifiers enables the expansion of the analyses to molecules in their crystals 32 , which is fundamental for technological applications of organic semiconductors. Finally, the synthetic approaches that make molecules within the CSD accessible can easily be tracked down thanks to the references provided within the data set. This not only provides a ready source of synthetic routes to be exploited in case of experimental validation of the results, but is also useful in combination with retrosynthetic planning strategies 65,66 .

Code Availability
Scripts to obtain the plots starting from the database are available at the University of Liverpool repository 46 .
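As a complementary usage example, the excitation energies and oscillator strengths contained in the released Gaussian16 output files can be re-extracted with the cclib library cited above. The file name below is hypothetical and the snippet is a minimal sketch rather than one of the release scripts.

```python
import cclib

CM1_TO_EV = 1.0 / 8065.54  # wavenumber-to-eV conversion factor

data = cclib.io.ccread('CSDENTRY_tddft.log')  # hypothetical output file name

# Excited-state energies (cclib stores them in cm^-1) and oscillator strengths.
energies_ev = [e * CM1_TO_EV for e in data.etenergies]
for sym, energy, osc in zip(data.etsyms, energies_ev, data.etoscs):
    print(f'{sym:12s}  E = {energy:5.2f} eV  f = {osc:.4f}')

# HOMO-LUMO gap from the ground-state orbital energies (cclib reports eV).
homo = data.homos[0]
gap = data.moenergies[0][homo + 1] - data.moenergies[0][homo]
print(f'HOMO-LUMO gap: {gap:.2f} eV')
```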
2023-02-14T15:47:31.988Z
2022-02-14T00:00:00.000
{ "year": 2022, "sha1": "6356a2e4f3365dda166290de23dd28c699149fdf", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41597-022-01142-7.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "6356a2e4f3365dda166290de23dd28c699149fdf", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [] }
73444248
pes2o/s2orc
v3-fos-license
Progressive Hepatic Cirrhosis Early After Allogeneic Hematopoietic Stem Cell Transplantation in a Patient with Chronic Hepatitis C Infection
To the Editor, Hepatitis C virus (HCV)-infected allogeneic hematopoietic stem cell transplantation (allo-HSCT) recipients have a higher incidence of liver cirrhosis over long-term follow-up compared to recipients without HCV infection [1,2]. However, liver dysfunction related to HCV is usually mild in the first 3 months after allo-HSCT [3]. We present a case of progressive hepatic cirrhosis soon after allo-HSCT in an HCV-infected recipient. The clinical and histopathological features were very similar to fibrosing cholestatic hepatitis (FCH) caused by HCV reactivation. A 50-year-old woman with myelodysplastic syndrome with excess blasts-1 was admitted to undergo allo-HSCT. The patient had a history of hepatitis C positivity (genotype 2a) for more than 20 years. Liver enzyme levels at admission were slightly elevated (aspartate aminotransferase 57 U/L, alanine aminotransferase 61 U/L, alkaline phosphatase 434 U/L, cholinesterase 115 U/L, total bilirubin (T-Bil) 1.2 mg/dL, and hepatitis C viral load 2.5×10^4 IU/mL). Serological tests for hepatitis B virus (HBV) and polymerase chain reaction for HBV-DNA were negative. Computed tomography (CT) demonstrated hepatosplenomegaly. Abdominal ultrasonography (US) showed coarse hepatic echostructure over the entire liver with a dull edge, smooth surface, and straight hepatic vein, without ascites or any signs of portal hypertension. Liver biopsy was not performed because of thrombocytopenia. Just before transplantation, no risk factors except for the mild hepatic dysfunction and age were found; the hematopoietic cell transplantation-comorbidity index (HCT-CI) was 1 and the age-adjusted HCT-CI score was 2 [4,5]. Meanwhile, bone marrow examination revealed active disease with 6.7% myeloblasts. Considering the situation, the patient underwent peripheral blood stem cell transplantation from her human leukocyte antigen-identical sibling after myeloablative conditioning with cyclophosphamide (120 mg/kg) and total body irradiation (12 Gy). To limit the risk of drug-induced liver dysfunction, we avoided the use of busulfan. Cyclosporine and short-term methotrexate were used for graft-versus-host disease (GVHD) prophylaxis. After neutrophil engraftment, T-Bil was elevated up to 8.3 mg/dL and the hepatitis C viral load was noted to have increased to 4.0×10^6 IU/mL on day 36 after allo-HSCT. Methylprednisolone was started at 1 mg/kg/day on day 36 for acute GVHD, with gradual improvement in liver test results. We closely observed the patient with weekly US and monthly CT after allo-HSCT, which revealed progressive liver atrophy accompanied by ascites. On day 82 after allo-HSCT, the patient once again became jaundiced and the hepatitis C viral load increased to over 6.9×10^7 IU/mL. Transjugular liver biopsy showed bridging and pericellular fibrosis with architectural distortion, prominent ballooning, and spotty necrosis, consistent with early cirrhotic changes and severe hepatocyte damage (Figures 1A-1D).
There was mild portal inflammation without histologic evidence of the small bile duct changes of GVHD. Moreover, there was no sinusoidal obstruction. It was unlikely that the hepatopathy was caused by cyclophosphamide, considering the timing of administration. From the pathological findings and the increased viral load, HCV reactivation was assumed to be the cause of the liver dysfunction. Direct-acting antiviral (DAA) therapy with ledipasvir (90 mg/day) and sofosbuvir (400 mg/day) was started on day 110 after allo-HSCT. Although the viral load decreased, the patient developed liver failure and died on day 126 after allo-HSCT (Supplementary Figure 1). A few case reports have been published on FCH caused by recurrence of HCV in recipients of liver transplantation [6], renal transplantation [7], and allo-HSCT [8]. The histopathological findings of FCH include periportal fibrosis, ballooning degeneration of hepatocytes, prominent cholestasis, and paucity of inflammation [8]. Although cholestasis was not prominent in our case, the other pathological findings and the clinical course were very similar to those of FCH. We speculated that this discrepancy may have been due to the timing of the liver biopsy, which was performed immediately after the re-elevation of T-Bil and presumably in the early phase of FCH. Generally, the initiation of DAA therapy is recommended at least 3 to 6 months after allo-HSCT in HCV-infected recipients because of the rarity of fulminant hepatitis caused by HCV reactivation in this period and because of the overlapping toxic effects or potential drug-drug interactions of DAA with other agents [9]. In this case, we started DAA therapy based on the liver pathology and the increased HCV viral load. However, earlier intervention with DAA, soon after the initiation of corticosteroid therapy, should be considered, because corticosteroid therapy is a major risk factor for viral replication. There were some limitations of our clinical practice. First, the pretransplant liver status was not fully evaluated; elastography should be considered for accurate evaluation of the degree of fibrosis [10]. Second, reduced-intensity conditioning should be considered to avoid HCV-associated hepatopathy, although in our case the HCT-CI and age-adjusted HCT-CI scores were relatively low. Last, as stated above, earlier diagnosis and intervention with DAA might contribute to better outcomes. In conclusion, the possibility of HCV recurrence should also be considered as a cause of progressive hepatopathy early after allo-HSCT.
Figure 1. Photomicrographs of the transjugular liver biopsy specimen on day 82 after transplantation, when the patient once again became jaundiced and the hepatitis C viral load increased. A) There was extensive bridging and pericellular fibrosis with architectural distortion (silver staining, low power field). B) There was severe damage to hepatocytes. Lymphoid infiltration of the portal region was scarce (hematoxylin and eosin staining, low power field). C) Ballooning degeneration of hepatocytes was evident (hematoxylin and eosin staining, high power field). D) The hepatocytes varied in size with oxyphilic and vacuolated cytoplasm. Scattered focal necrosis was evident (black arrow) (hematoxylin and eosin staining, high power field).
Conflict of Interest: The authors of this paper have no conflicts of interest, including specific financial interests, relationships, and/or affiliations relevant to the subject matter or materials included.
To the Editor, A previously healthy 13-year-old girl presented with a 3-day history of progressive swelling and pain in her left lower limb. She also complained of cough in the last 2 weeks. No trauma, surgery, travel, or medication was noted before this illness. Physical examination revealed significant swelling and tenderness in her left lower limb. The laboratory data showed a high level of D-dimer (13.0 mg/L FEU, reference range <0.55 mg/L FEU). Multidetector computed tomography showed extensive emboli formation from the left calf region to the left ilio-femoral veins and duplication of the inferior vena cava (IVC) (Figure 1). Pulmonary ventilation-perfusion (V/Q) scintigraphy revealed several mismatched areas diagnostic for bilateral acute pulmonary embolism. Tracing the family history, her father had developed venous thromboembolism (VTE) at the age of 40 years and was diagnosed with protein S deficiency. A thrombophilia screening in this patient identified severe protein S deficiency (protein S activity: 2%, reference range: 55%-140%). Other results, including levels of homocysteine, antithrombin III, and protein C activity, were within normal limits; factor II G20210, factor V Leiden G1691A, anti-cardiolipin antibody, and anti-β2-glycoprotein I IgM and IgG were all negative. Her symptoms and signs subsided after treatment with heparin, followed by warfarin for 3 months. The repeated measurement of protein S activity was 7% after discontinuation of treatment with warfarin for one week. Given that two provoking risk factors were present, the patient continued to receive prophylactic therapy with warfarin. Virchow's triad describes the three main factors contributing to thrombosis, which include hypercoagulability, vessel injury, and venous stasis. Congenital anomalies of IVC may predispose to VTE due to resultant venous stasis. Duplication of IVC is usually considered as asymptomatic and an incidental finding while performing retroperitoneal surgery or venous interventional radiology. However, an increasing number of studies suggest that cases of unprovoked VTE were associated with duplication of the IVC [1,2,3,4]. The ages of these patients ranged from 18 to 84 years. No pediatric patient was reported. VTE is long considered to be far less common in children than in adults. Most pediatric VTE is provoked and occurs with multiple risk factors [5]. Genetic risk factors play an important role in children who develop VTE and thrombophilia screening
2019-03-08T14:06:59.133Z
2019-02-04T00:00:00.000
{ "year": 2019, "sha1": "0e8c2121e947fb35c47806e2fc1f4675a64373ab", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4274/tjh.galenos.2019.2018.0224", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0e8c2121e947fb35c47806e2fc1f4675a64373ab", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
115155699
pes2o/s2orc
v3-fos-license
Die lokale Struktur von T-Dualit\"atstripeln We show that the $C^*$-algebraic approach to T-duality of Mathai and Rosenberg is equivalent to the topological approach of Bunke and Schick. Introduction and Summary String theory [Po] is based on the physical idea to describe nature not only by point particles but with the concept of higher dimensional objects. Although it is still unclear how close string theory is to physics a lot of interesting new phenomena have been observed which have led to many fruitfully new ideas and have made it a rich theory. In particular, a lot of new mathematical ideas have arose from the desire to understand the discovered structures. One of those is the concept of T-duality. T-duality is a duality of string theories (type IIA and IIB) on different underlying space-time manifolds E and 1 E which are (in the simplest case) related by a transformation of type: radius → 1/radius along a compactified space-time dimension. A duality between two theories on different space-time manifolds gives a prescription how the fields and their correlation functions transform under the change of the underlying manifolds. In the present case one may take as an example the charges of the D-branes which take values in the twisted K-theory of E [BM], where the twist is given by a background field on E (a 3-form H called H-flux). Then T-duality must give the answer how the background fields transform and should lead to an isomorphism of the twisted K-theories of the underlying manifolds. We cannot give a summary of the whole subject, but we can try to point out some of its mathematical issues. In literature there are different approaches of a mathematical understanding of T-duality. One is based on the theory of C * -dynamical systems which serve a notion of T-duality using crossed product C * -algebras [BHM2,MR], another is by geometric and topological means [BEM,BHM1,BS,BRS], a third using methods from algebraic geometry [BSST]. We focus our attention to the first and second approach and continue to describe some features of the geometric-topological side in more detail. Let us think of the manifolds E and E as principal circle bundles which have isomorphic quotients E/S 1 ∼ = E/S 1 =: B. In [BEM] it is described in terms of differential geometry how the data of the curvature F of E and of the H-flux H on E are related to the corresponding dual data F and H of E. The result is that integration of H along the fibres of E yields the dual curvature and vice versa. In the case of S 1 -bundles E and E we can identify the classes of the curvatures with the realifications of the first Chern classes c 1 ,ĉ 1 ∈ H 2 (B, Z) of the respective bundles, and there also exist integer cohomology classes h ∈ H 3 (E, Z),ĥ ∈ H 3 ( E, Z) whose realifications are h R = [H] andĥ R = [ H]. In this sense, we forgot geometry and now may only consider these topological data. This is the point of view which was adopted in [BS], wherein among other things the results of [BEM] are restated on a purely topological level. The higher dimensional case, where the circle S 1 is replaced by the n-dimensional torus T n = (S 1 ) ×n , i.e. E is thought of a principal T n -bundle, is described in [BHM1] in terms of differential geometry. Its topological structure is described in [BRS] which we want to discuss in more detail. They introduce so-called T-duality triples and define that a pair (E, h) is dual to a pair ( E,ĥ) if there is a T-duality triple connecting them. 
The notion of T-duality triples which we are going to call topological triples (Definition 2.9) is central for this work, so let us clarify what it means that a T-duality triple connects the pairs (E, h) and ( E,ĥ) : A T-duality triple is a commutative diagram P × B E } } z z z z z z z z z $ $ I I I I I I I I To a T-duality triple we can associate two C * -algebras, namely the C * -algebra of sections Γ(E, F) and Γ( E, F) of the associated bundles F := P × PU(H) K(H) and F := P × PU(H) K(H). It is the very aim of this work to understand how these two C * -algebras are related to each other. This issue turns our focus on the C * -algebraic approach to T-duality [MR,BHM2] which is based on the understanding of abelian C * -dynamical systems. We shortly summarise some C * -algebraic background. The duality theory of abelian C * -dynamical systems which has been investigated for a quite long time [Pe1] is the foundation to understand T-duality by C * -algebraic means. The dual of an abelian C * dynamical system (A, G, α), i.e. A a C * -algebra with strongly continuous action α : G → Aut(A) of a locally compact, abelian group G, is the crossed product C * -algebra G × α A equipped with the natural actionα of the dual group G, i.e. (G × α A, G,α) becomes again a C * -dynamical system (see [Pe1] or section A.4). A central result is the Takai duality theorem (Theorem A.2) which states in particular that the bi-dual C *algebra is stably isomorphic to the original one, i.e. they are Morita equivalent. Thus, it is completely trivial to understand the structure of the bi-dual and the difficult task is to understand the dual G × α A. In the 80s and 90s big progress has been made to understand the dual in case A is a continuous trace algebra which we assume from now on. The basic structure theorem of Dixmier and Douady (see e.g. [Di]) says that any separable, stable continuous trace algebra A is isomorphic to Γ 0 (E, F) the C * -algebra of sections vanishing at infinity, where E := spec(A) is the spectrum of A and F → E is a locally trivial bundle with each fibre isomorphic to the compacts K(H). Their isomorphism classes are classified byȞ 2 (E, U(1)) ∼ = H 3 (E, Z) (cp. section A.3), and the class in H 3 (E, Z) which determines the isomorphism type of A ∼ = Γ 0 (E, F) is called the Dixmier-Douady invariant of A. A first result [Pe2,RW] for an understanding the crossed product G × α A was that if G is compactly generated and the induced action of G on the spectrum E of A is trivial, then the crossed product G × α A is isomorphic to the balanced tensor product C 0 ( E) ⊗ C 0 (B) A =: p * A, wherein p : E → B is a Gprincipal bundle consisting of the spaces E := spec(G × α A) and B := E. The more general situation wherein the action of G does not fix the spectrum E of A but has constant isotropy group N for each π ∈ E is concerned in [RR]. One of the statements therein is the following. Assume that E with the induced action of G/N is a principal fibre bundle E → B := E/(G/N) and that the restricted action α| N of N on A is locally unitary, then there is a pull back diagram of principal fibre bundles w w n n n n n n n n n n n E := spec(A) ( ( P P P P P P P P P P P P P P E := spec(G × α A) v v l l l l l l l l l l l l l l l B, wherein the down-right arrows have fibre G/N and the down-left arrows have fibre N ∼ = G/N ⊥ . (N ⊥ is the annihilator of N which is the set of characters of G whose restriction to N is identically 1.) Moreover, p * A is isomorphic to N × α| N A and Morita equivalent top * (G × α A). 
Thus, we have the following schematic situation of C * -algebras over their spectra $ $ I I I I I I I I I I I E x x q q q q q q q q q q q q q B (2) ≀ Moritâ which obviously is similar to diagram (1). The question is whether or not it is possible that both A and G × α A are separable, stable continuous trace algebras. This question has been answered in [ER,Thm. 6]. In particular, this is true if α| N is point-wise unitary and the action of G/N on E × B E ∼ = spec(N × α| N A) is proper, e.g. G/N is compact. This finishes our summary. The approach to T-duality of [MR] considers the following set-up. Let E be a locally compact space (with certain finiteness assumptions) with an action of the torus T n such that E → B := E/T n becomes a principal torus bundle, and let h ∈ H 3 (E, Z). They concern these data as a stable continuous trace algebra Γ 0 (E, F) that has Dixmier-Douady invariant h. Under which circumstances is it possible to lift the T n -action from the spectrum E to an action of R n =: G on Γ 0 (E, F)? If so, is it further possible to obtain an action whose restriction to N := Z n is point-wise unitary for all π ∈ E? These questions are answered in [MR,Thm 3.1]. The general answer is no, but if we make further restrictions to the class h one achieves a positive answer. Namely, a lift α to an R n -action exists if and only if h ∈ F 1 H 3 (E, Z) the first step of the filtration associated to the Leray-Serre spectral sequence, and there exists a point-wise unitary action if even h ∈ F 2 H 3 (E, Z). Consequently, in the latter case there exists a T-dual in the sense that there exists a dual space-time E in diagram (2) which is a Z n ∼ = T n -principal fibre bundle over B and is the spectrum of the stable continuous trace algebra R n × α Γ(E, F) ∼ = Γ 0 ( E, F). In this work we are going to show that the two approaches to T-duality are essentially equivalent, i.e. there is no difference between (equivalence classes of) T-duality triples and (equivalence classes of) abelian C * -dynamical systems which are obtained as described above. To fulfil this task we must develop a technique which enables us to compare such different objects. The methods of [BRS] and [MR] are not applicable for such a manoeuvre as they are too less explicit: The explicit description of the local structure of these two different kinds of objects can be used to describe a transformation as desired. Our method is general enough that we do not have to restrict ourselves to the case of the groups R n and T n ∼ = R n /Z n . -We develop a theory for all second countable, locally compact, abelian groups G with lattice N ⊂ G, i.e. N is a discrete, cocompact subgroup. We give an overview of this work. A basic technique we use throughout the whole of this work is to lift all local, projective unitary families of functions (e.g. the transition functions of a PU(H)-principal bundle P → E) to unitary Borel-or L ∞ -functions in direction of the fibres G/N of E → B and to think of them as new unitary (multiplication) operators in U(L 2 (G/N) ⊗ H). We call this procedure Borel lifting technique. The technical condition we must assume is that the base B is a paracompact Hausdorff space which is locally contractible. We call spaces with these properties base spaces, and the whole theory we develop is a theory over base spaces. In sections 2.1 and 2.2 we introduce the notion of pairs. A pair is a G/Nprincipal fibre bundle E → B over a base space B and a PU(H)-principal fibre bundle P → E which is trivialisable over the fibres of E. 
We explain their local structure and give a quick result on their classification. In section 2.3 we introduce a twisted version ofČech cohomology on the base B, where the twist is given by the bundle E → B. The Borel lifting technique mentioned above defines a map from (equivalence classes of) pairs into the second twistedČech cohomology (similar to the ordinary definition of the secondČech class of a PU(H)-bundle). In sections 2.4 we extend the same procedure to dynamical triples (ρ, E, P) which are pairs (P, E) equipped with a lift ρ of the G/N action on E to a Gaction on P. Due to this action, group cohomological expressions arise in the description of their local structure and lead finally to a map from (equivalence classes of) dynamical triples to the second cohomology of a double complex which has one (twisted)Čech cohomological direction and a second group cohomological direction. A two cocycle (or two cohomology class) of this complex has three entries: a pureČech part due to the transition functions of the pair, a group cohomological part and a third mixed term. In section 3.1 we focus our attention to those triples for which the group cohomological entry vanishes. Section 2.5 just contains the definition of dual pairs and dual dynamical triples which are nothing more than pairs and dynamical triples, but as underlying groups we take the dual group G of G with dual lattice N ⊥ the annihilator of N. Then in section 2.6 we introduce topological triples. Our definition is a straight forward generalisation to arbitrary locally compact, abelian groups G with lattice N from the notion of a T-duality triple (G = R n , N = Z n ). In section 3.1 we state our first main result. We first single out a specific subclass of dynamical triples which we call dualisable. They are those dynamical triples for which the group cohomological entry of the associated two cohomology class (of the double complex) vanishes. We construct an explicit map [(ρ, E, P)] → [(ρ, E, P)] from the set of equivalence classes of dualisable dynamical triples to the set of equivalence classes of dualisable dual dynamical triples, and show that it is a bijection whose inverse is given by the dual map (defined by replacing everything by its dual counterpart). In this sense dualisable dynamical triples and dual dualisable triples are in duality. Section 3.2 contains two important statements. The first is that we have a map τ(B) from the set of equivalence classes of dualisable dynamical triples to the set of equivalence classes of topological triples (everything understood over a base space B). This map is defined by the duality theorem of section 3.1, i.e. the topological triple we define consists of the pairs of two dynamical triples in duality. Then we try to define a map δ(B) in the opposite direction which generally fails as an obstruction occurs. However, on the subset of those topological triples which have a vanishing obstruction we then find a construction of a whole family of dualisable dynamical triples which is associated to a topological triple. This construction, when restricted to the image of the first map, can be turned into an honest map, i.e. we have a preferred choice of an element of the family, and this map is inverse to τ(B). Section 3.3 is devoted to the special case of the group G = R n with lattice N = Z n . In this situation the construction of the map δ(B) simplifies drastically, the group wherein the obstruction lives vanishes. 
As a result, we can associate to each topological triple a dynamical triple which is unique (up to equivalence) because the family of dynamical triples degenerates to a family of one single element only. As it turns out the two maps τ(B) and δ(B) are bijections and inverse to each other. Moreover, the four maps mentioned above are natural in the base, so they define natural transformations of functors. Thus, the main result of section 3.3 can be restated as follows. In the case of G = R n and N = Z n , we have a completely explicit construction of equivalences of functors Therein, Dyn † ( Dyn † ) is the functor sending a base space to the set of equivalence classes of dualisable (dual) dynamical triples over it. Top is the functor which sends a base space to the set of equivalence classes of topological (Tduality) triples over it. In section 3.4 we point out that the theory developed so far is connected to the theory of C * -dynamical systems precisely as one expects. Namely, the C *dynamical systems (G × α ρ Γ(E, F), G, α ρ ) and (Γ( E, F), G, αρ) are isomorphic, wherein (ρ, E, P) is the dual of (ρ, E, P) and (ρ, E, P) → (Γ(E, F), G, α ρ ) is the functor which sends a dualisable (dual) dynamical triple to its corresponding C * -dynamical system, i.e. F is the associated K(H)-bundle to P and α ρ is the by ρ induced action on the C * -algebra of sections Γ(E, F). In an appendix we put some technical lemmata and notation we are going to use. The Category of Pairs Definition 2.1 A base space B is a topological space which is Hausdorff, paracompact and locally contractible. The category of base spaces consists of bases spaces as objects and continuous maps between them as morphisms. A typical class of base spaces are CW-complexes [FP,Thm. 1.3.2,Thm. 1.3.5]. By G we will always denote a second countable, Hausdorff, locally compact abelian group and by N a discrete, cocompact subgroup, i.e. the quotient G/N is compact. Let H be an infinite dimensional, separable Hilbert space. Let E → B be a G/N-principal fibre bundle and P → E be a PU(H)-principal fibre bundle. . Therefore E need not to be locally compact and thus need not equal the spectrum of any (continuos trace) C * -algebra such as Γ(E, F) the C * -algebra of bounded sections (or Γ 0 (E, F) the C * -algebra of sections vanishing at infinity [A section vanishing at infinity vanishes already identically on the set of points which don't have a compact neighbourhood.]) of the associated C * -bundle F := P × PU(H) K(H). A morphism (ϕ, ϑ, θ) over B from a pair P→E → B with underlying Hilbert space H to a pair P ′ →E ′ → B with underlying Hilbert space H ′ is a commutative diagram of bundle isomorphisms Pairs over a base space B and their morphisms form a category; composition of morphisms (ϕ, ϑ, θ) Then the associated (stabilised) bundle There is a subcategory of pairs over B consisting of pairs with a fixed G/Nbundle E → B and morphisms of the form (ϕ, ϑ, id E ), and we call pairs (P, E) and (P ′ , E) stably isomorphic over E if there is a isomorphism of this special form between (P H 1 , E) and (P ′ The unit element is given by the class of a trivial bundle and the inverse of [(P, E)] is given by the class [(P # , E)] of the complex conjugate bundle P # which is as space the bundle P but has the action (x, U) → x · U # . U # is here the complex conjugate (not the adjoint) of U ∈ PU(H) (which may be defined by identifying H = l 2 (N) and taking the complex conjugate matrix of u = (u ij ) i,j∈N , for U = Ad(u) ). 
In this way we just mimic the group structure ofȞ 2 (E, U(1)), i.e. the classification map 3 Par(E, B) →Ȟ 2 (E, U(1)) is turned into a group homomorphism. An automorphism of a pair is a morphism from a pair onto itself. The group of automorphisms of a pair is denoted by Aut(P, E). It becomes a topological group when equipped with the initial topology of the forgetful map Aut(P, E) → PU(H, H ′ ) × Map(P, P) which sends a morphism (ϕ, ϑ, θ) to (ϕ, ϑ), wherein U(H, H ′ ) has the strong topology, i.e. the topology of pointwise convergence. Map(P, P) has the compact open topology. Since G/N is a commutative group mappings of the form θ z : E ∋ e → e · z ∈ E, z ∈ G/N, are bundle morphisms. They give rise to a subgroup Let (P, E) be any pair over B with underlying Hilbert space H. B is a base space, and so we can choose a covering {U i |i ∈ I} of B of open sets such that for each U i there is a commutative diagram with bundle isomorphisms k i , h i . We refer to such a covering as an atlas U The transition from one chart to another is described by a set of transition functions. For a pair this consists of two families of continuous functions g ij : U ij → G/N and It follows that on threefold intersections U ijk the relations g ki (u) = g kj (u) + g ji (u) and are valid; equivalently, the family of functions (u). Now, let EA 1 → BA 1 be the universal A 1 -principal fibre bundle. We call the associated pair P univ the universal pair. Indeed we can choose a CW-model for BA 1 such that the universal pair is a pair in the sense of Definition 2.2. Its name is due to the following universal property. Pairs and TwistedČech Cohomology Let M and G be abelian (pre-)sheaves on a space B and assume that M is a right G module. We are going to twist theČech coboundary operator of M by aČech G 1-cocycle. Fix an open covering U • = {U i |i ∈ I} of B and let g ∈Ž 1 (U • , G). Then for ϕ ∈Č n−1 (U • , M), n = 1, 2, . . . , we define wherein δ is the ordinaryČech coboundary operator 4 and The choice of the sign (−1) n−1 is such that the last term of δϕ and the first term of g ⋆ ϕ cancel. We obtain a sequence Proof : We have to show that the square of δ g vanishes. Let ϕ ∈Č n−1 (U • , M), then and therefore The last equality holds due to the cocycle relation g ij + g jk = g ik . By the last lemma we have well-defined cohomology groupsȞ n (U • , M, g) for n = 0, 1, 2, . . . and any open cover U • = {U i |i ∈ I} of B. And as in the untwisted case we define the twisted cohomology groupsȞ n (B, M, g) of B by passing to the limitȞ wherein the limit runs over all refinements V • of U • . To be precise, consider . This construction defines a functor from the category of coverings with refinement maps as morphisms to the category of cochain complexes (and after taking homology to the category of graded abelian groups). The category of coverings is filtered in the following sense: (i) Any two coverings have a common refinement, i.e. for any two objects (ii) Any two refinement maps become equal finally, i.e. for any to morphisms ι : Due to (i) and (ii) the limit lim V •Ȟ n (V • , M, g|) is independent of the choice of the first covering and independent of the refinement maps. So far, in our construction we referred explicitly to a choice of a cocycle g ∈Ž 1 (U • , G), but up to isomorphismȞ n (B, M, g) depends only on the class of g. In fact, let V • = {V k |k ∈ K} and U • = {U i |i ∈ I} be open coverings of B, and let g ′ ∈Ž 1 (V • , G) and g ∈Ž 1 (U • , G) represent the same element iň H 1 (B, G). 
If W • = {W m |m ∈ M} is a common refinement with refinement maps ι : M → I and κ : M → K then there are r m ∈ G(W m ) such that g ′ κ(m)κ(n) | W mn = r m | W mn + g ι(m)ι(n) | W mn − r n | W mn . They give rise to the following diagram of cochain complexes One easily derives δ ι * g (r # ϕ) = r # δ κ * g ϕ, for ϕ ∈Č • (V • , M), i.e. r # is a cochain map and even an isomorphism. Thus the corresponding cohomology groups are isomorphic and this isomorphism passes to the limit. Therefore the twisteď Cech groupsȞ n (B, M, [g]) are well defined for the class [g] ∈Ȟ 1 (B, G) up to the considered isomorphism. We now consider the relation between pairs and twistedČech cohomology. One should note at this point that, just as in the untwisted case, the firstČech "group"Ȟ 1 (B, M, g) is a well-defined set even in case the sheaf M is just a sheaf of groups and not necessarily abelian. In that case the additive (commutative) relation of being cohomologous ζ ji ∼ ζ ji + (δ g η) ji is replaced by the multiplica- Proof : Let g .. , ζ .. be the transition functions of a pair over B for an atlas U • . So g ∈Ž 1 (U • , G/N), and the crucial point is to observe that equation (6) is equivalent to δ g ζ = ½ and therefore each pair defines an element inȞ 1 (B, M, g). In fact, this is well defined, because if g ′ .. , ζ ′ .. are transition functions for another atlas then (after choosing a common refinement) the two classes match under the isomorphism r # , i.e. r # ζ ′ is cohomologous to ζ. Similarly, an isomorphism of pairs leads to cohomologous cocycles. Conversely, any ζ ∈Ž 1 (U • , M, g) defines an associated pair, and if ζ ′ ∈Ž 1 (V • , M, g) defines the same class as ζ .. then the two associated pairs are isomorphic. Since each class arises in such a way the assertion is proven. Let g .. , ζ .. be the transition functions of a pair over B. Since G/N is compact and B paracompact we can apply Lemma A.7 and Lemma A.8 for the family of transition functions ζ ij : U ij → Map(G/N, PU(H). I.e. we can find a refined atlas {V k |k ∈ K := I × B}, V k ⊂ U i if k = (i, x), such that on its twofold intersections V kl the restricted transition functions lift to continuous functions ζ kl : V kl → Bor(G/N, U(H)). These lifts are unique up to continuous functions V kl → Bor(G/N, U(1)). Let us denote by g lk the restriction g ji | V lk in case l = (j, y), k = (i, x) ∈ I × B. On threefold intersections the function V klm ∋ u → ζ kl (u)(g lm (u) + ) won't be continuos as a function to Bor(G/N, U(H)) in general, but it will as a function to 5 L ∞ (G/N, U(H)). So equation (6) implies that there are continuous ψ mlk : V mlk → L ∞ (G/N, U(1)) such that and it follows that δ g ψ = 1. The functions ψ klm therefore define a twistedČech 2-cocycle ψ ... ∈Ž 2 (V • , L ∞ (G/N, U(1)), g). Proposition 2.4 The construction of ψ ... defines a homomorphism of groups We do not achieve the statement that the above homomorphism is injective or surjective, so we are far from classifying pairs by this map. Dynamical Triples and their Local Structure Let P → E → B be a pair. The quotient map G ∋ g → gN ∈ G/N induces a G action on E. Definition 2.3 A decker is just a continuous action ρ : P × G → P that lifts the induced G-action on E such that ρ( . , g) : P → P is a bundle automorphism, for all g ∈ G. The existence of deckers can be a very restrictive condition on the bundle P → E. (See e.g. Prop. 2.6 below.) 
In fact, they need not exist and need not to be unique in general, but the play a central rôle in what follows, therefore we introduced an extra name. In context of C * -dynamical systems, i.e. C * -algebras with (strongly continuous) group actions, concretely, in context of the equivariant Brauer group several notions of equivalence of actions occur [CKRW]. In particular, the notions of isomorphic actions, stably isomorphic actions and exterior equivalent actions are combined to the notion of stably outer conjugate actions. We slightly modify these notion for our purposes. However, we postpone the definition until we made ourselves familiar with the local structure of dynamical triples. Definition 2.4 A dynamical triple (ρ, P, E) over B is a pair (P, E) over B together with a decker ρ : P × G → P. Let (ρ, P, E) be a dynamical tripel over B. Proposition 2.5 i) If we define ρ τ (g) := ρ( . , g) : P → P, we obtain a diagram of topolgical groups Conversely, if B is locally compact then a commutative diagram (8) defines a decker. ii) Locally, i.e. after choosing charts U i , a decker defines a family of continuous cocycles µ i : U i → Z 1 cont (G, Map(G/N, PU(H))) such that on twofold intersections U ij ∋ u the transition functions of the pair and the cocycles are related by Conversely, any family of cocycles {µ i } i∈I that fulfils eq. (9) determines a unique decker. Proof : i) The origin of the diagram is obvious. For the converse, it is sufficient to prove the result locally because the action of G on F preserves charts. Explicitly, over a chart (U i If B is locally compact the exponential law (Lemma A.4) ensures that all functions µ ′ i : (u, g, z) → µ ′′ i (g)(u, z) are jointly continuous. ii) Locally, Lemma A.4 ensures that the transposed functions µ i : U i → Map(G × G/N, PU(H)) made out of µ ′ i are well defined. The cocycle condition and the validity of (9) are immediate as well as the converse statement. It should be mentioned at this point that equation (9) (and its unitary version we consider later) is quite powerful as turns out. A first application is given in the next proposition. It is a complete answer to the existence of deckers in the case of N = 0, i.e. G = G/N. The result is well-known, but we state a proof using (9) for the convenience of the reader. Proposition 2.6 Assume N = 0 and let P q → E p → B be a pair. Then a decker exists if and only if P ∼ = p * P ′ for a PU(H)-bundle P ′ → B. Proof : If P ∼ = p * P ′ for some PU(H)-bundle P ′ → B we obtain a decker by acting on the first entry of the fibered product p * P ′ = E × B P ′ . Conversely, if a decker is given then we define a family ζ ′ ji : Claim 2 : P ∼ = p * P ′ . Proof : The bundle q ′ : p * P ′ → E has transition functions We define an isomorphism f : This is in fact a well defined global isomorphism since form eq. (9) it follows for Thus the local definition of f is independent of the chosen chart. We now introduce the notions of equivalence we mentioned earlier. Definition 2.5 Two deckers ρ, ρ ′ : G × P → P on a pair (P, E) are called exterior equivalent if the continuous function c : P × G → P which is defined by ρ( . , −g) • ρ ′ ( . , g) = c( . , g) : P → P, for g ∈ G, is locally of the following form: There exists an atlas U • such that for each chart for the transition functions g ji , ζ ji of the pair and the cocycles µ i of the decker ρ as above. 
It is clear that, if one has given a family of continuous unitary functions {c i } which satisfies (E0), (E1) and (E2) for the cocycles {µ i } of a decker ρ, the family of cocycles where · is the right action of Aut 1 (P, E) on Aut 0 (P, E) given by conjugation. This right action lifts to the sections Γ(E, P × PU(H) U(H)) of the to P associated U(H)-bundle in the following diagram, i.e. the vertical map is Aut 1 (P, E)equivariant, Therein the associated bundles are both obtained by the the conjugate action of PU(H) on the respective groups. Now, conditions (E0) -(E2) imply that we can lift c τ to a unitary cocycle c τ : Similar to Proposition 2.5 this global statement is an equivalent formulation of exterior equivalence if the base B is locally compact. Proposition 2.7 Let B be locally compact, and let ρ and ρ ′ be deckers on a pair (P, E). These two deckers are exterior equivalent if and only if c τ as defined above lifts to a unitary cocycle c τ . We do not give a detailed proof of this fact, it is again just an application of the exponential law for locally compact spaces. The next statement gives an important example of exterior equivalent deckers. Example 2.1 Let ρ be a decker on an arbitrary pair (P, E), and let v : P → P be a bundle automorphism. Then the conjugate decker ρ ν is exterior equivalent to ρ if the class [v] ∈Ȟ 1 (E, U(1)) of the automorphism vanishes. Proof : Let us denote by µ i the cocycles of the decker ρ which satisfy (9) on a chosen atlas {U i } i∈I . Because the class of v vanishes, we can assume without restriction that it is locally implemented by unitary functions v i : We check that the conditions (E1) and (E2) also holds. In fact, which proves (E1), and Between two dynamical triples we introduce a notion of equivalence based on exterior equivalence. Two dynamical triples (ρ, P, E) and (ρ ′ , P ′ , E ′ ) are isomorphic if there is a morphism (ϕ, ϑ, θ) of the underlying pairs such that ρ = ϑ * ρ ′ . The triples are outer conjugate if there is a morphism (ϕ, ϑ, θ) of the underlying pairs such that ρ and ϑ * ρ ′ are exterior equivalent on (P, E). Furthermore, we call the triples stably isomorphic (respectively stably outer conjugate) if the triples We can arrange these notions in a diagram of implications. isomorphism of dyn. triples + 3 outer conjugation of dyn. triples stable isomorphism of dyn. triples + 3 stable outer conjugation of dyn. triples The following example shall illustrate an important feature of the notion of stably outer conjugation. Example 2.2 Let (ρ, P, E) be a dynamical triple, and let λ G : G → U(L 2 (G)) be the left regular representation of G. Then the two triples (ρ, P, E) and ((Ad Proof : The triple (ρ, P, E) and its stabilisation (½ ⊗ ρ, PU(L 2 (G)) ⊗ P, E) are stably isomorphic and the triples ( By Dyn we denote the set valued functor that sends a base space B to the set of equivalence classes of stably outer conjugate dynamical triples over it, i.e. Dyn ( Our next goal is to find the link between dynamical triples and the cohomology theory we introduce now. Let M n be the abelian sheaf on B defined by M n (U) := C(U, Bor(G ×n , L ∞ (G/N, U(1)))), for n = 0, 1, 2, . . . . Let U • = {U i |i ∈ I} be an open cover of B and let g ∈Ž 1 (U • , G/N). Note that M n is a right G/N-module, for all n = 0, 1, 2, . . . , by shifting the G/N-variable. 
We consider the the double complex wherein the horizontal arrows d * are induced by the boundary operator d : (1))) of group cohomology 7 by acting point-wise on functions and the vertical arrows δ g are the twisteď Cech coboundary operators. In fact, all of the squares are commutative, hence we obtain a resulting total complex C we denote the corresponding cohomology groups, and by passing to the limit over all refinements of the open covering U • we obtain We call this group the total cohomology of B with twist g. In the same manner as explained on page 21 the limit does not depend on the covering U • and is independent of the choice of the refinement maps. It is also similar to the discussion on twistedČech cohomology that the total cohomology groups are well defined objects for the class [g] of a cocycle g up to an isomorphism. The connexion of dynamical triples and total cohomology has its origin in the local structure of triples as we explain now. Let (ρ, P, E) be a dynamical triple. Let U • = {U i |i ∈ I} be an atlas for the underlying pair with transition functions g ij , ζ ij and continuous cocycles µ i as in Proposition 2.5. One should realise at this point that when we suppress the non-commutativity of PU(H) for a moment the equations δ g ζ .. = ½, d(µ i (u)) = ½ and equation (9) are equivalent to ∂ g (ζ .. , µ . ) = ½. We lift the transition functions and the cocycles to Borel functions. This will define a 2-cocycle for the total cohomology of B; in detail: Without restriction (see equation (7)) we can assume that the atlas is chosen such that the transition functions can be lifted continuously to Borel valued 7 See A.2 functions ζ ij ; they define a twistedČech 2-cocycle δ g ζ =: ψ ... ∈Č 2 (U • , M 0 ). Further we can assume (if necessary we refine the atlas once more) that all charts are contractible. Therefore we can apply Corollary A.2 to each of the µ i and obtain continuous functions . Now, the three families of functions ψ ... , φ .. and ω . satisfy the algebraic relations i.e. (ψ ... , φ .. , ω . ) is a total 2-cocycle. Of course, one can verify this by direct computation, but indeed it is implicitly clear, because, informally 8 , we have Proposition 2.8 The assignment (ρ, P, E) → (ψ ... , φ .. , ω . ) constructed above defines a homomorphism of groups . Proof : We must check that the defined total cohomology class is independent of all choices. This is simple to verify for the choice of the atlas, and the choice of the lifts of the transition functions and cocycles. As stably isomorphic pairs have the same local description, it is also clear that stably isomorphic pairs define the same total cohomology class. In detail we give the calculation that exterior equivalent triples define the same class: Let ρ and ρ ′ be exterior equivalent deckers on (P, E). Let c i : U i × G × G/N → U(H) be such that (E0), (E1) and (E2) of Defintion 2.5 are satisfied. If µ i and µ ′ i are the cocycles for the deckers, they both satisfy (9) for the same family of transition functions ζ ji . They are related by µ Dual Pairs and Triples Of course, the whole discussion we made so far for (G, N) can be done for ( G, N ⊥ ). I.e. we can replace G by its dual group G := Hom(G, U(1)) and N by the annihilator N ⊥ := {χ|χ| N = 1} ⊂ G of N everywhere. This is meaningful as G is second countable, N ⊥ is discrete and G/N ⊥ compact (see A.1). Definition 2.6 Let B be a base space. 
i) A dual pair ( P, E) over B with underlying Hilbert space H is a sequence P → E → B, wherein E → B is a G/N ⊥ -principal fibre bundel and P → E a PU(H)-principal fibre bundle, such that the latter bundle is already trivial over the fibres of E → B. ii) A dual deckerρ is an actionρ : P × G → P that lifts the induced G action on E andρ( , χ) : P → P is a bundle isomorphisms for all χ ∈ G. iii) A dual dynamical triple (ρ, P, E) over B is a pair ( P, E) over B equipped with a dual deckerρ It is clear now how we define Par(B), Par( E, B), Dyn(B), Dyn( E, B) and how all statements we have achieved so far translate to dual pairs and triples. Topological Triples We introduce topological triples built out of a pair and a dual one. They were introduced first in [BRS] under the name T-duality triples in the special case G = R n , N = Z. Our definition won't be exactly the same as in [BRS] -we comment on this in section 3.3. There is a canonical U(1)-principal fibre bundle over G/N × G/N ⊥ which is called Poincaré bundle. We recall its definition. Let where the action of N ⊥ is defined by (z, Indeed, U(1) acts freely and transitive in each fibre by multiplication in the third component and local Dually, there is a second U(1)-bundle R : We denote theČech classes inȞ 1 (G/N × G/N ⊥ , U(1)) which these bundles define by [Q] and [R]. Definition 2.7 The class Of course, in this definition we made a choice, but up to a sign there is none. We now turn to the definition of topological triples. Let P → E → B be a pair and let P → E → B be a dual pair with same underlying Hilbert space H. We consider the following diagram of Cartesian squares I I I I I I I I I | | y y y y y y y y y y y s s s s s s s s s s s s B . Assume that there is a PU(H)-bundle isomorphism κ : E × B P → P × B E which fits into the above diagram (12), i.e. it fixes its base E × B E. Let us choose a chart U i ⊂ B common for the pair and the dual pair and trivialise (12) locally. For each u ∈ U i this induces an automorphism κ i (u) of the trivial PU(H)-bundle over G/N × G/N ⊥ , Definition 2.8 We say κ satisfies the Poincaré condition if for each chart U i and each u ∈ U i the equality [κ i (u)] = π + p * 1 a + p * 2 b holds, for the Poincaré class π and some classes a ∈Ȟ 1 (G/N, U(1)) and b ∈Ȟ 1 ( G/N ⊥ , U(1)). Here p 1 , p 2 are the projections from G/N × G/N ⊥ on the first and second factor. Note that in this definition the classes a, b are just of minor importance. They are manifestations of the freedom to choose another atlas as they vary under the change of the local trivialisations. In fact, one can always modify the local trivialisations of the given atlas such that a and b vanish. Definition 2.9 A topological triple κ, (P, E), ( P, E) over B is a pair (P, E) and dual pair ( P, E) over B (with same underlying Hilbert space H) together with a commutative diagram wherein all squares are Cartesian and κ is an isomorphism that satisfies the Poincaré condition. We call two topological triples κ, (P, E), ( P, E) and κ ′ , (P ′ , E ′ ), ( P ′ , E ′ ) (with underlying Hilbert spaces H, H ′ respectively) equivalent if there is a morphisms of pairs (ϕ, ϑ, θ) from (P, E) to (P ′ , E ′ ) and a morphism of dual pairs (φ,θ,θ) from ( P, E) to ( P ′ , E ′ ) such that the induced diagram is commutative up to homotopy, i.e. theČech class of the bundle automor- , E ′ ) are equivalent for some separable Hilbert space H 1 . The meaning of the index H 1 is stabilisation as in equation (4). 
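For orientation, the Poincaré bundle admits the following concrete description in one common convention (a standard construction which may differ in sign conventions from the one intended above). Set
\[
Q \;:=\; \bigl(G/N \times \widehat{G} \times U(1)\bigr)\,/\,N^{\perp},
\qquad
\chi\cdot(gN,\hat g,z)\;:=\;\bigl(gN,\ \hat g+\chi,\ \langle\chi,g\rangle\,z\bigr)
\quad(\chi\in N^{\perp}),
\]
which is well defined because ⟨χ, g⟩ depends only on gN when χ ∈ N ⊥ ; projection onto the first two coordinates exhibits Q as a U(1)-principal bundle over G/N × Ĝ/N ⊥ . Specialising to G = R, N = Z (so that Ĝ ≅ R and N ⊥ ≅ Z) gives
\[
Q \;\cong\; \bigl(\mathbb{T}\times\mathbb{R}\times U(1)\bigr)/\mathbb{Z},
\qquad
n\cdot(x,\hat y,z)\;=\;\bigl(x,\ \hat y+n,\ e^{2\pi i\,n x}\,z\bigr),
\]
the classical Poincaré line bundle over T × T̂; the dual bundle R is obtained by exchanging the rôles of the two factors.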
Stable equivalence will be the right choice of equivalence for us, and we introduce the set valued functor Top which associates to a base space B the set of stable equivalence classes of topological triples, i.e. If we choose a G/N-bundle E → B we can consider all the topological triples with this bundle fixed and those stable equivalences for which the identity over E can be extendend to a morphisms of the underlying pairs. We define Remark 2.4 We already stated in Remark 1.1 that our notion of topological triples in the case of G = R n , N = Z n and the notion of T-duality triples as found in [BRS] do not agree. Nevertheless the two notions lead to the same isomorphism classes, but we postpone this clarification to section 3.3, Lemma 3.7. By definition, the Poincaré class π has a geometric interpretation in terms of the Poincaré bundle (11). For our purposes it will be important that we can give an analytical description of π. Lemma 2.3 Choose a Borel section σ : G/N → G and an arbitrary sectionσ : G/N ⊥ → G of the corresponding quotient maps. Then i) the map 9 is continuous and independent of the choice ofσ. Therefore it defines (a bundle isomorphism of the trivial PU(L 2 (G/N, H))-bundle and) a class and this class is independent of the choice of σ; In fact, the sequential continuity of this map 10 follows by dominated convergence. Therefore the composition To see that the class of κ σ is independent of σ it is sufficient to recognise that κ n , defined by the same formula as κ σ , is unitary implemented for any Borel function n : (1), and the class of κ σ is by definition the class [κ .. ] of the cocycle κ .. . We obtain wherein the last equality identifies N ⊥ with G/N. Therefore the class of κ σ coincides with the negative of the class of the bundle Q. Of course, the situation is symmetric and there is an analogous statement involving the class of bundle R. In that case, if we replace everything by its dual counterpart, we deal with the function Lemma 2.4 The class ofκσ is the negative of the Poincaré class The argument of Ad is a continuous, unitary expression, since the left regular representations λ G/N ⊥ and λ G/N are (strongly) continuous. T-Duality In the last sections we have introduced our main objects (dynamical and topological triples). In the following sections we single out specific subclasses of those and show that they are related to each other. In addition, we show that their relations are precisely those which are obtained from the associated C *dynamical picture. Most part of this C * -algebraic structure been observed in context of continuous trace algebras alone in [RR,Thm 2.2,Cor. 2.5], [OR,Cor. 2.1] and more recent in [MR,Thm. 3.1] with applications to T-duality. However, we establish a different approach by use of the local structure of the underlying objects. In particular, we do not need any assumption about local compactness of the underlying spaces. If we assume our base spaces B to be locally compact, then the bundles E, E over B will be locally compact and spectra of the involved continuous trace algebras; but the proof we present is independent of such an assumption. Moreover, it has the advantage of being explicit enough to point out the connexion between the C * -algebraic approach and the topological approach to T-duality. Notation: In several proofs of the following sections we have to check the validity of local identities. All of these are straight forward computations in general, but to keep the formulas readable we drop the base variable u ∈ B. 
E.g. an identity like We indicate this with the label u before any of these computations. The Duality Theory of Dynamical Triples We start with the definition of an important subclass of dynamical triples. Let (ρ, P, E) be a dualisable dynamical triple over B. Choose a sufficiently refined atlas {U i |i ∈ I} in the sense that the transition functions ζ ij and cocycles µ i lift continuously to ζ ij and µ i . Then let (ψ ... , φ .. , ω . ) be the associated 2cocycle. Since the triple is dualisable, we can assume that ω . is in the image of d * , let ω . := d * ν . . We can modify µ i by multiplying with ν −1 i . Therefore we can assume without restriction that d(µ i (u)) = ½ and ω i = 1, for all i ∈ I, u ∈ U i . We are going to define a set of transition functionŝ for all u ∈ U ji , n ∈ N and (almost) all z ∈ G/N. For g = n ∈ N the right-hand side vanishes, hence theČech cocycle equation follows. In the next lemma we state some properties of the lifted cocycles µ i . It will be a useful technical tool later. Lemma 3.1 The maps are continuous; and for all u ∈ U ji , z ∈ G/N the formula holds. Proof : (i) Let n ∈ N, g ∈ G and z ∈ G/N, then . Therein the right hand side is continuous in g, hence the left hand side is continuous in z ′ := gN. As the left hand side is symmetric in z and z ′ it is continuos in z. (ii) Let (u α , z α ) → (u, z) be a converging net. Let x α := z α − z and choose g α → 0 ∈ G such that g α N = x α . Such g α exist -take a local section of the quotient (Lemma A.1). Then The second factor converges to µ i (u)(0, z) = ½. For the first factor, note that (iii) We shall show that ν( . ) → ν(σ( )) is a continuous map from the multiplication operators L ∞ (G, U(H)) to L ∞ (G/N, U(H)). Indeed, let f ∈ L 2 (G/N) and let χ be the characteristic function of σ(G/N) (iv), (v) The fourth and fifth statement follow directly from (i) and (ii) and from the definition of φ ji on page 30. We now turn to the definition of the projective unitary transition functionŝ ζ ji for the dual pair, but before we remark on the local definition we make. Remark 3.1 The ad hoc definition of the transition functions of the dual pair by formula (18) below may seem very unsatisfactory, because one cannot even guess the origin of this formula. In Theorem 3.8 we will see that the crossed product G × ρ Γ(E, F) (see appendix A.4) of the associated C * -algebra of sections Γ(E, F), F := P × PU(H) K(H), can be explicitly computed by a fibre-wise, modified Fourier transform. The behaviour of this transformation under the change of charts will lead finally to formula (18) (and shows that the crossed product is isomorphic to the associated C * -algebra of sections of the dual pair). However, we are in the pleasant situation that we can avoid the C * -algebraic apparatus at this point and can formulate the theory in bundle theoretic terms only. Therein λ G/N is the left regular representation on L 2 (G/N), and κ σ is taken from Lemma 2.3. Indeed, this defines a continuous map but its definition involved several choices, namely, the atlas, the liftings ζ ij , µ i and the section σ. Proof : We have to show that δĝζ = ½. We insert (18) and obtain u The argument of Ad has simplified to a multiplication operator, so it remains to show that it is a constant in U(1). By use of δ g φ = d * ψ and d * φ = 1 we continue ∈ N this can be transformed by use of the definition ofĝ ji This is the desired result. 
To check that all the choices involved have no effect on the class of this cocycle is straight forward, and we skip the tedious computation here. Let us define a pair forĝ ij ,ζ ij explicitly. Let for one easily checks that outer conjugate triples define isomorphic duals. We are going to improve this statement in the next theorem. Let us define a family of projective unitary 1-cocycles by the following simple formula. Let χ ∈ G,ẑ ∈ G/N ⊥ , u ∈ U i , then we definê Therein σ : G/N → G is the same Borel section which we have used to defineζ ji . By dominated convergence 11 , it is clear that Theorem 3.2 The family {μ i |i ∈ I} defines a dual deckerρ on the pair ( P, E), and we obtain a bijection from the set of dualisable dynamical triples to the set of dualisable dual dynamical triples, 11 G is first countable. In formulas (16), (18) and (20) we can replace everything by its dual counterpart, i.e. we interchange the rôle of triples and dual triples, then we obtain a map Moreover, these two maps are natural and inverse to each other, so we have an equivalence of functors Proof : To see that the cocyclesμ i define a decker we have to verify that We just compute the right-hand side u This establishesρ. A careful look at this calculation shows thatφ .. indeed is the second term of the total dual cocycle (ψ ... ,φ .. , 1) defined by our constructed dual dynamical triple. It is easy to see that another choice of the section σ alterŝ µ i andζ ji precisely in such a way that they define an isomorphic pair. So we have established a map Dyn † (B) → Dyn † (B) and, by replacing everything by its dual, a map in opposite direction. To prove the remaining assertions of the theorem, we shall apply our construction twice. We will find that the double dual (ρ, P, E) is isomorphic to (Ad • λ G ⊗ ρ, PU(L 2 (G)) ⊗ P, E). In particular, we already see from the definition ofφ ji that the double dual bundle E has cocycleĝ ji (u) :=φ ji (u)( , 0)| N ⊥ = g ji (u), so there is θ : E ∼ = E. We compute the double dual cocyclê By definition, we have by (18) and (15) With equation (17) and the definitions of κ σ , φ ji , . . . this reads and after some intermediate steps we find This already proves part of the second half of the theorem, for we see that the double dual is isomorphic to a pair with transition functions For general reasons such a pair is isomorphic to a pair with transition function ζ ji , for is a unitary cocycle. However, we can give a concrete isomorphism. Let us denote by F : L 2 ( G/N ⊥ ) → L 2 (N) the Fourier transform, then we obtain a multiplication operator We have to compute the behaviour of the double dual decker under this isomorphism, i.e. we have to compute The last equality is just the cocycle condition for µ i . We can make a further manipulation by use of the following isomorphism (gN), gN). Its inverse is given by (S −1 ( f ))(n, z) := f (n + σ(z)), and it is immediate to verify that This implies for the cocycle that To summarise, we have shown that there is an isomorphism of dynamical triples So, as we discussed in Example 2.2, the double dual triple is outer conjugate to the triple we started with. We finally comment on the naturality of the defined maps. Let f : B ′ → B be a map of bases spaces. Then we have a diagram On the local level pullback with f * is the purely formal substitution of u ∈ B by f (u ′ ) for u ′ ∈ B ′ in all formulas, and it follows that the diagram commutes. This proves the theorem. 
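For the reader's convenience, the isomorphism S appearing at the end of the proof can be written out explicitly (a reconstruction from the stated formula for its inverse; σ : G/N → G denotes the chosen Borel section):
\[
S\colon L^{2}(N)\otimes L^{2}(G/N)\;\cong\;L^{2}(N\times G/N)\;\longrightarrow\;L^{2}(G),
\qquad
(S f)(g)\;:=\;f\bigl(g-\sigma(gN),\,gN\bigr),
\]
with inverse (S −1 f )(n, z) = f (n + σ(z)) as used above; note that g − σ(gN) ∈ N, so the formula makes sense.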
The Relation to Topological T-Duality So far we have found that dualisable dynamical triples and dualisable dual dynamical triples have the same isomorphism classes. We demonstrate that the theory developed so far is intimately connected to topological T-duality. be given by the formula for (u, z,ẑ) ∈ U ij × G/N × G/N ⊥ . By construction, the Poincaré condition will be satisfied automatically. We are able to do this calculation on the projective unitary level directly, but later we need the same calculation on the unitary level, so we prepare ourselves first with a choice of liftsζ ji . Now, we calculate The variableẑ only occurs in σ(ĝ ji +ẑ) −σ(ẑ), −σ(z) . To the remaining expressinon we can apply equation (17) of Lemma 3.1 which is valid on the unitary level up to a U(1)-valued perturbation α ′ ji : U ji → L ∞ (G/N, U(1)). We obtain This establishes the existence of κ top . It remains to check that the constructed equivalence class of the topological triple only depends on the equivalence class of the dynamical triple, but this not difficult to establish. We just mention that if we start with an exterior equivalent decker which differs locally by c i : U i × G × G/N → U(H) from the cocycle µ i , then κ top differs by the null-homotopic bundle automorphism which is locally of the form c i (u, −σ( + z), z). Finally, we remark that τ is natural. Indeed, let f : B ′ → B be a map of bases spaces. Then there is a commutative diagram because in our local construction pullback with f is just pullback of the underlying locally defined functions by f which corresponds to the formal substitution of u ∈ B by f (u ′ ) for u ′ ∈ B ′ in all formulas. Our aim is to construct an inverse of τ(B) : Dyn † (B) → Top(B). In particular, we wish to construct from the data of a topological triple P × B E $ $ I I I I I I I I y y s s s s s s s s s s s s B a decker ρ on (a pair stably isomorphic to) (P, E). The space E × B P clearly has a G/N-action, and by use of κ we can define a G/N-action on P × B E. The leading idea is that the Poincaré condition enables us to push forward this action down to a G-action on P. We will see that this fails in general as an obstruction against a decker occurs. Since κ satisfies the Poincaré condition each κ i can be written in the form wherein κ σ is from Lemma 2.3 and κ a i , κ b i are some continuous projective unitary functions and v i : is a continuous unitary function. We introduce some short hands to get rid of κ a i and κ b i . We let Let us choose, if necessary after a refinement of the given atlas, lifts ζ for some continuous α ji : U ij → L ∞ (G/N × G/N ⊥ , U(1)) and κ σ (z,ẑ) := σ(ẑ), σ( − z) − σ( ) ∈ L ∞ (G/N, U(1)) ⊂ U( H); σ andσ are both chosen to be Borel. By direct computation, it follows that for the twistedČech cocycles ψ ... · ½ := δ g ζ ′ .. andψ ... · ½ := δĝζ ′ .. . It is our task now to use the Poincaré condition of the isomorphism κ to deduce a more concrete formula for α ji from which we can extract the existence of a decker. It will be sufficient to investigate the structure of Note for the next lemma that β ji (u, z) equalsκσ(g ji (u) + z,ĝ ji (u)) −1 up to conjugation by the unitary implemented Ad(λ G/N ⊥ (ĝ ji (u))), so theČech classes [β ji ] and [(u, z) →κσ(g ji (u) + z,ĝ ji (u)) −1 ] inȞ 1 (U ji × G/N, U(1)) defined by these functions agree. (Cp. Lemma 2.4) The Hilbert space H occurs in the following to force that certain operators which are defined on different tensor factors of H ⊗ L 2 ( G/N ⊥ ) ⊗ H commute. 
Thus we have equality of theČech classes This implies that A ji ⊗ ½ H equals β ji ⊗ γ ji up to Ad of a continuous unitary function w ji : U ji × G/N → U(L 2 ( G/N ⊥ , H)). w ji takes values in the subgroup L ∞ ( G/N ⊥ , U ab ( H)) only, since A ji , β ji and γ ji are maps into the subgroup PL ∞ ( G/N ⊥ , U ab ( H)) ⊂ PU(L 2 ( G/N ⊥ , H)). Finally, we just interchange the order of the Hilbert spaces in the tensor product L 2 ( G/N ⊥ ) ⊗ H to H ⊗ L 2 ( G/N ⊥ ). This will be convenient later. Remark 3.2 By passing to a refined atlas, we can always achieve that γ ji = ½, for we can apply Lemma A.8 to the functions U ji ∋ u →ζ ′ ji (u)(0). In that case we have w ji (u)(z)(ẑ) ∈ U(1). It is important to note that although w ji depends on the choice of γ ji the term (d * w .. ) ji (u) = d(w ji (u)) does not; here d is the (first) boundary operator of group cohomology The sheaf of continuous functions into the range (or source) of d is a G/N × G/N ⊥module in the obvious way. So the following statement is meaningful for the twistedČech boundary operator δ g .. ×ĝ .. . Proof : We must show that the constructed class only depends on the isomorphism class of the triple. Let us check all choices in reversed order of their appearance. Firstly, the choice of w ji is only unique up to a continuous scalar function U ji × G/N → U(1), so ϕ ... = δ g×ĝ d * w .. changes by a boundary. Secondly, the choice of γ ji , is only unique up to a unitary implemented function U ji → U ab (H), so w ji may change by this function. But this function is independent of z ∈ G/N, so d * w ji does not change. Thirdly, if we choose different lifts of the transition functions ζ ji , this would change α ji by a function U ji → L ∞ (G/N, U(1)), so A ji does not change. Fourthly, if we choose different lifts of the transition functionsζ ji , then α ji is changed by a function U ji → L ∞ ( G/N ⊥ , U(1)), so A ji , thus w ji , changes by a function independent of z ∈ G/N, so d * w ji does not change. Fifthly, if we choose another atlas for our construction, we can take a common refinement and the normalisation procedure ζ ji → ζ ′ ji ,ζ ji →ζ ′ ji , leads to the same equations as above, except we restricted us to the refined atlas, so the class of ϕ ... is not changed. Sixthly, if we start with a topological triple which is isomorphic to (κ, (P, E), ( P, E), then, locally, the isomorphisms of the underlying pairs have the same effect as a change of the atlas which does not change the class of ϕ ... . However, we must take care of the homotopy commutativity of diagram (14). So if κ ′ is homotopic to κ , then they differ locally by a continuous unitary function v ′ i : U i × G/N × G/N ⊥ → U( H), and equation (25) becomes for a scalar α ′′ ji : U i × G/N × G/N ⊥ → U(1). It follows from the three equations for α ji , α ′ ji and α ′′ ji that α ′ ji = α ji α ′′ ji . So w ji changes by α ′′ ji . But α ′′ .. is a cocycle δ g×ĝ α ′′ .. = 1 which is easily computed by its definition (29), so δ g×ĝ w .. and ϕ ... do not change. Finally, we just remark that the defined map is natural with respect to pullback. I.e. if f : B ′ → B is a map of base spaces, then there is a commutative diagram This proves the lemma. The quotient map G → G/N induces a map on the twistedČech groupš and the map of Lemma 3.4 has an obvious factorisation Top(E, B) for all n ∈ N. 
We denote by Top as (B) respectively Top s (B) the set of all almost strict respectively strict topological triples over B; so we have obvious inclusions Remark 3.3 For the class of a topological triple [(κ, (P, E), ( P, E))] ∈ Top(B) its class in Top(E, B) is only well-defined up to the action of Aut B (E) on Top(E, B). However, the the vanishing of the obstruction class in Definition 3.2 is independent of the possible choices, so Top as (B) and Top s (B) are well-defined. We will see that strict and almost strict play a major rôle in our theory. The following two lemmata give a first feeling. Proof : In equation i.e. we can choose w ji such that w ji (u)(z) does not depend on z. Of course, the choice of w ji is only determined up to a scalar U ji × G/N → U(1), but in any case χ ji · ½ := d * w ji defines a function U ji → Z 1 cont (G, Map(G/N, U(1))), so by construction ϕ kji = (δ g χ .. ) kji is a boundary. The next (technical) lemma will be the crucial point in the construction of a decker from the data of a topological triple. Lemma 3.6 Assume (κ, (P, E), ( P, E)) is almost strict. Then we can find a (sufficiently refined) atlas {U i |i ∈ I} such that for w ji from above there exists a family m i : The proof uses a standard Zorn's lemma argument as it can be found in [Di]. First we note that the space Z := Z 1 cont (G, Map( ) a group homomorphism, then the push-forward H # : [0, 1] × Z → Z preserves the cocycle relation and is a contraction. We can assume that the atlas is sufficiently refined such that, firstly, ϕ ... from (27) is a boundary, i.e. there exist χ ji : U ji → Z 1 cont (G, Map(G/N, U(1))) such that (δ g χ) kji = ϕ kji , and secondly, as B is a paracompact Hausdorff space, even the closed cover i∈I U i = B is locally finite and that w ji , χ ji are well-defined on U ji , for all j, i ∈ I. We let . The constructed family {m i } is the last ingredient to write down an explicit formula for a decker ρ dyn on (a stabilisation of) the pair (P, E). It is not hard to guess that the cocycles m i will be an essential part of the cocycles µ dyn i which we will define to implement the decker ρ dyn locally. But unfortunately the constructed family {m i } is by no means unique, and this non-uniqueness is the origin of the following discussion. As a matter of fact this discussion will simplify drastically when we consider the special case of G = R n with lattice N = Z n in the next section below. However, now we continue with the discussion of almost strict triples from above and work out the general framework. Let ϕ ... be the twistedČech cocycle from equation (27). The triple under consideration is assumed to be almost strict, so (after refining the atlas U • ) we have ϕ kji = (δ g χ .. ) kji for a chain χ .. ∈Č 1 (U • , Z 1 cont (G, Map(G/N, U(1)), g .. ) as in the proof above. Obviously, this chain χ .. is only well defined up to a cocycle . A choice of χ .. determines the family {m i } in equation (32) not completely but up to a family n i : satisfies (δ g×ĝ n . ) ji = ½. We already mentioned that m i will be part of the cocycles µ dyn i . Then it will turn out that the two families {m i } and {m i n i } define exterior equivalent deckers. However, this is not the case when make we another choice of of χ .. , say χ .. χ 1 .. for χ 1 .. as above. Then {m i } must be replaced by {m i m 1 i } for m 1 i being such that (δ g×ĝ m 1 . ) ji = χ 1 ji · ½, and we will find that the corresponding deckers are exterior equivalent if and only if the class [χ 1 .. 
] ∈Ȟ 1 (B, Z 1 cont (G/N, Map(G/N, U(1))), g .. ) vanishes. In other words, for each class [χ 1 .. ] we obtain a different class of dynamical triples. It is then the obvious question, whether the different dynamical triples still have something in common. Or if x is the class of almost strict topological triple and x dyn is the class of one of the possible dynamical triples indicated, is there any relation between x and τ(B)(x dyn )? Can we describe the difference, in particuar, when do they equal? A partial answer of this question is already given in Lemma 3.5 which shows that x and τ(B)(x dyn ) only can equal if x is strict. Consider the short exact sequences They both induce long exact sequences in (twisted)Čech cohomology. The relevant part for us is Due to equation (30) restriction χ ji (u)| N defines a class [χ .. | N ] ∈Ȟ 1 (B, N, g .. ) and by diagram (33) a class inȞ 1 (B, G/N ⊥ ), i.e the class of a G/N ⊥ -principal fibre bundle E χ .. → B. Exactness of the columns in diagram (33) implies that this class is only well-defined up to the quotient group Q G /q(Q G/N ), because the class varies with the choice of χ .. . The bundle E χ .. will be connected to the description of τ(B)(x dyn ). Namely, if x = [(κ, (P, E), ( P, E))] and x dyn = [(ρ dyn , P dyn , E)] (P dyn will be a stabilisation of P with a certain Hilbert space), then the result will be If the topological triple is strict, then by definition we can choose χ .. such that χ ji (u)| N = 1, so the class [χ .. | N ] vanishes and the bundle E χ .. is trivialisable. In that case we give a description of P dyn in the proof of the theorem below. If the topological triple is in the image of τ(B), then we even know more as the next theorem makes precise. Therein P(M) denotes the power set of a set M, and we denote the image of τ(B) by Top im (B), so Moreover, δ ? (B) is natural in the sense that it extends to a natural transformation of functors δ ? : Top ? → P • Dyn † , ? = as, s, im. Proof : The proof consists of several steps which are rather technical, so we first give an overview of the proof: In Step 1 we construct from the data of an almost strict topological triple a decker on the underlying pair (after stabilisation). The local construction will depend on many choices and it is the statement of Step 2 that almost all of these choices do not interfere with the equivalence class of triple defined. An exception is the choice of χ .. (above) which will cause the Q G -torsor and the q(Q G/N )-subtorsor structure in (a), (b). Then in Step 3 we compute the composition δ im (B) • τ(B) and find that this is the identity on Dyn † (B), hence τ(B) is injective and δ im (B) is surjective. In Step 4 we compute the reverse composition τ(B) • δ s (B), which will lead us to the result stated in (b). In particular this calculation shows that τ(B) • δ im (B) is the identity on the image of τ(B). As both compositions δ im (B) • τ(B) and τ(B) • δ im are the identity maps the statement of (c) is clear then. In Step 5 we finally comment on the naturality of the maps. In the hole of the proof we maintain the notation introduced above. The idea is simple. All we have to do is to write down an explicit formula for a family of cocycles which satisfies equation (9) for the transition functions of the pair (P, E). Assume the topological triple has underlying Hilbert space . . be as above. We can assume that the atlas U • is sufficiently refined such that ϕ ... is a boundary. So we choose χ .. as above such that ϕ ... = δ g χ .. 
, then let m i be defined by (32) in the last lemma. (u)). Note that this need not be true for λ G/N ⊥ (ĝ ji (u)) alone as G/N ⊥ may be finite, L 2 ( G/N ⊥ ) finite dimensional and U(L 2 ( G/N ⊥ )) not contractible. We conclude that there exists a family λ i : Let us consider For example we may take λ i (u)(z) = Ad(l 0 i (u) ⊗ ½ ⊗ ½), but it will be important that we allow ourselves to have the freedom of a more flexible form of λ i . One should also note that equation (34) is stated for unitary and not for projective unitary operators. We define a family (of local isomorphisms ) ϑ i : which we use to define another family (of cocycles) The stabilised pair (P dyn , E dyn ) := (PU(H ⊗ L 2 ( G/N ⊥ )) ⊗ P, E) has transition functions g ji , ζ dyn ji Although lengthy, this is a straight forward calculation. Indeed, we have u The expression inside the squared brackets can be rewritten by equation (25). This reads If we insert furthermore the results of the previous lemmata, we obtain As σ(g ji + z + hN) − h and σ(g ji + z) differ by some element in N, we obtain the identity Therein the first factor is a scalar and the second defines β ji . So the main calculation continues and again by equation (25) . This shows that the µ dyn i s define a decker ρ dyn on (P dyn , E dyn ). By construction is a continuous and unitary implemented 1-cocycle. Step 2: The construction of Step 1 defines a map δ as (B) : Top as (B) → P(Dyn † (B)) and δ as (B)(x) is a Q G -torsor, for each x ∈ Top as (B). If x is strict then there is a distinguished q(Q G/N )-subtorsor X dyn , and if x is in the image of τ(B) we single out a specific element x 0 ∈ X dyn . We then just define δ s (B)(x) := X dyn and δ im (B)(x) := x 0 in the respective cases. We have to show that all choices involved do not change the class of the dynamical triple (ρ dyn , P dyn , E dyn ). That the choice of the atlas has no effect on the class of the constructed dynamical triple is rather obvious. It is less obvious for the choices of λ i (or l i ), m i and the homotopy class of κ, i.e. the choice of an isomorphic topological triple. We convince ourselves that three other choices of these define exterior equivalent deckers. Firstly, if we choose another λ i , say λ ′ i = Ad(l ′ i ⊗ ½ ⊗ ½), then by equation defines an automorphism of P dyn , and theČech class of this automorphism [ν . ] ∈Ȟ 1 (B, U(1)) vanishes. Thus we are precisely in the situation of Example 2.1. Secondly, equation (32) shows that m i is unique up to functions n i : ))) such that δ g×ĝ n . = ½. We change the atlas of the the constructed dynamical triple such that we have transi- . Assume m i is changed by n i . Then because of the commutativity of U ab (H) we have n i (u)(h + g, z) = µ 1 i (u)(g, z) −1 (n i (u)(h, z + gN))n i (u)(g, z); and because of the λ i -terms in ϑ i we have that ζ 1 ji (u)(z)(n i (u)(h, z)(..)) = n i (u)(h, z)(.. −ĝ ji (u)) = n j (u)(h, g ji (u) + z)(..). So c i := n i defines an exterior equivalence (Definition 2.5). Thirdly, if κ ′ is homotopic to κ, theČech class of the bundle automorphism κ ′ • κ −1 vanishes, and the change of µ dyn i caused by this automorphism is again covered by Example 2.1. Let us now discuss the choice of χ .. . We always have the freedom to change it by a cocycle χ 1 ∈Ž 1 (U • , Z 1 cont (G, Map(G/N, U(1))), g .. ). The consequence is a change of m i by m 1 i : is a boundary, then the deckers ρ dyn and ρ dyn 1 which correspond to m i and m i m 1 i are exterior equivalent, for we can define n i := χ 2 i m 1 i , so δ g×ĝ n . 
= ½ which leads us to the case we already discussed above. If the class [χ 1 .. ] does not vanish, then it follows that the corresponding deckers are not exterior equivalent and the corresponding triples are not stably outer conjugate. Thus it follows that for each [χ 1 .. ] ∈ Q G we get a different dynamical triple, and we define δ as (B)(x) ⊂ Dyn † (B) to be the set of these dynamical triples. It is obvious that Q G acts freely and transitively on δ as (B)(x), i.e. it is a Q Gtorsor. If the triple x is strict, then we can choose χ .. such that χ ji (u)| N = 1, and this property is preserved by the action of q(Q G/N ) ⊂ Q G . So we singled out a specific q(Q G/N )-subtorsor X dyn ⊂ δ as (B)(x), and we define δ s (B)(x) := X dyn . If x is in the image of τ(B), then we saw in in the proof of Lemma 3.5 that w ji satisfies d * w ji (u)(z) ∈ U(1) · ½, so it is meaningful to define χ ji := d * w ji . Then we let x 0 ∈ X dyn be the element which corresponds to this particular choice of χ .. , and we put δ im (B)(x) := x 0 . Step 3: The formal calculation is similar to to what we did in the of the proof of Theorem 3.2. Let (ρ, P, E) be a dualisable dynamical triple having transition functions g ji , ζ ji and cocycles µ i . Recall the definition of τ(B) in particular of the topological triple (κ top , (P top , E), ( P, E)) out of which we must compute (ρ dyn , P top dyn , E). and we shall compute µ dyn i . We first give one possible, explicit formula for λ i (u)(z). LetF : L 2 ( G/N ⊥ ) → L 2 (N) be the inverse Fourier transform, then (1)), and the definition ofĝ ji implies Therefore we have which is continuous by Lemma 3.1 (i), hence λ i (u)(z) = Ad(l i (u)(z)). From the local definition of κ top in equation (21) we read off that In the proof of Lemma 3.5 we already observed that in the case of a topological triple constructed out of dynamical one we have d * w ji (u)(z) ∈ U(1) · ½, and by the definition of δ im (B) we choose χ .. := d * w .. . Therefore we can choose m i = ½. As a consequence the first H-slot of the tensor product H ⊗ L 2 ( G/N ⊥ ) ⊗ L 2 (G/N) ⊗ H will contain the identity operator ½ only. We have all ingredients N)) and by the cocycle identity for µ i this transforms to Let S : L 2 (N) ⊗ L 2 (G/N) → L 2 (G) be the isomorphism introduced on page 47. There we discussed its behaviour with respect to the left regular representation on L 2 (G). This leads us finally to Thus we have shown that (ρ, P, E) and (ρ dyn , P top dyn , E) are outer conjugate. Let (κ, (P, E), ( P, E)) be a topological triple with underlying Hilbert space H = L 2 (G/N) ⊗ H. For the first steps we only need the assumption that the triple is almost strict. So let (ρ dyn , P dyn , E dyn ) be the dynamical triple which we constructed in Step 1. As we saw E dyn is nothing but E itself. From this dynamical triple we are going to construct the topological triple (κ top , (P dyn top , E dyn ), ( P dyn , E dyn )) (according to the construction of τ(B)) which we then have to compare with (κ, (P, E), ( P, E)). To compute the dual pair ( P dyn , E dyn ) we with determination of φ dyn ji the second term of the total cocycle (ψ dyn ... , φ dyn .. , 1) of the constructed dynamical triple (ρ dyn , P dyn , E dyn ). By equation (16) for a lift γ ji : U ji → U(H) of γ ji ; therein all notation is as above. Then by definition and if we repeat the calculation of Step 1 on the unitary level, there are four equalities which must be modified by U(1)-valued functions. 
Namely, equation (36) by δ(u)(z + hN), equation (37) by χ ji from equation (32), equation (39) by the scalar term of (38) and equation (40) by δ(u)(z) −1 . We finally find It follows that E dyn ∼ = E if and only if the topological triple we started with is strict and we have chosen χ .. such that χ ji (u)| N = 1. Indeed, the cocycles of these bundles areĝ dyn To complete the computation of the dual we have to compute ζ dyn ji by equation (18). The reader should not be confused by the two different L 2 (G/N) factors in the tensor product. The first is due to the stabilisation in the definition of the dual (18), and the second is due to the Hilbert space we started with which is L 2 (G/N) ⊗ H. We use the symbols ⌣ and to distinguish multiplication operators on the two Hilbert spaces; ⌣ for the first factor, due to the definition of the dual and for the second factor as we did all the time. The dual transition functions are given by u By equation (25) and if we insert the formulas for κ σ , α ji and φ dyn ji from above, we end up with u It follows from the definition of γ ji that Ad(w ji (u)(0)(ẑ) −1 ) = γ ji (u), so after some further manipulation with all the . . . -expressions this finally transforms to u So far we did not use that the topological triple we started with is strict and that we want to compute the composition τ(B) • δ s (B). We use this now, so the G-slot of χ ji factors through G/N and in particularĝ dyn ji =ĝ ji , so we find wherein we have used the short hands At this point we can read off that the bundle defined by ξ ji is trivialisable if χ −1 .. d * w ji = ½ which is the datum we must consider when we compute τ(B) • δ im (B) (cp. Step 2). In this case let x i : U i → Map( G/N ⊥ , U(L 2 (G/N) ⊗ H ⊗ L 2 ( G/N ⊥ ))) be such that Ad(x j (u)(ĝ ji (u) +ẑ)x i (u)(ẑ) −1 ) = ξ ji (u)(ẑ). Then we have shown that η i (u)(ẑ) := η ′ i (u)(ẑ) Ad(x i (u)(ẑ) ⊗ ½ ⊗ ½) defines a family of local isomorphisms which fit together to a global isomorphism of principal bundles η : P s → P dyn . The subscript s denotes stabilisation with respect to the Hilbert space L 2 (G/N) ⊗ H ⊗ L 2 ( G/N ⊥ ). We claim that the topological triple (κ, (P, E), ( P, E)) and the constructed triple (κ top , (P dyn top , E), ( P dyn , E)) are equivalent; κ top is defined by equation (21) out of µ dyn i . This will prove the identity τ(B) • δ(B) = id im(τ(B) . We show that the diagram commutes up to homotpy. Locally, this means that there exist continuous maps u, z,ẑ)). To prove this we will use that the projective unitary group is homotopy comutative in the sense of Corollary A.1. We have u We see that the terms κ a i (u)(z) and κ b i (u)(z) occur. All terms with l 0 i , m i , v i , x i are continuous unitary maps The only bracket . . . terms not continuous as unitary maps are the two terms in the first and third last line. If we collect them, we shall better pay attention to λ G/N (z) which acts in the ⌣ -variable. Then these two terms equal and the second factor is continuous as a unitary map. Therefore we have for suitable continuous unitary maps V ′ i , V i , and we are done. In the same way as we already stated for τ, we have a commutative diagram (i) (P, E) has an extension to a strict topological triple; (ii) (P, E) has an extension to an almost strict topological triple; (iii) (P, E) has an extension to a dualisable dynamical triple. Proof : This is an immediate consequence of the previous theorem. 
The Case of G = R n and N = Z n So far we kept our analysis completely general in the sense that we did not specify the groups G, N. We now turn to the important case of G = R n with lattice N = Z n , n = 1, 2, 3, . . . . In the whole of this section we use the notation T n := R n /Z n for the torus andT n := R n /Z n⊥ for the dual torus. There should be no confusion to decide betweenT n and the dual group T n ∼ = Z n . The first thing we should check is that in case of G = R n , N = Z n our definition of topological triples agrees with the one introduced in [BRS]. The definition of T-duality triples in [BRS,Def. 2.8] differs in two points from what we stated in Definiton 2.9. The first point is that they use the language of twists [BRS,A.1] instead of PU(H)-principal fibre bundles to model T-duality diagrams. But the category of PU(H)-principal bundles with homotopy classes of bundle isomorphisms as morphisms is a model of twists, and our notion of stable equivalence of topological triples leads to the same equivalence classes as twists modulo isomorphism. This, because the notion of equivalence we use for topological triples requires the commutativity of diagram (14) only up to homotopy. To explain the second point let us consider the filtration of H 3 (E, Z) associated to the Leray-Serre spectral sequence By definition, an element In our definition of a topological triple (κ, (P, E), ( P, E)) we require that the class [P] ∈ H 3 (E, Z) of the bundle P → E lies in the subgroup F 1 H 3 (E, Z) which is equivalent to the requirement of the triviality of P over the fibres of E → B (see Definition 2.2); analogously for P. In [BRS] the definition of T-duality triples requires that the class [P] is even in the second step of the filtration [P] ∈ F 2 H 3 (E, Z); analogously for P. The following lemma states that in the case of G = R n and N = Z n these two conditions are equivalent. Proof : Let C be any 1-dimensional CW-complex and f : C → B any continuous map. We have to show that the class [P] is in the kernel of the induced map f * : H 3 (E, Z) → H 3 (C × B E, Z). As C is 1-dimensional, H 2 (C, Z) = 0 and there are no non-trivial torus bundles over it. Thus, if we pull back the topological triple (κ, (P, E), ( P, E)) along f , it becomes and the centre part of the corresponding diagram degenerates to the projections We extend this diagram to C × T n w w p p p p p p p p p p p x x p p p p p p p p p p p C | | y y y y y y y y y C × T n C ×T n , with the obvious inclusions and projection, so the diagonal composition is the identity. When we apply the the cohomology functor H 3 ( . , Z) to this diagram the vertical arrow becomes zero as it factors over H 2 (C, Z) = 0, but as f * [P] = [P C ] ∈ H 3 (C × T n , Z) is mapped by the identity from the left lower to the right upper group and as [P C ] and [ P C ] equal when pulled back to H 3 (C × T n ×T n , Z), we conclude it is zero, since its image in the right upper group coincides with the image of f * [ P] = [ P C ] under the vertical arrow. This proves that f * [P] = 0 ∈ H 3 (C × B E, Z) and the same argument shows that the corresponding statement is true for [ P]. So we observe that our functor Top : {base spaces} → {sets} is the same functor which is introduced in [BRS,Def. 2.11] under the name Triple n , and below we are going to use a central result of [BRS] about this functor, namely that the functor Top = Triple n is representable by a space R n (see Lemma 3.9 below). We now want to discuss Theorem 3.4 in the case of G = R n , N = Z n . 
In this case the question which of the topological triples are almost strict has a trivial answer and also the torsor structure of Theorem 3.4 becomes trivial. The criterion we make use of is the following lemma. Lemma 3.8 Let Z be a topological abelian group which is contractible as topological space and is equipped with a continuous (right) G/N-action. Theň Proof : Similar to the proof of Lemma 3.6, the proof makes use of the standard Zorn's lemma argument. k=1: Let {U i } i∈I be an open cover of B, and let ϕ .. ∈Ž 1 (U • , Z) be a twisted 1-cocycle. We shall construct functions χ i : U i → Z such that (δ g χ) .. = ϕ .. . B is paracompact, hence without restriction we can assume that even the closed cover {U i } is locally finite and all ϕ ij are defined on the whole of U ij . Let such that for all j, k ∈ J and u ∈ U jk : We define a partial order on K such that every chain has an upper bound. We let (J, χ . ) ≤ (J ′ , χ ′ . ) if and only if J ⊂ J ′ ⊂ I and χ j = χ ′ j , for all j ∈ J. By Zorn's lemma, let (J, χ . ) denote a maximal element of K. . Since our cover is locally finite, R is closed, but Z is contractible, therefore an extension χ a exists [DD,Lem. 4]. This contradicts the maximality of (J, χ . ), so J = I. k=2: Let ϕ ... ∈Ž 2 (U • , Z) be a twisted 2-cocycle. We shall construct functions χ ij : U ji → Z such that (δ g χ) ... = ϕ ... . Again by paracompactness of B we can assume that even the closed cover {U i } is locally finite and all ϕ ijk are defined on the whole of U ijk . Let such that for all i, j, k ∈ J and u ∈ U ijk : The set L is non-empty, since for each i ∈ I ({i}, {ϕ iii }) ∈ L. We define a partial order on L such that every chain has an upper bound. We let (J, ij , for all i, j ∈ J. By Zorn's lemma, let (J, χ .. ) denote a maximal element of L. Assume a ∈ I\J. Then let such that for all k, l ∈ K and for u ∈ U lka ψ ka (u) ψ la (u) −1 χ kl (u) · g la (u) = ϕ lka (u) The set M a is non-empty as for each j ∈ J we find ({j}, {1}) ∈ M a . This because we always have 1 · 1 · χ jj (u) · g ja (u) = ϕ jjj (u) · g ja (u) = ϕ jja (u). In the same manner as before, let (K, ψ .a ) ≤ (K ′ , ψ ′ .a ) if and only if K ⊂ K ′ ⊂ I and ψ ka = ψ ′ ka , for all k ∈ K. ≤ is a partial order on M a and every chain has an upper bound. Let (K, ψ .a ) be a maximal element. Assume b ∈ J\K. Let S := By a one line calculation we find that this definition is independent of k ∈ K. So we have a diagram S is closed, since our cover is locally finite. Thus, since Z is contractible, there is an extension ψ ba [DD,Lem. 4]. This contradicts the maximality of (K, ψ .a ), so K = J. We define ψ aa := ϕ aaa and then ψ aj (u) := ϕ aja (u) · g aj (u)ψ aa (u) · g aj (u) −1 ψ ja (u) · g aj (u), for j ∈ J. We let J ′ := {a} ∪ J and extend χ .. to J ′ by It is straight forward to check that (J ′ , χ ′ .. ) ∈ L, and as clearly (J, χ .. ) ≤ (J ′ , χ ′ .. ) we have a contradiction. Hence J = I, and the lemma is proven. We now show that Z is contractible. For a cocycle α ∈ Z we have α(0)(z) = α(0 + 0)(z) = α(0)(z)α(0)(z) for all z ∈ T n , so α(0)(z) = 1 and each α(g) : T n → U(1) is null homotopic as R n is path connected. Thus Z = Z 1 cont (R n , Map(T n , U(1))) = Z 1 cont (R n , Map 0 (T n , U(1))), Further, it is shown in [BRS,Sec. 4] that the space R n has an homotopy action of the so-called T-duality group O(n, n, Z) which is the group of 2n × 2nmatrices that fix the form Z 2n ∋ (a 1 , . . . , a n , b 1 So each element of O(n, n, Z) defines a homotopy class of maps R n → R n . 
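In the standard convention the form in question (whose display is truncated above) is the hyperbolic form on Z 2n , so that
\[
q(a_{1},\dots,a_{n},b_{1},\dots,b_{n})\;=\;\sum_{i=1}^{n}a_{i}b_{i},
\qquad
O(n,n,\mathbb{Z})\;=\;\bigl\{A\in GL_{2n}(\mathbb{Z})\ \bigm|\ q(Av)=q(v)\ \text{for all }v\in\mathbb{Z}^{2n}\bigr\}.
\]
The block matrix considered next preserves q, since it simply exchanges the a- and b-coordinates.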
In particular, the element 0 n 1 n 1 n 0 n ∈ O(n, n, Z) defines (the homotopy class of) a function T : R n → R n . T is constructed such that T • T = id R n and such that the pullback T * : Top(R n ) → Top(R n ) exchanges the underlying torus bundles 13 . To be precise, the construction of T is such that if [(κ, (P, E), ( P, E))] ∈ Top(R n ) is a topological triple, then T * maps this triple to a triple [(κ ′ , (P ′ , I I I I I I I I I } } z z z z z z z z z is a topological triple, wherein the superscript # denotes the complex conjugate 14 bundles and isomorphism. We claim that In fact, by [BRS,Prop. 7.4] the set of topological triples over R n with fixed torus bundles E ′ , E ′ is a torsor over H 3 (R n , Z) and by [BRS,Lem. 3.3] H 3 (R n , Z) = 0. In other words, there only exists one triple over R n which has underlying torus bundles E ′ ∼ = E and E ′ ∼ = E, hence equation (42) is valid. The properties of the map T which is called universal T-duality enables us to prove the statement indicated above: Proof : It suffices to show that J(B) = id, for all B ∈ CW. J is a natural transformation, so is a commutative diagram. But by construction J(B) does not change the underlying pairs of the corresponding topological triples, for any B, so by the commutativity of the diagram J(R n ) at least does not change the underlying pairs and underlying dual pairs of the corresponding triples. That J(R n ) is the identity, i.e. it does not change the equivalence class of the isomorphisms κ, follows from the trivial H 3 (R n , Z)-torsor structure of the set of topological triples with fixed torus bundles. Hence J(R n ) = id. Now, let B ∈ CW and x ∈ Top(B) be any topological triple. By the universal property of x univ , there is f : B → R n such that x = f * x univ and, by naturality of J, So J(B) = id, for all B ∈ CW. 14 Complex conjugation may be defined by taking H = l 2 N and then defining the complex conjugate bundles and isomorphism by complex conjugation of the local transition functions and local isomorphisms. We end this section with a remark on homotopic deckers. Its content is based on the fact that the functor Top| CW ∼ = [ . , R n ] is homotopy invariant. The Structure of the Associated C * -Dynamical Systems Let (ρ, P, E) be a dualisable dynamical triple and let us denote by F := P × PU(H) K(H) the associated C * -bundle. The decker ρ induces a G-action on F by [x, K] · g := [ρ(x, g), K] x ∈ P, K ∈ K(H). This action defines another action α ρ of G on the C * -algebra of sections Γ(E, F) such that (Γ(E, F), G, α ρ ) becomes a C * -dynamical system. This action is given by (α ρ g s)(e) := s(e · gN) · (−g), e ∈ E, g ∈ G, s ∈ Γ(E, F). In the same manner we obtain a dual C * -dynamical system (Γ( E, F), G, αρ) for the associated C * -bundle F := P × PU(L 2 (G/N)⊗H) K(L 2 (G/N) ⊗ H) of the dual triple (ρ, E, P) of (ρ, E, P). The essence of this section is that we can establish an isomorphism of C * -dynamical systems from the crossed product 15 of the first to the dual C * -dynamical system (Thm. 3.8) We are going to calculate the crossed product under a series of isomorphisms which again will be a local calculation. We start with the description of the simplest case, namely B being a point. In the situation of the trivial pair over the point B = { * } the sections can be identified with the continuous functions Γ(triv. pair) ∼ = C(G/N, K(H)), 15 See the section on crossed products on page 92 for notation. 
since any section s : G/N → G/N × K(H) is uniquely given by a function f such that s(z) = (z, f (z)). Then the action of g ∈ G on such a function f is obtained from for the 1-cocycle µ determined by ρ. For the C * -algebra of the dual trivial pair over the point we have Γ(dual triv. pair) ∼ = C( G/N ⊥ , K(L 2 (N ⊥ , H))). It will be convenient at some point to deal with the Hilbert space L 2 ( G/N, H) rather than with L 2 (G/N) ⊗ H. So we have to transform the cocycleμ of the dual deckerρ from equation (20) by Fourier transform F : and as in eq. (43) this gives us an action The next lemma is a simple link between F * μ and the left regular representation λ G/N on L 2 ( G/N). λ G/N ∈ Hom( G/N, U(L 2 ( G/N))) is a continuous homomorphism. Proof : First note that naturally We assume the dynamical triple to be dualisable, so we can lift µ to a unitary (Borel) cocycle µ : We now define a unitary isomorphism by the composition u := shift • (Fourier trans.) • mult µ( , ) , explicitely for χ ∈ G, α ∈ G/N. The next step is to calculate u( f × )u −1 . This is straightforward, but to keep the calculation readable we first introduce some short hands: µ F(g, z) := µ(−g, z)F(g, z), f µ (g, z) := f (g, z)µ(g, z) −1 andf 2 the Fourier transform in the second, the G/N variable only. Now and we think of f µ as a continuous family of Hilbert-Schmidt operators f µ : G → K(L 2 ( G/N, H)), i.e. we do not decide in notation between the operator f µ (χ) and its integral kernel. From the definition of the kernel f µ we obtain f µ (χ + β ⊥ )(α, γ) = f µ (χ)(α + β, γ + β), α, γ, β ∈ G/N, so the operator f µ (χ) satisfies the identity for the left regular representation λ G/N . We now use the chosen extension Λ ∈ Hom( G, PU(L 2 ( G/N, H))) from Lemma 3.10 to define T µ : It is now a lengthy but straight forward calculation to check that, firstly, T µ commutes with the * -operation, i.e. It remains to show that By definition α µ χ is just the multiplication with the character χ, so the Fourier transform gives us simply a shift in the argument by χ. We get The discussion in the previous theorem will serve as a description of the local situation of the general case. Let (ρ, P, E) be a dualisable dynamical triple, and let (ρ, P, E) be its dual. We defined the associated C * -bundles F, F above. Recall the definition ofĝ ji out of φ ji (p. 39). A priori there need not exist a continuous lift ϕ ji in the diagram but by Lemma A.8 we assume without restriction that our atlas is sufficiently refined such that ϕ ji exists. We define ϕ ′ ji (u)(gN) := φ ji (u)(g, 0) ϕ ji (u), g −1 . Although the function u → ϕ ′ ji (u) ∈ L ∞ (G/N, U(1)) need not to be continuous, the function u → Ad(ϕ ′ ji (u)) ∈ PL ∞ (G/N, U(1)) is continuous by Lemma 3.1, and we have the identity hN). In view of equation (47), equation (50) shows that the family {T i f i } i∈I defines a section in a K(L 2 ( G/N, H))-bundle over E with transition functions F * ζ ji . Up to Fourier transform, this bundle is nothing but F itself, for we find (F * ζ ji )(u)(ẑ) So the the family {T i f i } defines a section T f ∈ Γ( E, F), and we have constructed a map T : C c (G, Γ(E, F)) → Γ( E, F) which extends to an isomorphism of C * -algebras T : G × α ρ Γ(E, F)) → Γ( E, F) Then the relation T( α ρ χ f ) = αρ χ (T f ) is established by local calculation in the same manner as over the point in equation (46). Thus T is in fact an isomorphism of C * -dynamical systems. A.1 Groups Let G be a Hausdorff locally compact abelian group and N some discrete, cocompact subgroup, i.e. 
G/N is compact. Lemma A.1 (i) The quotient map G → G/N has local sections. (ii) The quotient map G → G/N has a Borel section. Proof : (i) N ⊂ G is discrete, i.e. there exists an open neighbourhood U of 0 ∈ G such that U ∩ N = {0}. Let + : G × G → G be the addition. + is continuous so + −1 (U) is an open neighbourhood of (0, 0) ∈ G × G. So there is an open neighbourhood V ⊂ G of 0 ∈ G such that V × V ⊂ + −1 (U). Let W := V ∩ (−V). Then W is an open neighbourhood of 0 ∈ G, and for all x ∈ W and n ∈ N\{0} the sum x + n / ∈ W, for x ∈ W implies −x ∈ W and in case x + n ∈ W we would find (x + n) + (−x) = n ∈ U -a contradiction. Therefore W maps injectively to G/N, and as the quotient map is open it defines a homoeomorphism from W to its image W/N. This defines a local section from W/N to G, and using addition in G/N we can move W/N all over G/N to get a local section in the neighbourhood of each point in G/N. The dual group of G is G := Hom(G, U(1)). With compact-open topology it becomes again a Hausdorff, locally compact group, and if G is second countable, then also G is. For parings of a group and its dual we will use bracket notation χ, g , α, n , · · · ∈ U(1), for g ∈ G, χ ∈ G, n ∈ N, α ∈ N. We recall part of the classical duality theorems [Ru]. Pontrjagin Duality states that the canonical map G to G is an isomorphism of topological groups. Moreover, if N ⊥ := {χ ∈ G | χ| N = 1} is the annihilator of N, then there is a canonical isomorphism G/N ∋ α → g → α, gN ∈ N ⊥ , and by the same means G/N ⊥ ∼ = N. Further, the dual group of a discrete group is compact and vice versa, so N ⊥ ⊂ G is a discrete cocompact subgroup, thus the situation is completely symmetric under exchange of N, G by N ⊥ , G. Let us denote the integration of a (compactly supported, continuous) function f : G → C against the Haar measure of G simply by G f (g) dg. For the Fourier transformf of f we use the conventionf (χ) := G χ, g f (g) dg, χ ∈ G. It extends to an isomorphism L 2 (G)→ L 2 ( G). A.2 Group andČech Cohomology For a topological G-module M let us denote by C k cont (G, M) (resp. C k Bor (G, M)) the continuous (resp. Borel) maps G k → M, k = 0, 1, 2, . . . and we use the standard notation for the cohomology groupsȞ k (U • , F). We also use the notation A to denote the locally constant sheaf of continuous functions to A, for any abelian topological group A. A.3 The Unitary and the Projective Unitary Group Let H be some infinite dimensional, separable Hilbert space with unitary group U(H) which we equip with the strong (or equivalently weak) operator topology. We denote by is continuous. Proof : A standard ε/3-argument. Let f α → f be a converging net in Bor(G, U), i.e. for each compact K ⊂ G and any w ∈ H we have We have to show that for all v ∈ L 2 (G, H) f α v − f v L 2 (G,H ) → 0. Recall that in case of the Haar measure the compactly supported functions C c (G) are dense in L 2 (G). So for any v ∈ L 2 (G, H) and ε > 0 there exist compactly supported functions h i ∈ C c (G), vectors w j ∈ H and numbers a ij ∈ C such that v − ∑ N i,j=1 a ij h i ⊗ w j L 2 (G,H ) < ε/3. Choose K := N i=1 supp h i ⊂ G and C := 1 + ∑ N i,j=1 |a ij | 2 K |h i (g)| 2 dg > 0. Then there exists an α 0 such that f α − f K,w j < ε For PU(H)-principal bundles we have the following well-known classification theorem. (See e.g. [Di,Thm. 10.8.4] for the first and of [PR] for the second statement; although therein it is not stated as below, we can carry over the proofs.) To state the theorem we introduce some notation. 
Let us denote by Iso(E) the set of isomorphism classes of PU(H)-principal bundles over E, and if P → E is a PU(H)-principal bundle, we denote by Aut 0 (P, E) the group of bundle automorphisms (over the identity of E). There is the subgroup Null(P, E) ⊂ Aut 0 (P, E) which consists of all null-homotopic bundle automorphisms.

Theorem A.1 (i) There is a natural bijection Iso(E) ∼= Ȟ 2 (E, U(1)) ( ∼= H 3 (E, Z)). (ii) For every PU(H)-principal bundle P → E there is a natural isomorphism Aut 0 (P, E)/Null(P, E) ∼= Ȟ 1 (E, U(1)) ( ∼= H 2 (E, Z)).

In particular, if P → E is the trivial bundle P = E × PU(H) we identify Aut 0 (P, E) with the continuous functions C(E, PU(H)). If we keep this in mind, a corollary of the classification theorem of bundle automorphisms is the following statement, which is sometimes called the homotopy commutativity of the projective unitary group.

Corollary A.1 For continuous functions f, g : E → PU(H) the pointwise products f g and g f define the same class in Ȟ 1 (E, U(1)); in particular they are homotopic.

Proof : x → f (x)g(x) and x → g(x) f (x) define the same Čech class.

We do not give a proof of Theorem A.1, but we remark that it depends heavily on the fact that the unitary group U(H) is contractible. For the strong topology on U(H) this is not difficult to prove and may be found in [Di, 10.8]. We outline the proof for convenience. Assume H = L 2 ([0, 1]), and let ϕ t : L 2 (0, t) ∼= L 2 (0, 1) be the isometric isomorphism defined by ϕ t ( f )(x) := √t f (tx), for t > 0. We define H : [0, 1] × U(L 2 (0, 1)) → U(L 2 (0, 1)) by H(t, U) := (ϕ t −1 U ϕ t ) ⊕ ½ on L 2 (0, 1) = L 2 (0, t) ⊕ L 2 (t, 1) for t > 0 and by H(0, U) := ½; then H is a homotopy connecting the identity on U(H) and the constant function with value ½ ∈ U(H), thus H is a contraction.

A.4 Crossed Products

Let (A, G, α) be a C * -dynamical system and let (π, H) be a faithful representation of A. One equips C c (G, A) with the usual convolution product × and involution ∗; applying either operation to elements of C c (G, A) again defines an element of C c (G, A). Thus C c (G, A) ֒→ L(L 2 (G, H)) is a ∗-subalgebra. The crossed product of G and A is then defined as the norm completion G × α A of (C c (G, A), ×, ∗ ) inside L(L 2 (G, H)). (This is in fact well-defined, since the operator norm of f × is independent of the faithful representation (π, H).) For χ ∈ Ĝ, f ∈ C c (G, A) we set α̂ χ ( f )(g) := χ, g f (g), which extends to a strongly continuous action α̂ : Ĝ → Aut(G × α A), so (G × α A, Ĝ, α̂) again defines a C * -dynamical system. Going once more through the process of building the crossed product gives a further C * -dynamical system ( Ĝ × α̂ (G × α A), G, ˆ̂α ), where ˆ̂α denotes the double dual action of G, and a key statement in the analysis of crossed products is the following Takai Duality Theorem (see e.g. [Pe1]).
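In its standard formulation (see [Pe1]) the Takai Duality Theorem states that for any C * -dynamical system (A, G, α) with G abelian there is an isomorphism
\[
\widehat{G}\times_{\widehat{\alpha}}\bigl(G\times_{\alpha}A\bigr)\;\cong\;A\otimes\mathcal{K}\bigl(L^{2}(G)\bigr),
\]
which carries the double dual action of G to α ⊗ Ad(ρ), where ρ denotes the right regular representation of G on L 2 (G).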
2007-12-03T11:12:16.000Z
2007-12-03T00:00:00.000
{ "year": 2007, "sha1": "ac87c130dc85637887e54b114a0c42b8201db45e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ac87c130dc85637887e54b114a0c42b8201db45e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
259937517
pes2o/s2orc
v3-fos-license
Normalized bound state solutions for the fractional Schr\"{o}dinger equation with potential In this paper, we study the following fractional Schr\"{o}dinger equation with prescribed mass \begin{equation*} \left\{ \begin{aligned}&(-\Delta)^{s}u=\lambda u+a(x)|u|^{p-2}u,\quad\text{in $\mathbb{R}^{N}$},\\&\int_{\mathbb{R}^{N}}|u|^{2}dx=c^{2},\quad u\in H^{s}(\mathbb{R}^{N}), \end{aligned} \right. \end{equation*} where $0<s<1$, $N>2s$, $2+\frac{4s}{N}<p<2_{s}^{*}$, $c>0$, $\lambda\in \mathbb{R}$ and $a(x)\in C^{1}(\mathbb{R}^{N},\mathbb{R}^{+})$ is a potential function. By using a minimax principle, we prove the existence of bound state normalized solutions under various conditions on $a(x)$. When we are looking for solutions of problem (1.1), a possible choice is to consider that λ ∈ R is fixed, and to look for critical points of the functional F_λ : H^s(R^N) → R (see e.g. [13,15,34,40]), where H^s(R^N) is a Hilbert space endowed with the standard inner product and norm. Another interesting way is to search for solutions with prescribed mass, that is, (1.2) holds, and λ ∈ R appears as a Lagrange multiplier. This type of solution is called a normalized solution, and can be obtained by looking for critical points of the corresponding constrained functional. From the physical point of view, normalized solutions are particularly meaningful, since the mass is conserved and often has an important physical meaning. For s ∈ (0, 1) and f(u) = |u|^{p−2}u, the associated energy functional of (1.7) is denoted by F_V. From the variational point of view, F_V is bounded from below on S_c for p ∈ (2, 2 + 4s/N) (L²-subcritical), and unbounded from below on S_c for p ∈ (2 + 4s/N, 2*_s) (L²-supercritical). Here 2 + 4s/N is called the L²-critical exponent, which comes from the Gagliardo-Nirenberg inequality [30]. There are many results about problem (1.7); see [28,39,41] for normalized solutions without potential, [31] for normalized solutions with a vanishing potential, [19] for normalized solutions with a trapping potential, [27] for normalized solutions with a ring-shaped potential, and [42] for normalized solutions with a weak form of the steep well potential. Note that all the results mentioned above are concerned with problem (1.7) with autonomous nonlinearities. For the study of normalized solutions to nonautonomous equations, i.e., problem (1.8) below, see [38]. For more related results, see [12,18,37]. Nowadays, the study of problem (1.8) has attracted a lot of interest. However, as far as we know, there are only a few papers dealing with problem (1.8) (even for s = 1) besides the ones already mentioned above [1,10,11,38]. With regard to this point, inspired by [11] and [25], in this work we study problem (1.1). In contrast to [11], where the authors considered a(x) ≥ a_∞ and the existence of ground state solutions, in this paper we focus on the case a(x) ≤ a_∞ and obtain a bound state solution. Remark 1.3. Let us give a brief illustration of our proof. Step 1: Under the conditions on a(x), we prove that F has a linking geometry; Step 2: Borrowing the idea of [23], and using a new minimax principle introduced in [21], we construct a bounded Pohožaev-Palais-Smale sequence; Step 3: By using the splitting Lemma, we prove the convergence. The paper is organized as follows. Section 2 contains some preliminaries. In Section 3, we use a minimax principle to construct a Pohožaev-Palais-Smale sequence. Section 4 is devoted to the proofs of Theorems 1.1 and 1.2.
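Since the L²-critical exponent 2 + 4s/N is attributed above to the Gagliardo–Nirenberg inequality, it may help to record the scaling computation behind it. The inequality below is the standard fractional Gagliardo–Nirenberg estimate in our own transcription; it is not copied from the paper's display, which is not legible above.

```latex
% Standard fractional Gagliardo--Nirenberg inequality, 2 < p < 2N/(N-2s):
\[
  \|u\|_{p}^{p} \;\le\; C_{N,p,s}\,
  \bigl\|(-\Delta)^{s/2} u\bigr\|_{2}^{\frac{N(p-2)}{2s}}\,
  \|u\|_{2}^{\,p-\frac{N(p-2)}{2s}},
  \qquad u \in H^{s}(\mathbb{R}^{N}).
\]
% On S_c = { \|u\|_2 = c } the potential term of the energy is therefore
% controlled by the kinetic term raised to the power N(p-2)/(4s); this power
% is <= 1 exactly when p <= 2 + 4s/N, which is why the functional is bounded
% from below on S_c in the L^2-subcritical regime and unbounded above it.
```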
Throughout the paper, we will use the notation · q := · L q (R N ) , q ∈ (1, ∞), constants possibly different from line to line. Preliminaries By [20], for p ∈ (2 + 4s N , 2 * s ), the following fractional Schrödinger equation: has an unique solution w ∈ H s (R N ), which is radial and radially decreasing. Then for c > 0, one by scaling: where λ c is determined by and H s rad (R N ) is the subset of the radially symmetric functions in H s (R N ). The solution w c of (2.2) is a critical point, in fact a mountain pass critical point of constrained to S c . For any c > 0, setting Now we recall the notion of barycentre of a function u ∈ H s (R N )\{0} which has been introduced in [6,9]. Setting We can observe that ν(u) is bounded and continuous, and then, the function is well defined. Moreover, u is continuous and has a compact support. Therefore, we can define The map β is well defined, since u has a compact support, and one can verify that it satisfies the following properties: (ii) if u is a radial function, then β(u) = 0; (iii) β(tu) = β(u) for any t = 0 and u ∈ H s (R N )\{0}; (iv) letting u z (x) = u(x − z) for any z ∈ R N and u ∈ H s (R N )\{0}, there holds β (u z ) = β(u) + z. If τ n ∈ R such that τ n → 0 and G n ∈ G is a sequence such that Then there exists a sequence {u n } ⊂ M such that for some constant C > 0. To study the behavior of a Palais-Smale sequence, we introduce a splitting Lemma. For λ < 0, let and to the limit equation Proof. The proof of Lemma 2.2 can be found in [14,Lemma 3.1]. The only difference is that [14] deals with exterior domains, not with in the whole space, however the proof is exactly the same with λ < 0 and thus we omit it here. The minimax approach For h ∈ R and u ∈ H s (R N ), we introduce the scaling which preserves the L 2 -norm: h ⋆ u 2 = u 2 for all h ∈ R. For R > 0 and h 1 < 0 < h 2 , which will be determined later, we set For c > 0 we define We want to find a solution of problem (1.1)-(1.2) whose energy is given by In order to develop a min-max argument, we need to prove that at least for some suitable choice of Q. For this purpose, we will prove Lemma 3.3, which gives a lower bound of m a,c , and Lemma 3.4, which gives an upper bound of F • γ on the boundary ∂Q, for any given γ ∈ Γ c . The values of R > 0 and h 1 < 0 < h 2 will be determined in Lemma 3.4. By a similar argument as [28], we obtain In order to prove that l c ≥ m c and m c ≥ l r c , arguing by contradiction we assume that l c < m c . Then max is open and connected, so it is path-connected. Therefore there exists a path σ ∈ Σ c such that max t∈(0,1) F ∞ (σ(t)) < m c , which is a contradiction. On the other hand, letD : Hence, m c ≥ l r c . Then we have Borrowing the idea of [23, Lemma 2.4], we consider the functional constrained to M := S c × R. We apply Lemma 2.1 with Observe thatl In fact, since D × {0} ⊂ G, hence l c ≥l c , and for any G ∈ G, we have D := {h ⋆ u : (u, h) ∈ G} ∈ D and max (u,h)∈GF hence l c ≤l c . Therefore, Lemma 2.1 yields a sequence (u n , h n ) ∈ S c × R such that In particular, let v n := h n ⋆ u n , differentiation shows that{v n } ⊂ S c is a Palais-Smale sequence for F ∞ on S c at level m c satisfying the Pohožaev identity for F ∞ , that is there exist Lagrange multipliers µ n ∈ R such that as n → ∞. Moreover, {v n } is bounded in H s (R N ). 
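Before the boundedness argument is continued below, it may help to record the dilation h ⋆ u and the Pohožaev-type identity it produces for the autonomous functional (with the coefficient of the nonlinearity normalized to 1, as in the limit equation quoted later). The displayed formula for h ⋆ u is not legible above; what follows is the standard choice in this literature, written out by us.

```latex
% Mass-preserving dilation (standard choice; the paper's display is garbled):
\[
  (h \star u)(x) := e^{\frac{Nh}{2}}\, u\!\left(e^{h} x\right),
  \qquad \|h \star u\|_{2} = \|u\|_{2}, \quad h \in \mathbb{R}.
\]
% Along this fiber,
\[
  F_{\infty}(h \star u)
  = \tfrac12\, e^{2sh}\, \bigl\|(-\Delta)^{s/2} u\bigr\|_{2}^{2}
  - \tfrac1p\, e^{\frac{N(p-2)}{2} h}\, \|u\|_{p}^{p},
\]
% and \partial_h F_\infty(h \star u)|_{h=0} = 0 gives the Pohozaev-type
% identity satisfied by the Palais--Smale sequence constructed above:
\[
  s\, \bigl\|(-\Delta)^{s/2} u\bigr\|_{2}^{2}
  \;=\; \frac{N(p-2)}{2p}\, \|u\|_{p}^{p}.
\]
```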
In fact, from the first and third relations above, we can infer that Thus {v n } is bounded in H s (R N ), and the second relation implies that which is equivalent to Therefore, after passing to subsequences, {v n } converges weakly in In order to see this we first observe that, using again the second relation above, for every ϕ ∈ H s (R N ), hence {v n } is also a Palais-Smale sequence for F ∞,µ at the level m c − µ 2 c 2 . As a consequence, Lemma 2.2 implies in H s (R N ), where m ≥ 0 and u j = 0 are solutions to (−∆) s u j = µu j + |u j | p−2 u j and |y j n | → ∞. Moreover, setting γ := v 2 and α j := u j 2 , then there holds and thus at least one of the limit functions must be non-trivial. In addition, we have which yields that Using Appendix in [31] and (2.6), we have that, if v is non-trivial, then which implies that v = 0. In the following, we always assume |h 1 |, h 2 and R large enough but fixed. (3.4) implies that F has a linking geometry and there exists a Palais-Smale sequence of F at level m a,c . The aim of the next parts is to prove that m a,c is a critical value for F . Moreover, for future purposes, we prove the following lemmas. where h 2 is given by Lemma 3.4, then m a,c < 2m c . Proof. This follows from provided that |h 1 |, h 2 are large enough. Now we will construct a bounded Palais-Smale sequence {v n } ⊂ H s (R N ) of F at level m a,c . Adapting the approach of [23], we introduce the following C 1 -functional Lemma 3.7. If {(u n , h n )} is a (P S) c ′ sequence forF and h n → 0, then {(h n ⋆u n )} is a (P S) c ′ sequence for F . Applying Lemma 2.1, we immediately have the following proposition. Then there exist a sequence (u n , h n ) ∈ S c × R and C > 0 such that The last inequality means: Lemma 3.9. There exists a bounded sequence {v n } in S c such that and as n → ∞. Moreover, the sequence of of Lagrange multipliers admits a subsequence λ n → λ with − δ 0 c 2 < λ < 0, (3.17) where Proof. By the definition of m a,c , we choose a sequence g n ∈ Γ c such that max (y,h)∈Q F (g n (y, h)) ≤ m a,c + 1 n . Applying Lemma 3.8, we can prove the existence of a sequence We also note that so that h n → 0 as n → ∞ and there exists ( Observe that, since g n (y n ,h n ) ≥ 0 a.e. in R N , then u − n 2 ≤ u n − g n (y n ,h n ) 2 = o(1) and we can deduce that u − n → 0 a.e. in R N . So v − n 2 → 0 as n → ∞. Moreover, by Lemma 3.7, {v n } is a Palais-Smale sequence for F , that is and (3.20) where H(u) = 1 2 R N |u| 2 dx. Since h n → 0 as n → ∞ and This together with (A 2 ), (A 3 ) and (A 4 ) yields that x · ∇a(x)|v n | p dx. Then from (3.21), (3.22), (3.23) and the definition of λ n , up to a subsequence, we can assume that Passing to the limit in (3.15), (3.19) and (3.20), then Now, we claim that λ < 0. In fact, from the above three equalities, we have where we have used Lemmas 3.2 and 3.3, from 2 + 4s N < p < 2 * s , we have λ < 0. On the other hand, Thus, we have (3.17) holds. Moreover, recalling the definition of θ, we obtain It remains to prove that v 2 = c. Since for every ϕ ∈ H s (R N ), then {v n } is a Palais-Smale sequence for F λ at level m a,c − λ 2 c 2 . Therefore, by Lemma 2.2, we have with u j being solutions to (−∆) s u j = λu j + |u j | p−2 u j and |y j n | → ∞. We note that, if k = 0, then v n → v strongly in H s (R N ), hence v 2 = c and we are done, thus we can assume that k ≥ 1, or equivalently ρ := v 2 < c. First, we exclude the case v = 0. 
In fact, if v = 0 and k = 1, we would have u_1 > 0 and ‖u_1‖_2 = c, so that (2.11) gives m_{a,c} = m_c, which is not possible due to Lemma 3.3. On the other hand, if k ≥ 2, Lemma 2.2 provides the corresponding energy splitting. Let α_j^2 = ‖u_j‖_2^2; since F_∞(u_j) ≥ m_{α_j} and, by (4.1), m_α > m_β if α < β, we have 2m_c ≤ m_{a,c}, which contradicts Lemma 3.5. Therefore, from now on we assume that v ≠ 0. From F(v_n) → m_{a,c}, we deduce the splitting of the energy. Using F_∞(u_j) ≥ m_{α_j} and (4.1), where α = max_j α_j, and using the equation for v, it is easy to check the corresponding lower bound. Let β > 0 be such that λ_β = λ, according to (2.3). By the Appendix in [31], w_β satisfies the limit equation with multiplier λ and β ≤ α. Since m_β ≥ m_α > m_c and λ < 0, we reach a contradiction, so that necessarily ‖v‖_2 = c. This ends the proof.
2023-07-18T01:01:16.598Z
2023-07-16T00:00:00.000
{ "year": 2023, "sha1": "0f7c76dbc4595cdf8717ce16333d7ed0bb0d706e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0f7c76dbc4595cdf8717ce16333d7ed0bb0d706e", "s2fieldsofstudy": [ "Physics", "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
260797642
pes2o/s2orc
v3-fos-license
SsCak1 Regulates Growth and Pathogenicity in Sclerotinia sclerotiorum Sclerotinia sclerotiorum is a devastating fungal pathogen that causes severe crop losses worldwide. It is of vital importance to understand its pathogenic mechanism for disease control. Through a forward genetic screen combined with next-generation sequencing, a putative protein kinase, SsCak1, was found to be involved in the growth and pathogenicity of S. sclerotiorum. Knockout and complementation experiments confirmed that deletions in SsCak1 caused defects in mycelium and sclerotia development, as well as appressoria formation and host penetration, leading to complete loss of virulence. These findings suggest that SsCak1 is essential for the growth, development, and pathogenicity of S. sclerotiorum. Therefore, SsCak1 could serve as a potential target for the control of S. sclerotiorum infection through host-induced gene silencing (HIGS), which could increase crop resistance to the pathogen. Introduction Sclerotinia sclerotiorum is a phytopathogenic fungus which has a wide host range and can infect more than 600 species of plants, including rape, potato, cotton, tomato, soybean, and other important crops [1][2][3].S. sclerotiorum causes stem rot, resulting in the death of host tissues.Its virulence involves the release of toxins (such as oxalic acid (OA) and cell wall-degrading enzymes (CWDEs) for infection initiation, followed by the extraction of nutrients from host cells [4].As a necrotrophic pathogen, S. sclerotiorum, however, has a brief biotrophic phase that begins approximately 12-24 h after infection [3,5].During this stage, S. sclerotiorum establishes compatibility with the host by suppressing or disrupting its defense barriers [6].Subsequently, the subcutaneous hyphae of S. sclerotiorum spread to multiple cell layers.With successful colonization by branched hyphae, S. sclerotiorum enters a necrotrophic phase, producing large amounts of reactive oxygen species, toxins, and CWDEs, leading to the development of host cell death and necrotic symptoms [5,7,8]. The complex appressorium is a key infection structure for establishing infection.It is formed by hyphae that undergo swelling, slow growth, and continuous branching.Previous studies have shown that these cells can enhance the pathogen's adhesion to the host surface [9] and help penetrate the host epidermal barrier through mechanical pressure or enzymatic degradation [10].The hyphae forming complex appressoria often become flattened and increase in diameter, from which narrow penetration pegs are formed to complete penetration [9,11].Subsequently, subcutaneous infection hyphae are produced and differentiated.These hyphae grow horizontally beneath the epidermis, forming the colonization leading edge [3]. To date, host-induced gene silencing (HIGS) has become a promising way to control fungal diseases, including sclerotinia stem rot.Andrade et al. [12] demonstrated for the first time that the HIGS-mediated chitin synthase gene (CHS) enhanced T1 generation resistance to S. sclerotiorum in tobacco.In subsequent independent studies, by selecting Sscnd1, Ssoah1, ABHYRDOLASE-3 and SsTrxR1 as the target genes of HIGS, the resistance of plants to S. sclerotiorum was successfully improved [6,[13][14][15].Therefore, conducting in-depth research on the pathogenic mechanisms of S. 
sclerotiorum and screening of key genes, along with enhancing the host's resistance to the pathogen through cross-kingdom RNA silencing, can potentially lead to effective control of sclerotinia stem rot (SSR). Recently, Xu et al. developed a method combining forward genetic screening with high-throughput next-generation sequencing for the rapid discovery of new genes involved in sclerotia development. Some of these genes can be used as HIGS targets for disease control [16,17]. Here, a genetic screen for virulence-related genes was performed in a mutagenized population of S. sclerotiorum to isolate a mutant strain with defects in growth, sclerotia development and pathogenicity. Then a candidate gene, SsCak1, was identified by NGS. Knockout and complementation experiments confirmed its involvement in the development of S. sclerotiorum mycelium, sclerotia, complex appressoria formation, and pathogenicity, making it a potential target for HIGS to enhance crop resistance against S. sclerotiorum. Identification of a Pathogenicity-Attenuated Mutant in S. sclerotiorum through Forward Genetic Screening To study the pathogenic mechanism of S. sclerotiorum in depth, we carried out a forward genetics screen on a UV-mutagenized population of S. sclerotiorum [18]. Using leaves of lettuce (Figure S1), we aimed to identify mutants with virulence defects. Here we report on a mutant strain M14-9. On tobacco leaves, wild-type (WT) S. sclerotiorum caused severe leaf maceration 48 h after inoculation, whereas M14-9 did not (Figure 1A,B). Moreover, the M14-9 mutant exhibited markedly impaired growth (Figure 1C). The growth rate of M14-9 was much lower than that of WT (Figure 1D), and a large number of aerial hyphae were produced at the early stage of colony formation. In addition, the hyphae of M14-9 appeared dense and short under the microscope (Figure 1E). M14-9 is deficient in sclerotia formation, with fewer sclerotia per dish and lower individual sclerotia weight than WT (Figure 1F,G). Thus, M14-9 shows dual defects in pathogenicity and growth. Sscle_11g085070 Is the Candidate Mutated Gene of M14-9 To determine the causal mutation responsible for the phenotype of M14-9, the whole genome of M14-9 was re-sequenced and analyzed by NGS. Gene sequences of two other mutants, 49-23 and A14-11, which were identified in the same screen and sequenced, were used as negative controls to exclude background mutations in M14-9. The NGS data were then analyzed with a modified NGS sequence analysis pipeline for S. sclerotiorum [19]. After removing background mutations, synonymous mutations, intron mutations, and intergenic mutations, there were three remaining SNP mutations that became the main candidate mutations for subsequent analysis (Figure S2A).
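The candidate-mutation filtering just described, subtracting background mutations shared with the unrelated mutants 49-23 and A14-11 and then discarding synonymous, intronic and intergenic variants, can be sketched in a few lines. The file names, column names and effect labels below are hypothetical placeholders; the actual pipeline used SAMtools/GATK output as described in the Methods.

```python
# Minimal sketch of the candidate-SNP filtering logic described in the text.
# Input format, file names and annotation labels are hypothetical placeholders.
import csv

def load_variants(path):
    """Read variants as a set of (chrom, pos, ref, alt, effect) records."""
    with open(path, newline="") as fh:
        return {tuple(row[k] for k in ("chrom", "pos", "ref", "alt", "effect"))
                for row in csv.DictReader(fh, delimiter="\t")}

def candidate_mutations(mutant_tsv, control_tsvs):
    mutant = load_variants(mutant_tsv)
    # Background mutations: anything also present in unrelated mutants
    # from the same mutagenized population.
    background = set().union(*(load_variants(p) for p in control_tsvs))
    private = mutant - background
    # Keep only variants expected to change the encoded protein.
    discard = {"synonymous", "intron", "intergenic"}
    return [v for v in private if v[4] not in discard]

if __name__ == "__main__":
    hits = candidate_mutations("M14-9.variants.tsv",
                               ["49-23.variants.tsv", "A14-11.variants.tsv"])
    for chrom, pos, ref, alt, effect in hits:
        print(chrom, pos, f"{ref}>{alt}", effect)
```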
To further narrow down the candidates, protein sequences encoded by these genes were analyzed to examine the functions of their orthologs in closely related species. Literature research revealed that the ortholog of sscle_11g085070 in Fusarium graminearum, FgCak1, is involved in hyphal development and pathogenicity [20]. Therefore, SsCak1 was initially tested in S. sclerotiorum. Sanger sequencing of SsCak1 using DNA from M14-9 confirmed a single nucleotide mutation in the second exon of the gene, resulting in a premature stop codon, consistent with the NGS sequencing results (Figure S2B,C). Furthermore, phylogenetic analysis revealed that the homologous proteins of SsCak1 (protein accession number APA13737.1/XP_001590945.1) are relatively conserved in fungi, and widely present in Botrytis cinerea and other pathogenic fungi (Figure 2A). All Cak1s have two conserved eukaryotic protein kinase domains, VIB and VIII (Figure 2B). According to the traditional classification method, SsCak1 belongs to the CMGG group of eukaryotic protein kinases. To gain a preliminary insight into the role of SsCak1 in fungal development, RT-qPCR analysis was conducted to determine the abundance of SsCak1 mRNA in different growth stages of S. sclerotiorum. As shown in Figure 2C, SsCak1 was expressed constitutively in different developmental stages. However, the highest expression was observed in the mature sclerotium stage (7 dpi) (Figure 2C). When inoculated onto leaves of N. benthamiana, the expression of SsCak1 was significantly upregulated from nine hpi to twenty-four hpi (Figure 2D), indicating that SsCak1 was strongly induced during S. sclerotiorum infection. These results suggest that SsCak1 may play an important role in the growth, development, and pathogenicity of S. sclerotiorum, and that the mutated phenotype of M14-9 may be caused by the truncation of SsCak1.
SsCak1 Deletion Impairs Mycelial Growth and Sclerotia Development To test the gene function of SsCak1, we generated a knockout mutant Sscak1 in the background of the WT strain by homologous recombination, and obtained two independent complementation strains, Sscak1-C1 and Sscak1-C5, by introducing a WT copy of SsCak1 into the knockout mutant (Figures S3A and 3A). PCR and RT-qPCR showed that SsCak1 was completely deleted in the knockout mutant, and its transcript was absent, while the two complementation strains restored the transcript levels (Figures S3B and 3D). The mycelial morphology of Sscak1 was largely similar to that of M14-9, showing dense mycelial branching and slower growth rate (Figure 3B,E), while the complementation strains Sscak1-C1 and Sscak1-C5 regained the WT phenotypes. After 15 days of growth on PDA medium, the SsCak1 knockout mutant exhibited abnormal sclerotia development on the colony surface (Figure 3C). Although Sscak1 successfully formed sclerotia, the number of sclerotia produced per plate was significantly reduced (Figure S4A), and the average mass of each sclerotium was also lighter than that of WT (Figure S4B), indicating that loss of SsCak1 impairs growth of mycelium and sclerotia development in S. sclerotiorum. Knockout of SsCak1 Results in Complete Loss of Pathogenicity in S. sclerotiorum For pathogenicity tests, the WT, mutant M14-9, Sscak1, and complementation strains Sscak1-C1 and Sscak1-C5 were inoculated onto intact and wounded leaves of A. thaliana (Figure 4A) and N. benthamiana (Figure 4B). After 48 h, on intact leaves, neither M14-9 nor Sscak1 was able to infect, whereas the complementation strains caused large areas of infection similar to that of WT (Figure 4C,D). On wounded leaves, M14-9 and Sscak1 caused mild infection damage compared to the WT and complementation strains (Figure 4C,D). These results confirmed that SsCak1 is essential for the pathogenicity of S. sclerotiorum, which is likely to involve both pre- and post-penetration regulations.
SsCak1 Is Essential for Appressorium Formation and Penetration To further investigate the details of the pathogenicity defects in Sscak1 mutants, their oxalate production, appressoria development, and penetration ability were examined. First, fresh mycelium blocks were inoculated on the PDA medium containing bromophenol blue as a pH indicator. After 24 h, all areas of the mycelium inoculated, including WT, M14-9, Sscak1, Sscak1-C1, and Sscak1-C5, had turned from blue to yellow (Figure S5), suggesting that SsCak1 does not affect oxalate production. However, M14-9 and Sscak1 failed to produce appressorium on glass slides, while the WT strain formed mature appressorium cell structures upon contact with the slide (Figure 5A,B). Onion epidermal penetration assays further revealed that the WT strain formed numerous appressoria and invasive hyphae, whereas M14-9 and Sscak1 failed to produce them, and the hyphae were unable to penetrate the onion epidermis (Figure 5C). Therefore, SsCak1 is an important factor in appressoria development, and the defect in appressorium development and permeability is likely to be the main reason for the loss of pathogenicity in Sscak1. HIGS of SsCak1 in N. benthamiana Enhances Resistance to S. sclerotiorum Conserved and functionally important genes can sometimes be used as target genes for the control of pathogenic microorganisms using HIGS. Since sequence alignment and phylogenetic tree analysis showed that there was no gene with a similar sequence to SsCak1 in plants, SsCak1 may be a potential target for stem rot control using HIGS. Thus, tobacco rattle virus (TRV)-mediated transient silencing of SsCak1 was performed in N. benthamiana. Here, three Agrobacterium constructs, including a positive control construct pTRV2:PDS, a negative control construct pTRV2:GFP and an experimental group pTRV2:SsCak1 construct, were agro-infiltrated into N. benthamiana leaves. After seven days, the top leaves of N. benthamiana showed chlorosis with pTRV2:PDS infiltration, which gradually expanded to the entire plant (Figure 6A). At the same time, the expression of the target genes can be quantified by semi-quantitative PCR (Figure 6B), demonstrating the feasibility of the gene silencing system. After 14 days of TRV treatment, plants were inoculated with the WT strain of S.
sclerotiorum (Figure 6C). Compared with that in the control leaves (TRV:GFP), the lesion area on TRV:SsCak1 was reduced by 62% at 24 hpi (Figure 6D). Meanwhile, expression of SsCak1 in TRV:SsCak1-treated leaves was also reduced to 60% of control leaves (TRV:GFP) (Figure 6E). These results suggest that silencing of SsCak1 by TRV-HIGS does reduce virulence and enhances host resistance against S. sclerotiorum.
sclerotiorum ePKs CMGG group is widely produced during mycelium development, virulence formation, sclerotia development and host penetration.SMK1, another ePKs CMGG group by RNA-silencing, showed impaired sclerotia formation in S. sclerotiorum, while in other pathogens, inactivation of the SMK1 homologous gene resulted in loss of pathogenicity due to the inability to form Discussion S. sclerotiorum has garnered significant attention from researchers due to its substantial impact on Brassica napus and other economically important crops [3,21].However, prior investigations into the growth, development, and virulence of S. sclerotiorum have typically relied on homologous proteins from extensively studied pathogens to unravel pathogenicity-associated elements within the organism.Consequently, this approach has inherent limitations, impeding the pace of research progress on S. sclerotiorum.In this study, successful identification of a putative CDK-activating kinase was achieved by employing a combination of forward genetic screening technology and NGS sequencing.And found that it is necessary for the growth and virulence of S. sclerotiorum. Reversible phosphorylation is a major mechanism by which environmental and biochemical stimuli affect protein function and gene expression.Most eukaryotic protein kinases (ePKs) phosphorylate serine, threonine, or tyrosine and have a highly conserved catalytic domain consisting of 12 subdomains that make up the ATP-binding lobe (subdomains I-V) and peptide binding and phosphotransfer lobes (subdomains VI-XI).Subdomains VIB and VIII are involved in peptide substrate recognition, and conserved amino acids within them are used to classify ePKs into functional groups [22][23][24].According to the multiple sequence alignment of homologues, Cak1 is relatively conserved in fungi and has a conserved eukaryotic kinase domain, which can be classified into CMGG groups according to their subdomains VIB and VIII.S. sclerotiorum ePKs CMGG group is widely produced during mycelium development, virulence formation, sclerotia development and host penetration.SMK1, another ePKs CMGG group by RNA-silencing, showed impaired sclerotia formation in S. sclerotiorum, while in other pathogens, inactivation of the SMK1 homologous gene resulted in loss of pathogenicity due to the inability to form appressorium [25].Disruption of the SMK3, an important ePKs of the CMGG group in S. sclerotiorum, results in an inability to aggregate and form appressorium leading to a severe reduction in virulence.The mutation also results in the loss of ability to produce sclerotia, aerial bacteria, increased filaments and altered mycelial hydrophobicity [26].In this study, Ss-cak1 was observed to exhibit loss of virulence and growth defects, akin to the phenotype observed in cell cycle kinase-associated mutants within the CMGG group.As a result, we hypothesize that SsCak1 might play a role in fungal growth and cell cycle regulation within S. sclerotiorum.However, further verification through subsequent experiments is necessary. 
As a cell cycle kinase, CDK-activating kinase (Cak) primarily exerts its activation mechanism through phosphorylation of a conserved threonine residue located in the T-loop region of CDK [27,28].In the realm of yeasts, Saccharomyces cerevisiae harbors a vital Cak gene, denoted as Cak1, while Schizosaccharomyces pombe employs two partially redundant Cak systems, namely the MCS6-MCS2 complex and Csk1, to facilitate the activation of Cdc2 during cellular division [28][29][30].Intriguingly, within S. sclerotiorum, the elimination of SsCak1 did not yield lethal consequences, as the resulting SsCak1 knockout mutants exhibited comparable deficiencies in growth and infection levels to the previously reported SsCDC28 mutants [31].Consequently, this observation proposes that Cak1 might actively engage in the cell cycle progression of S. sclerotiorum by effectuating the phosphorylation of CDC28.Nonetheless, the veracity of this proposition necessitates further validation through subsequent experimental investigations.The association of cell cycle-regulated kinases with appressorium formation has been demonstrated in many important pathogenic fungi [32,33].In some cases, specific cell cycle phases must be completed prior to attachment formation, as has been described in M. oryzae, where completion of the S and M phases is mandatory for attachment formation and function [33].In other cases, however, the cell cycle must be stopped at a specific cell cycle stage to allow appressorium to form.In U. maydis, infectious filaments (where the attachment will differentiate) must stop in the G2 phase [34].In M. oryzae, the lack of cyclin-dependent kinase CDC14 hinders appressorium generation [35].Considering that the main role of the appressorium is to facilitate the penetration of the fungal hyphae within its host to proliferate after invading plant tissue, the formation of appressorium is subordinate to the regulation of the cell cycle.This affiliation seems reasonable to ensure that normal genetic information is loaded into the invading hyphae during this process.As expected, appressoria were completely absent in Sscak1 mutants, and when inoculated on wounded A. thaliana and N. benthamiana leaves (with a dissecting needle), SsCak1 deletion mutants produced lesion spots, but the area was much smaller than that of the WT strain.Therefore we hypothesize that the loss of SsCak1 results in disordered cell cycle regulation, thereby affecting the formation of appressorium and leading to the complete loss of pathogenicity in S. sclerotiorum.Nevertheless, its function in other aspects of pathogenicity besides facilitating penetration cannot be excluded. At present, the prevention and control of sclerotinia are almost entirely dependent on chemical fungicides, and the perennial use of medicaments has made sclerotinia produce obvious drug resistance [36].Thus, the breeding of disease-resistant varieties becomes more and more important for environmentally friendly control strategies.However, strong host monogenic resistance to S. sclerotiorum has not yet been found [2].RNAi-based approach HIGS offers a flexible and environmentally friendly solution for crop protection [37][38][39].Hence, HIGS technology provides a species-specific and environmentally safe method for the control of S. sclerotiorum.Here, SsCak1 was chosen as the target gene, and the RNAi construct of SsCak1 was transiently expressed in N. 
benthamiana using HIGS.The results indicated that the silencing of SsCak1 through HIGS substantially elevated the plants' resistance to S. sclerotiorum.Nevertheless, a limitation of RNAi technology lies in potential off-target effects [40].However, the expression of SsCak1 decreased by 60% in the WT strain after infection, which further verified the correctness of HIGS.Therefore, in the genome of S. sclerotiorum, genes similar to SsCak1 that are conserved in pathogens without homology in host plants are expected to be target genes of HIGS. In summary, a putative SsCak1 (sscle_11g085070) was identified by a combination of forward genetic screening and NGS, and demonstrated that it was persistently expressed during hyphal development and highly expressed during S. sclerotiorum infection.A knockout strain of SsCak1 was subsequently generated by split-tagging and then transformed with a WT copy to generate two complementary strains.Deletion of SsCak1 led to defects in mycelium development, sclerotia development, appressorium formation, and penetration, resulting in complete loss of virulence, suggesting that SsCak1 is essential for both growth and pathogenicity regulation, and thus can serve as a potential target for enhancing crop resistance to S. sclerotiorum through HIGS. Fungal Strains, Plants and Culture Conditions The WT S. sclerotiorum 1980 was cultured and maintained on potato dextrose agar (PDA).The deletion mutant strain and the complementary strain were grown on PDA containing 150 µg/mL hygromycin B (Roche) in a 20 • C culture room, as previously described [41].WT A. thaliana (Col-0), N. benthamiana for virulence test were grown in a growth room at 22 • C with a 16 h light/8 h dark cycle. Inoculation and Virulence Assessment For inoculation of S. sclerotiorum with mycelial suspension, 6 fresh mycelium agar plugs with a diameter of 5 mm were put into a 250 mL flask containing 150 mL PDB and incubated at 22 • C and 150 rpm for 24 h.The resulting mycelial spheres were collected by filtering the medium with filter paper and washed 3 times with ddH 2 O and PDB, respectively.The mycelium balls were ground on ice to homogenize them.The resulting liquid mycelial suspension was then adjusted to OD 600 = 1.0 with PDB solution, and inoculation was referred to previous standard techniques [42,43].The expanded detached or un-detached leaves were inoculated with actively growing mycelial agar plugs (1 mm diameter for A. thaliana leaves and 6 mm diameter for N. benthamiana leaves).Ten leaves each of A. thaliana and N. benthamiana were inoculated, in each replication.The experiments were replicated at least three times.The inoculated leaves were incubated at 22 • C with 95-100% relative humidity.The lesion size was photographed at 24 hpi. Screening for Mutants Defective in Virulence Ascospores were collected from the ascus of the WT strain S. sclerotiorum 1980.Ascospores were subjected to UV mutagenesis, as previously described [18].Screening of mutagenized populations used lettuce leaves and multiple replicates to confirm putative mutants. 
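The virulence assessments above compare lesion sizes across strains over at least three replicated inoculations, and the figure legends later report significance by one-way ANOVA (p < 0.01). A minimal sketch of such a comparison is given below; the measurements are invented placeholders, not data from the paper.

```python
# One-way ANOVA on lesion areas across strains, as referenced in the figure
# captions. The numbers are invented placeholders, not measurements from the paper.
from scipy.stats import f_oneway

lesion_area_cm2 = {
    "WT":        [3.1, 2.9, 3.4, 3.0],
    "M14-9":     [0.2, 0.1, 0.3, 0.2],
    "Sscak1-C1": [2.8, 3.2, 3.0, 2.9],
}

f_stat, p_value = f_oneway(*lesion_area_cm2.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
if p_value < 0.01:
    print("Differences between strains are significant at p < 0.01 (**).")
```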
Genomic DNA Extraction and NGS The agar block with a diameter of 5 mm was selected and fresh hyphae were inoculated on PDA covered with cellophane.After 2 days, the mycelium was scraped off with a sterile tip, quick-frozen in liquid nitrogen, and ground into powder.and then genomic DNA was extracted using the cetyltrime-thylammonium bromide method [44].The crude extract was further purified for NGS with a commercial service (Novogene, Bioinformatics Technology Co., Ltd., Beijing, China).DNA degradation and contamination were evaluated on a 1% agarose gel.The paired-end library was built by Novogene, using Illumina sequencer NovaSeq 6000, and clean reads were aligned to the S. sclerotiorum genome (ASM185786v1). Candidate Genes Identification The sequence reads from NGS were mapped to the reference genome of WT strain S. sclerotiorum 1980.Mutations were identified by SAMtools with default parameters [45].Annotation of the mutations was performed with germline short variant discovery (SNPs + INDELs) based on GATK best practices [46].False mutations in repetitive sequences were manually removed as previously described [19]. Target Gene Knockout and Transgene Complementation Using the genomic DNA of WT S. sclerotiorum as a template, the flanking sequences of 1032 bp upstream and 998 bp downstream of SsCak1 were amplified by PCR and fused with the left and right parts of the hygromycin expression cassette to obtain the split marker fragments by overlapping PCR.The clone primer sequences are shown in Table S1.The resulting split marker fragments were transformed into WT S. sclerotiorum by PEG-mediated protoplast transformation, as previously described [47].Transformants were selected three times on PDA medium containing 150 mg/L hygromycin.And the transformants were purified by transferring at least three times with a mycelial tip.The deletion of the SsCak1 gene was verified by amplifying the sequence of SsCak1 from the cDNA of transformants. For the genetic complementation of the SsCak1 deletion mutant, a genomic region containing the full-length fragment of SsCak1, including upstream and downstream of the coding sequence, was cloned from WT S. sclerotiorum gDNA.This fragment was then cloned into the modified pCH-NEO1 vector [48].Complementary transformants were selected on PDA medium containing 100 µg/mL G418 and then verified by PCR. RNA Extraction and cDNA Synthesis Total RNA from S. sclerotiorum or N. benthamiana leaves with S. sclerotiorum inoculated was extracted using the EastepTM Super Total RNA Extraction Kit (Promega, Madison, WI, USA).According to the manufacturer's instructions, first-strand cDNA was synthesized by the GoScript™ Reverse Transcription System Kit (Promega, Madison, WI, USA). RT-qPCR Real-time PCR was performed on a StepOneTM Real-time PCR Instrument Thermal Cycling Block using SYBR ® Green Premix Pro Taq HS qPCR Kit II (AG11702, Accurate Biotechnology (Hunan) Co., Ltd., Changsha, China).Use the following PCR program: 40 cycles of 2 min at 94 • C, 15 s at 94 • C and 1 min at 58 • C. The internal reference gene for S. sclerotiorum is Tubulin1.Relative gene expression levels were analyzed using the 2 −∆∆CT method [49]. 4.9.Compound Appressoria Observation S. sclerotiorum mycelium plugs of 5 mm were placed on glass slides and cultured for 16 h to observe the formation and number of appressorium.After 16 h of inoculation with S. 
sclerotiorum, onion epidermises were soaked in a 0.5% trypan blue solution for 30 min and then destained using a bleach solution (ethanol:acetic acid:glycerol = 3:1:1). Samples were examined and photographed under an optical microscope (Axio Imager 2, ZEISS, Oberkochen, Germany). Construction of TRV-HIGS Vectors and Agro-Infiltration in N. benthamiana Short cDNA fragments of SsCak1 in S. sclerotiorum were amplified by PCR using gene-specific primers with EcoRI and BamHI linkers. The resultant PCR product was digested with EcoRI and BamHI and cloned into a linearized pTRV2 vector. A TRV2-based construct harboring GFP was used as a negative control of VIGS and HIGS [50,51]. A TRV2-based construct harboring phytoene desaturase (PDS) from N. benthamiana (pTRV2:PDS) was used as a positive control for VIGS efficiency (Senthil-Kumar and Mysore, 2014). Finally, pTRV1 and these pTRV2-based constructs were separately transformed into Agrobacterium tumefaciens GV3101 by electroporation. The mixed Agrobacterium solution was then infiltrated into the first and second leaves of 2-week-old N. benthamiana using a needle-free syringe as previously described [36]. N. benthamiana plants were then grown in a growth chamber for at least 14 days. Figure 1. Pathogenicity and mycelial growth phenotypes of M14-9 mutant. (A) Necrotic areas caused by wild type (WT) and M14-9 on tobacco leaves at 48 hpi; the experiment was repeated at least three times. Bar = 1 cm. (B) Quantification of lesion size of WT and M14-9 on tobacco leaves. ImageJ was used to quantify the lesion size. ** p < 0.01, one-way ANOVA test. (C) Mycelial morphology of WT and M14-9 strains after 2 d or 14 d on potato dextrose agar (PDA) media. (D) Colony diameter of M14-9 and WT strains cultured on PDA plates. (E) Morphology of mycelium under light microscope of M14-9 and WT strains. Bar = 200 µm. (F,G) Sclerotia number (F) or weight (G) per plate of M14-9 and WT strains. ** p < 0.01, one-way ANOVA test. Figure 2. Sequence and expression analysis of the candidate gene in M14-9. (A) Phylogenetic analysis of S. sclerotiorum sscle_11g085070 and other homologous Cak1 from Botrytis cinerea, Monilinia laxa, Rutstroemla sp. NJR-2017a WRK4, Lachnellula suecica, Hyaloscypha variabilis F, Cadophora malorum, Fusarium graminearum and Saccharomyces cerevisiae. Phylogenetic analysis was performed using MEGA 6.0 software with the maximum likelihood method. SsCak1 is marked. (B) Multiple alignments of sscle_11g085070 with homologous sequences of B. cinerea, M. laxa, R. sp. NJR-2017a WRK4, L. suecica, H. variabilis F, C. malorum, F. graminearum and S. cerevisiae; (C) Quantitative real-time reverse transcription-polymerase chain reaction (RT-qPCR) analysis of SsCak1 expression during different developmental stages of S.
sclerotiorum grown on potato dextrose agar (PDA) plates. The quantity of tubulin1 (Sstub1) was used to normalize the expression levels of SsCak1 in different samples. Error bars represent ±SD (n = 3). ** p < 0.01, one-way ANOVA test. (D) Expression analysis of SsCak1 in S. sclerotiorum after being inoculated on N. benthamiana leaves. The quantity of tubulin1 (Sstub1) was used to normalize the expression levels of SsCak1 in different samples. Error bars represent ±SD (n = 3). ** p < 0.01, one-way ANOVA test. Figure 4. Deletion of SsCak1 leads to loss of pathogenicity in S. sclerotiorum. (A,B) Lesion area caused by each strain on intact and injured A. thaliana (A) and N. benthamiana (B) leaves. The experiment was repeated at least three times. Bar = 1 cm. (C,D) Statistical analysis of the lesion area in the panels above ((C), A. thaliana; (D), N. benthamiana). Error bars represent ±SD (n = 3). ** p < 0.01, one-way ANOVA test.
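The RT-qPCR panels above report SsCak1 expression normalized to Sstub1, and the Methods state that relative expression was computed with the 2^−ΔΔCT method of Livak and Schmittgen. A minimal sketch of that calculation follows; the Ct values are invented for illustration and are not data from the paper.

```python
# Minimal 2^-ΔΔCt calculation; Ct values below are invented example numbers.
from statistics import mean

def ddct_fold_change(target_ct, reference_ct, target_ct_cal, reference_ct_cal):
    """Fold change of a target gene (e.g. SsCak1) normalized to a reference
    gene (e.g. Sstub1) and to a calibrator sample (e.g. an early time point)."""
    dct_sample = mean(target_ct) - mean(reference_ct)              # ΔCt, sample
    dct_calibrator = mean(target_ct_cal) - mean(reference_ct_cal)  # ΔCt, calibrator
    ddct = dct_sample - dct_calibrator                              # ΔΔCt
    return 2.0 ** (-ddct)

# Example with three technical replicates per reaction (hypothetical values).
print(ddct_fold_change(
    target_ct=[22.1, 22.3, 22.0], reference_ct=[18.4, 18.5, 18.3],
    target_ct_cal=[24.9, 25.1, 25.0], reference_ct_cal=[18.6, 18.4, 18.5]))
```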
2023-08-11T15:15:28.802Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "c2c266b340b1d262e10385f74f71573a30ab541b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/16/12610/pdf?version=1691651337", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3f032a7f3d302cd27fab511d874f59556a526dd3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
79831450
pes2o/s2orc
v3-fos-license
Lactational Changes Masquerading Features of Malignancy in Breast: A Case Report Introduction: Pregnancy associated breast cancer is defined as “The diagnosis of breast cancer is made during pregnancy or within one year afterward.” The incidences of breast carcinoma are 1 in 3000 pregnancy in west. Risk is increasing because of increased number of late pregnancies. Young age, lactating breast (dense) and paucity of incidences not only delay but confuse the pathologist for accurate diagnosis. Case Report: A 28 year lactating female with 2 months old baby, presented with breast lump. Lump was mobile, non-tender and of size 2x1 cm since 3 months and gradually increasing in size. On USG, it was benign. On FNAC, aspirate was milky mixed with blood, showing monotonous cells with abundant cytoplasm, bland nuclei in a secretary background. Final diagnosis was lactational changes in breast. After six months, lady came back with previous lump which was increased in size up to 6X5 cm. On FNAC, smear was having bizarre cells, reported as malignant and confirmed on histopathology. Patient was responded to chemotherapy and lump reduced in size. Conclusion: Pregnancy associated breast cancer is on rise. High grade of suspicion and complete evaluation including trucut biopsy will lead to correct diagnosis. Introduction Pregnancy associated breast cancer is defined as "The diagnosis of breast cancer is made during pregnancy or within one year afterward." Risk is increasing because of increased number of late pregnancies. 1 Young age, lactating breast (dense) and paucity of incidences not only delay but confuse the pathologist for accurate diagnosis. Moreover, cytologically lactating adenoma mimics infiltrating duct carcinoma.The diagnostic and therapeutic implications in this clinical setting are special as physicians have to balance between aggressive maternal care and fetal protection. Case Report A 28 year old lactating female with 2 months of old baby, presented in the surgical OPD with complaints of breast lump since 3 months. Lump was mobile, non-tender and of size 2X1 cm and was gradually increasing in size. On USG, it was benign. FNAC was performed using the 24 gauge needle attached with 10 ml syringe. Multiple smears were made. They were air dried and fixed in methanol. Smears were stained with May Grunnwald Giemsa stain. Aspirate was milky, mixed with blood in nature. Smear revealed epithelial cells arranged in clusters and singly scattered with abundant cytoplasm in a secretory dirty background. Numerous naked cells were also seen. The nuclei are enlarged with prominent nucleoli but regular nuclear contours and bland chromatin. A scarce number of bipolar naked nuclei can also be found in these smears. Final diagnosis of lactational nodule was rendered on aspiration cytology. However Trucut biopsy was advised for confirmation and to rule out any underlying malignancy, but lost to follow up. After six months, she came back with previous lump which was increased in size up to 6X5 cm. FNAC was repeated. Smears showed singly scattered epithelial cells having pleomorphic hyperchromatic nuclei, irregular contour and moderate cytoplasm. Bipolar nuclei were not evident. On the basis of cell morphology, it was reported as breast carcinoma. Trucut biopsy was done and the diagnosis was confirmed Patient was referred to oncology department for further treatment and management. Discussion Gestational breast cancer is second most common pregnancy associated malignancies, after cervical cancer. 
This incidence is increasing due to delay in childbearing until later in life. Various studies have shown that women who have their term pregnancy after the age of 30 years have a two to three times higher risk of developing breast carcinoma than women who have their first pregnancy before the age of 20 years. of early cancers (when matched for age and stage), but a poorer prognosis is demonstrated for patients with more advanced disease. Moreover estrogen and progesterone receptor status negativity is consistently greater in these patients. This is due to the fact that high circulating levels ofestrogen and progesterone in pregnancy occupy all of the hormone receptor binding sites and receptor are downregulated. 5,6,7 Unfortunately most of the patients present with advanced disease. This is due to lack of awareness and difficulty in diagnosis. Physiologic changes during pregnancy changes the architecture of the breast. The breast enlarges due to proliferation of ducts and lobules which alter the breast structure, resulting in enlargement, firmness, and increased nodularity. This may also result in diagnostic delay in diagnosis. Lactational changes masquerade the features of malignancy. Lactating adenoma are most prevalent breast masses in pregnancy. 8 In the present case the first diagnosis was rendered lactational adenoma due to bland nature of chromatin and uniform nuclear contour. But it turned out to be malignant after six month. Therefore this case highlights three possibilities: Diagnostic dilemma There are case reports of patients who develop invasive ductal carcinoma in previous excision site of a lactating adenoma and lactating adenoma containing an associated infiltrating carcinoma.Geschicker 9 and Lewis reported a lactating adenoma containing an associated infiltrating carcinoma. Hertelet al. 10 reported a case report of lactational adenoma developed into IDC. Awareness of entity is important as pregnancy associated breast cancerbehaves aggressively, present in late stage but can be cured with appropriate treatment at proper time. Early diagnosis can prevent the morbidity and mortality of the patient. FNAC can result in false positive and negative cases due to pregnancy associated hyperplastic changes atypia. So tissue biopsy and close follow up of the patient is required to diagnose pregnancy associated breast lump. High grade of suspicion and complete evaluation including trucut biopsy will lead to correct diagnosis. Conclusion All cases of pregnancy associated breast lumps showing lactational changes & high cellularity on cytosmears should be kept in close follow-up. A tru-cut biopsy should be done to confirm the absence of malignancy. Since tru-cut biopsy is not done routinely in our setup, diagnosis & management of patient was delayed. A multidisciplinary approach and close coordination between medical oncology, surgical oncology and obstetricians is recommended.
2019-03-17T13:10:52.544Z
2017-12-15T00:00:00.000
{ "year": 2017, "sha1": "8427d3e0bac47c694f4908a6341977918e2f18db", "oa_license": "CCBY", "oa_url": "https://www.pacificejournals.com/journal/index.php/awch/article/download/awch1806/pdf_43", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "474d2f4533626e4804ca40f3ae60aa1fe2544072", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236514052
pes2o/s2orc
v3-fos-license
Nonlinear Variable Resistor-Based FCL for Fault Ride-Through Performance Enhancement of DFIG-Based Wind Turbines Fault ride-through (FRT) requirement is a matter of great concern for doubly fed induction generator (DFIG-) based wind turbines (WTs). 'is study presents a nonlinear variable resistor(NVR-) based bridge-type fault current limiter (BFCL) to augment the FRTperformance of DFIG-based WTs. First, the BFCL operation and nonlinear control design consideration of the proposed NVR-based BFCL are presented. 'en, the NVR-BFCL performance is validated through simulation in PSCAD/ EMTDC software. In addition, the NVR-based BFCL performance is compared with the fixed resistor(FR-) based BFCL for a three-phase symmetrical short circuit fault at the grid side. Simulation results reveal that the NVR-based BFCL provides a smooth and effective FRT scheme and outperforms the FR-based BFCL. Introduction Increasing the connection of wind turbines (WTs) impose the adverse effects on the grid reliability and stability, especially under voltage sag conditions. Generally, WTs are classified into fixed speed and variable speed. e variable speed WTs are widely used because of variable speed operation. ey include the doubly fed induction generator (DFIG) and permanent magnet synchronous generator (PMSG) [1]. For all of them, even a shallow voltage dip can cause malfunction and operation failure [2]. To ensure secure operation of grid and tackle these problems, grid operators have created new requirements as grid codes. e fault ride-through (FRT) is the main requirement of them for grid connection of WTs. It is the given major focus due to the increasing grid penetration level of wind power. Based on this, WTs must stay connected to grid under grid voltage sag [3]. Compliance with the FRT requirement of WTs differs according to wind generator technology used in them. Recently, DFIG-based WTs are getting more attention due to having partial capacity power converters, in which their ratings are 25-30% of the total rated generator, capability to control both active and reactive powers, power point tracking at variable speed, and low weight and cost compared to the PMSG [4]. However, it is vulnerable to grid voltage sag because the DFIG stator is directly connected to the grid [5]. In literature, different schemes based on software and hardware solutions [6] have been suggested and documented. e software solutions are based on the improved or changed control system of the DFIG converters with less cost [7,8]. However, they are just suitable for handling low voltage sag conditions and cannot ride through sever voltage sag conditions, just relay on the software schemes, and require the hardware protection scheme under this condition [9]. Different types of hardware schemes such as the multistep series braking resistor (MSSBR) [10], reactive power compensators [11], DC link chopper [12], crowbar system [13], series compensators such as dynamic-voltage restorer (DVR) [14] and unified interphase power-controller (UIPC) [15], superconducting magnet energy storage (SMES) [16], and different types fault current limiters (FCLs) [17][18][19] are presented and reported in literature. Generally, voltage sags in grid are mainly due to occurring short circuit faults in grid [20,21]. erefore, researchers are studying on mitigating the adverse effects of grid faults on WTs by using FCLs to enhance the FRT performance of WTs [22,23]. 
Therefore, the installation of FCLs is receiving more attention as a means to increase the FRT capability of WTs [24,25]. Regarding the use of FCLs to increase the FRT capability of the DFIG, studies focus on bridge-type FCLs (BFCLs) and superconducting-type FCLs (SFCLs) [26]. In [27], a resistive SFCL (RSFCL) is utilized to enhance the FRT capability of the DFIG. In [28], the RSFCL is employed at the terminals of the rotor windings of the DFIG to protect the rotor-side converter (RSC) from overcurrent spikes and the DC link from transient overvoltage under voltage sag conditions. In [29,30], active-type and flux-coupling-type SFCLs are used to connect the DFIG to the grid and to limit transient fluctuations, respectively. In [31], the influence of different locations of the flux-coupling SFCL on the DFIG FRT performance is analyzed. In [32], two voltage booster schemes, a DVR scheme and a resistive-type SFCL scheme, are compared in terms of the FRT capability of DFIGs. Simulation results show that both schemes provide fast voltage sag mitigation, which improves the FRT performance of the DFIG. In [33], a combined protection scheme consisting of an inductive-type SFCL and a modified grid-side converter (GSC) control system is proposed to ride the DFIG through faults. In this approach, the GSC of the DFIG supplies dynamic reactive power to support the grid coupling voltage. Implementing the inductive-type SFCL together with transient voltage control can smooth the recovery of the grid coupling voltage after the fault. The research results confirm that the application of SFCLs in different structures offers a promising interface for enhancing FRT and facilitates the integration of DFIG-based WTs into the grid, but this solution requires high manufacturing cost and a cooling system.

In [34], the BFCL was employed in a wind farm for the first time to enhance FRT performance. In this BFCL, the bypass resistor is located on the DC side of the bridge circuit. In [35], scholars use the BFCL with a bypass resistor on the AC side to enhance the transient stability of the DFIG. From this viewpoint, configurations and control systems different from the one presented in [34] have been developed and reported. In [36], a fuzzy-logic controller is applied to a parallel-resonance BFCL to augment the transient stability of the DFIG. In [37], a capacitor-based BFCL with reactive power compensation capability has been proposed. In [38], a fuzzy logic controller optimized by a genetic algorithm is designed to implement the capacitor-based BFCL for enhancing the transient stability of a DFIG-based wind farm. In [39], a dynamic multicell FCL (MCFCL) is presented to compensate the wind farm coupling voltage under low, medium, and severe voltage sag conditions; it inserts a suitable number of cells in the fault path to provide smooth FRT performance under all voltage sag conditions.

On the other hand, the power system has inherently nonlinear characteristics, and the connection of WTs with their nonlinear power-electronic converters strengthens this characteristic. As a result, nonlinear controllers perform better under grid disturbances [38,40]. It is worth noting that there are few studies on implementing a control system in the BFCL to enhance DFIG performance under fault conditions. Considering this background, this study presents a simple nonlinear variable resistor-based BFCL (NVR-BFCL) to provide smooth FRT under voltage sag conditions.
To realize this objective, a simple nonlinear control based on the active power deviation is designed and implemented in the BFCL circuit to generate a nonlinear variable resistance under fault conditions. The efficiency of the NVR-BFCL is verified through extensive time-domain simulation and a performance comparison with the FR-based BFCL under three-phase symmetrical short circuit faults. The simulation studies have been performed in the PSCAD/EMTDC software environment.

DFIG-Based WT Model

The schematic diagram of a grid-connected DFIG-based wind turbine is illustrated in Figure 1. The induction generator, wind turbine, drive train system, and the control system of the DFIG converters are the main components of the wind generation system, and they are modeled as follows.

Aerodynamic Modeling of WT. To model the wind turbine in PSCAD/EMTDC, the master library of this software offers two models, MODE 2 and MODE 5. In this work, the MODE 2 model is used, in which the mechanical power (P_m) extracted from the wind is given by equation (1) [37]. In (1), ρ is the air density, and R and V_w are the blade radius and wind speed, respectively. C_p in (1) is the power coefficient and is determined as a function of the tip speed ratio (λ) and the pitch angle (β). In the MODE 2 wind turbine used in this work, the drive train system is represented by the common two-mass model; its schematic diagram is shown in Figure 2, and the related equations follow the standard two-mass formulation.

DFIG Model and Control System. The DFIG is composed of a wound-rotor induction generator, an RSC connected to the GSC through a DC link capacitor, and a WT, as shown in Figure 1. The equivalent power circuit of the DFIG in the d-q reference frame is shown in Figure 3. According to this figure, the stator and rotor voltage and flux equations can be written in the standard d-q form; in these equations, L_s = L_ls L_m / (L_ls + L_m), L_r = L_lr L_m / (L_lr + L_m), and i_dqs and i_dqr are the d-q components of the stator and rotor currents, respectively. Considering Figure 3, the GSC dynamic equation is written as [40]

V_dqs = V_dqg + R_g i_dqg + L_g (dλ_dqg / dt) + ω_s L_g i_dqg.

The DC link dynamic equation and the DFIG active and reactive powers, P_S and Q_S, are expressed by the corresponding model equations. The control diagrams of the RSC and GSC are presented in Figure 3. The RSC regulates P_S and Q_S by controlling i_qr and i_dr, respectively. The GSC regulates V_dc and V_PCC at the determined levels by controlling i_qs and i_ds, respectively, where V_dc and V_PCC denote the DC link and PCC voltages.

Nonlinear Variable Resistor (NVR)-BFCL

The single-phase circuit of the NVR-based BFCL is shown in Figure 4. It consists of the following elements: (1) a bridge rectifier comprising diodes D_1-D_4, and (2) an IGBT switch, represented by T. There are two approaches to control the BFCL. In the first approach, when the coupling point voltage falls below the threshold value, the BFCL control system turns off the IGBT switch and inserts the total limiting resistor in the fault path. In the second approach, the control system of the BFCL controls the duty cycle (D) of the IGBT switching to provide a variable resistance that depends on D, defined as D = t_on / T, where T is the period of the PWM carrier wave and t_on is the time that the IGBT switch is ON during each period.
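To make the duty-cycle idea above concrete, the following minimal Python sketch maps an active-power deviation to a duty ratio and then to an effective inserted resistance. The paper's own relations (its equations (10)-(12)) are not reproduced in this text, so the exponential mapping from power deviation to duty ratio, the bypass relation R_V = (1 - D)·R_total, and the parameter names and values R_TOTAL, P_RATED, and T_P below are illustrative assumptions rather than the authors' equations.

```python
# Minimal sketch of the duty-cycle-controlled variable resistance described above.
# All specific formulas and parameter values are illustrative assumptions; the
# paper's exact relations (Eqs. (10)-(12)) are not shown in this extraction.

import math

R_TOTAL = 50.0   # total limiting resistance of the BFCL [ohm] (assumed value)
P_RATED = 1.0    # rated active power [pu] (assumed value)
T_P = 1e-3       # control time constant [s]; the paper reports T_p = 1 ms works best


def filtered_deviation(prev: float, delta_p: float, dt: float, tau: float = T_P) -> float:
    """First-order low-pass filter on the active power deviation ΔP = P_G - P_T."""
    alpha = dt / (tau + dt)
    return prev + alpha * (delta_p - prev)


def duty_from_power_deviation(delta_p: float, p_rated: float = P_RATED) -> float:
    """Illustrative nonlinear mapping from power deviation to duty ratio D in [0, 1].

    Larger deviations push D toward 0, so more resistance is inserted; the exact
    functional form used by the authors is not shown here.
    """
    return math.exp(-max(delta_p, 0.0) / p_rated)


def inserted_resistance(duty: float, r_total: float = R_TOTAL) -> float:
    """Average resistance seen in the fault path.

    Assumes the limiting resistor is bypassed while the IGBT is ON, so the average
    inserted resistance over one PWM period is (1 - D) * R_total.
    """
    return (1.0 - duty) * r_total


# Example: a 0.8 pu active power deviation sampled every 0.1 ms during a fault
dp = filtered_deviation(prev=0.0, delta_p=0.8, dt=1e-4)
d = duty_from_power_deviation(dp)
print(f"D = {d:.2f}, inserted R_V = {inserted_resistance(d):.1f} ohm")
```

With this kind of mapping, a large power deviation drives the duty ratio toward zero and inserts most of the limiting resistance, while a small deviation keeps the resistance mostly bypassed, which is the qualitative behavior the nonlinear control aims for.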
In this approach, the BFCL inserts a fraction of the total limiting resistance in the fault path, providing a variable resistance (R_V) that depends on the duty ratio D and the total limiting resistance. In this study, the BFCL is controlled based on the second approach. Under fault conditions, the BFCL control system detects the fault and applies the nonlinear control to the switching of T to produce a nonlinear variable resistance. To control the switching of T, the active power deviation (ΔP = P_G − P_T) is used to implement the nonlinear control and provide an adaptive, active-power-based NVR. Therefore, the short circuit current of the faulted line is limited, and the PCC voltage is boosted as well. In addition, R_V dissipates the active power generated by the DFIG, which enhances the FRT capability. When the fault is cleared, the PCC voltage returns to its prefault level and the control system turns T on again.

Nonlinear Control of the BFCL Resistor. To provide the NVR-based BFCL, the active power deviation ΔP is used as the input that determines the duty ratio D under fault conditions, as expressed by equation (10). Considering (10), t_on follows from (11), and considering (10) and (11), R_V is expressed by equation (12). It can be seen from (12) that, by implementing the nonlinear switching control strategy, the BFCL provides an NVR as a function of the active power deviation. To select a proper time constant (T_p), an active power deviation index is used. Figure 6 shows this index for different values of T_p; for T_p = 1 ms the system has the best performance.

NVR-BFCL Control System. The control flowchart of the NVR-based BFCL is shown in Figure 7. According to this flowchart, the PCC bus voltage (V_PCC) is used as the control signal to detect short circuits at the grid side. When a short circuit fault occurs on the grid side, V_PCC experiences a voltage sag. Once V_PCC falls below the threshold value V_T, the control circuit detects the fault; V_T is taken as 0.95 pu in this work. After detection of the voltage sag at the PCC bus, the active power deviation ΔP is measured and used as the input that determines the duty ratio D. The resulting duty ratio D is applied to the IGBT switch to provide an NVR during the fault.

Simulation Results

To assess the performance of the NVR-based BFCL, the system shown in Figure 5 is simulated in PSCAD/EMTDC. In this simulation study, a three-phase short circuit fault is applied to the transmission line beyond the FCL location on the grid side. The fault occurs at t = 10 s, and after 0.15 s the circuit breaker isolates the fault. For the FRT analysis, the wind speed is taken as 14 m/s. The parameters of this system are presented in Table 1. The simulated DFIG produces its rated power at this wind speed. To investigate the efficiency of the NVR-based BFCL, three scenarios are simulated: Scenario A, without FCL; Scenario B, with the FR-based BFCL; and Scenario C, with the NVR-based BFCL.

Figure 8 shows the PCC voltage for all scenarios. Under fault conditions, the PCC voltage is reduced to 0.18 pu in scenario A. The NVR-based BFCL provides the smallest voltage sag during and after the fault period compared with scenario B, and the voltage recovery process is considerably smoother and shorter in scenario C. The DC link voltage response of the DFIG for the 3LG fault is shown in Figure 9.
According to this figure, the DC link voltage shows the lowest overvoltage and fluctuation during and after the fault with the proposed NVR-based BFCL. Figure 10 shows the DFIG active power. It drops to zero during the fault in scenario A. In scenarios B and C, the active power of the DFIG is prevented from changing abruptly under fault conditions; in scenario C, the proposed BFCL, by providing an NVR and inserting it in the fault path, minimizes and smooths the active power fluctuation during the fault. Figure 11 shows the DFIG reactive power. In scenario A, the DFIG absorbs −2.5 pu reactive power from the grid to recover the PCC voltage. This absorption is effectively reduced in scenarios B and C, and it is lowest in scenario C, both during and after fault clearance. Figure 12 shows the rotor speed of the DFIG for all scenarios. By using an FCL in scenarios B and C, the rotor speed excursion is limited during the fault, and the NVR-based BFCL provides lower oscillation and faster stabilization. The three-phase DFIG rotor currents for all scenarios are presented in Figure 13. According to this figure, the rotor current experiences two transients, one at each end of the fault period. Without any FCL in series with the WT, it increases to 2 pu at the beginning and end of the fault. By using an FCL in scenarios B and C, the transient overcurrents at both ends of the fault period are restricted, and the NVR-based BFCL provides the smoothest transients.

Conclusions

In this study, a nonlinear control system is designed and implemented in the conventional BFCL to improve the FRT capability of DFIG-based WTs. It provides a nonlinear variable resistance (NVR) whose value varies under fault conditions depending on the active power deviation during the fault. Simulation results show that the NVR-based BFCL is a highly competent scheme compared to the FR-based BFCL and gives better responses in terms of voltage deviation, active power fluctuation, rotor current mitigation, and DC link overvoltage under short circuit faults.

Data Availability

No data are available.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
2021-07-29T20:35:28.353Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "0527bc48faf1062ce170d480c058250c5ced1192", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2021/9934887", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0527bc48faf1062ce170d480c058250c5ced1192", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
252628418
pes2o/s2orc
v3-fos-license
A Critical Analysis of Regional Program Evaluation Practice in Kazakhstan

In recent years the development of public program evaluation has received growing attention in Kazakhstan. An institutional and legal base for program evaluation has been established. However, the examination of the literature has evidenced that there have been only rudimentary attempts to interrogate evaluation practice, particularly at the regional level. It is still not well known how effective or valuable it is. It is imperative to run a diagnostic and assess the evaluation system to answer this question. This article aims to evaluate the quality of regional program evaluation practice in Kazakhstan. It applies a meta-evaluation tool to understand the extent to which such practice complies with three fundamental and recognized evaluation standards: value, validity, and utility. As a sample, the study used evaluation reports conducted by regional Audit Commissions. This research is the first attempt to apply established evaluation standards to the Kazakhstani context. Therefore, it was assumed that some discrepancies with the standards may occur. Having confirmed this hypothesis, the findings indicate that regional program evaluation falls far short of these standards. The paper identifies many conceptual and methodological problems, which seriously compromise the validity and soundness of evaluation practice. It is expected that it will stimulate discussion in academic and subject matter expert circles. Furthermore, having identified key areas for improvement, the study may help reform the evaluation field and contribute to better policy- and decision-making, thus saving taxpayers' money and improving people's wellbeing. In the end, the research puts forward several recommendations for strengthening evaluation practice.

Program evaluation in Kazakhstan has become an essential aspect of public management (Nygmetov, 2014). Early in 2020, the Centre for Analysis and Monitoring was created under the Office of the President of Kazakhstan to assess the Government's programs and reforms. An analogous structure, named the Centre for Evaluation of Public Programs and Reforms, was founded within the ruling political party Amanat (Nur Otan until March 2022) in 2019. The establishment of such institutions, when there are already evaluation bodies operating at the national (Accounts Committee) and regional (Audit Commissions) levels, can be viewed as an indirect indication of insufficient effectiveness in the existing program evaluation practice. Furthermore, both mentioned Centres have tended to focus on state-level interventions, and the same discourse can be observed in expert and academic circles (Baimbetov, 2019). However, public spending on programs at the regional or local level (the terms are used interchangeably) represents a considerable portion of the country's total national budget. Therefore, investigating the practice of evaluating regional programs deserves considerably more attention, not least for financial accountability purposes (OECD, 2021). Some concerns were expressed by the President of Kazakhstan, Kassym-Jomart Tokayev, who said, "The Government develops reforms, implements them, and then evaluates the quality itself. This situation needs to be changed" (Akorda, 2020, p. 1). Furthermore, Mr. Tokayev recently called for developing a methodology for assessing the social and economic effectiveness of state expenditures, thus emphasizing the importance of improving evaluation in the public sector.
It should be acknowledged that evaluation is a relatively new field in Kazakhstan's public policy arena, with significant developments in this domain has taken place during the last decade.As with any professional enterprise, it should be subjected to proper scrutiny to identify if practices suffer from flaws or mistakes.Building on the theoretical and methodological literature, this research will attempt to fill the existing gap in the literature and examine the practice of evaluation of regional programs 1 . Thus, this paper aims to critically analyze and identify areas for improvement in the program evaluation at the local level.The author will utilize a meta-evaluation checklist to address it and assess three fundamental assessment standards: values, validity, and utility. To achieve this purpose, the following research questions have been put forward: 1. How well does regional program evaluation conform with established evaluation standards? 2. How justified and appropriate are the values used in program evaluation? 3. How valid are program evaluation design and conclusions? 4. How useful are program evaluation conclusions and recommendations? Literature review The literature examination illustrates a shortage of systematic and in-depth studies devoted to the program evaluation, especially at the regional level.One of the few attempts to question evaluation practice in Kazakhstan was made by Nygmetov (2014), who argues that, although many efforts have been made to establish an evaluation system, their effectiveness is far from satisfactory since evaluation is perceived as a form of control and monitoring and not aimed at assessing a real impact of a program.Kari (2015) believes that the potential of evaluation remains unrealized and notes that there are inconsistencies between existing methodologies and the overlapping functions of evaluation bodies that prevent evaluation from being conducted systematically.Kari (2015) concurs with Nygmetov (2014) and points out that current evaluation practice emphasizes assessing short-term outcomes, while impact evaluation is not well developed. Studies of specific programs usefully illuminate the limitations of evaluation practice.Pritvorova and Bektleyeva (2017) investigated 'Youth Internship' programs providing new graduates with six-month paid internships in stateowned organizations.They argue that while the program has been evaluated based on the number of participants, its longer-term effects (i.e., the job prospects of participants) were not considered. Similarly, dosekova et al. (2018) show that the evaluation of startup commercialization programs needs to be focused not only on input additionality (i.e., the resources spent by firms in addition to state subsidies) but also on outcome additionality.Thus, it is seen that the existing evaluation practice might not adequately address the complexity and multiple aspects of programs. 
Some discussions have taken place about tailoring evaluation to the context of evaluated programs (Kaldiyarov & Turgambekova, 2019).In healthcare, for instance, Murzaliyeva and Karshalova (2018) argue that medical organizations in Kazakhstan, such as sanatoriums, in-patient facilities, and polyclinics, have different scopes and objectives; therefore, applying 'onesize-fits-all' indicators to assess their programs is not a justifiable way to judge their effectiveness express the importance of reforming evaluation approaches (Rakhmatullayeva et al., 2015).They discuss an alternative method for assessing the social impact of direct foreign investment in Kazakhstan based on the mathematical modeling method.Although they address statelevel interventions, these studies raise important questions about the flexibility and adaptability of evaluation to various settings, including the regional level.To investigate this issue in more depth, it is vital to explore how evaluations are designed and the values that underpin them. In addition to the above, there is also quite substantial literature written by OECD, which constitutes a valuable source of policy advice but says relatively little about evaluation practice.Nevertheless, its recommendations include creating an evaluation research unit within local executive bodies.It also indicates a weak culture of evaluation and continuous improvement in the public sector (OECD, 2021). Audit commissions play a central role in program evaluation at the regional levelstate entities mandated to conduct both audits and assessments.Therefore, it is vital to review the literature devoted to their activities.It has been found that few works deal with the limitations of the audit system regarding the conduct of program evaluation.The literature mainly investigates its potential role.It is agreed that its capacity has been substantially enhanced, but little is known about its effectiveness in practice.In this respect, some researchers (Dosayeva, 2019;Shakirova et al., 2019;Alibekova et al., 2019) state that the audit concept is a fundamentally new area for Kazakh science, and its capacity and weaknesses are yet to be explored.This illustrates the need for empirical and in-depth studies on the program evaluation dimension of the audit system. Overall, the papers discussed provide some insights on evaluation practices in Kazakhstan, but they do not give an in-depth analysis of the problems identified, and more research is necessary.This is true for evaluating national and regional programs -although the former has received more attention in the literature.Nevertheless, examining both levels would not be feasible within a single article; thus, its scope is limited to the local level.Furthermore, since 2015, there have been changes in evaluation methodology and legislature (Adilet, 2020).Consequently, there is a clear need to reexamine the theory and practice of evaluation in light of these developments. Methodology The sample was drawn from publicly available evaluation reports conducted by regional Audit Commissions.It included three forms of evaluations: expert opinions, evaluation reports on budget implementation, and performance audit reports with evaluation sections.In performance audit reports, only the evaluation sections were subjected to the analysis. 
Evaluation reports are published and openly accessible on the official websites of Audit Commissions.Initial searches demonstrated that some websites contained outdated reports, while several of them were not accessible.Therefore, missing reports were requested from relevant Commissions by completing an online form in the electronic government of Kazakhstan portal. Initial data collection resulted in 87 evaluation reports.The reports represented 16 regions and cities of national significance.To identify which evaluation reports were applicable to answer the research question, the author has applied inclusion criteria, similar to Scott-Little et al.'s research (Scott-Little, 2002).Table 1 presents inclusion criteria and their description.Legislature in the field of evaluation has gone through some amendments; therefore, the author considered only recent reports published after 2018 to reflect those changes.Secondly, reports had to be sufficiently detailed to allow for analysis.After applying these criteria, 39 reports were selected, which formed a sample for the meta-evaluation.Evaluation methodologies in Kazakhstan have not changed significantly since 2020.Therefore, the results of the paper may be relevant to this day. Research methods and instrumentation A systematic review of the evaluation reports was conducted to evaluate their quality by determining their adherence to the evaluation standards described below.As an instrument, the study has used the adapted and synchronized version of Scriven's Meta-evaluation checklist (Scriven, 1991) and Davidson's meta-evaluation tool (1995).Scriven's Meta-evaluation checklist (Scriven, 1995) includes five main criteria of quality: validity, utility, propriety, credibility, and cost.Assessing appropriateness, credibility, and cost standards were not feasible since this information was not reflected in evaluation reports. Thus, two standards remained: validity and utility.Validity consists of multiple aspects, of which one is the values or criteria upon which the quality of the program is measured.Fournier noted that "criteria can make or break an evaluation because they...directly affect the validity of claims" (Fournier, 1995, p.19).Considering the significant contribution of values, it has been decided to examine this dimension separately.Consequently, the resulting checklist consisted of three standards: values, validity, and utility.Each of the standards was strengthened by adding relevant points from the Key Evaluation Checklist (Scriven, 1991) and the Program Evaluation Standards (JCSEE, 2011), particularly accuracy, utility, and evaluation accountability. Results Before presenting the findings, it is important to look at how evaluation is interpreted in the context of Kazakhstan.According to the Government Decree on the system of state planning (Adilet, 2020, p. 13), evaluation is "an instrument of determining the extent to which state programs achieve effectiveness and efficiency".Efficiency is understood as the accomplishment of best outputs and outcomes using the approved budget, while effectiveness implies the achievement of performance indicators prescribed by plans, programs, and strategies (Adilet, 2020).This interpretation is clearly distinct from the widely recognized definition of evaluation, i.e., systematic determination of merit and worth of a thing (Scriven, 1991).The implications of this contrast are illustrated throughout the research. Where do values come from? 
The data analysis has shown that evaluations of regional programs have drawn upon a minimal set of values, such as program targets, procedural requirements, and institutional and legislative norms.As evidence of failure to address multiple relevant values in evaluations, attention may be drawn to the business development program in Karaganda region, which was evaluated based upon the achievement of program targets, such as the increased number of recipients of entrepreneurship training and microloans for starting businesses (unpublished analytical papers of state bodies).However, the evaluation did not address the values from the perspective of potential impacts; specifically, it might be useful to look at how the program helped to enhance employment opportunities and overcome social and economic problems in the region. The problem with limiting evaluation scope to pre-determined criteria is that evaluations may overlook numerous symptoms and causes contributing to the achievement of program objectives.As an illustration, the Healthcare Development Program of Karaganda region addressed only five targets (unpublished analytical papers of state bodies).However, the list could be extended to include other relevant objectives.For instance, Aymagambetov and Tyngisheva (2019) claim that the region has serious health issues associated with respiratory and circulatory systems. Table 2 describes the criteria used to evaluate the Healthcare development program in Karaganda region. It indicated an array of causes of cardiovascular diseases.However, the scope of evaluation was limited to assessing public education activities, which can be the solution for only one of the causes -specifically, a deficit of awareness about factors leading to cardiovascular diseases.It is important to note that the existing methodologies do not limit evaluators in selecting criteria.Evaluators can develop additional measures to assess programs using various sources (Adilet, 2020).However, the analysis has demonstrated that the potential of this practice has not been fully realized since evaluations have included in their repertoire only those criteria already prescribed by methodologies (for example, program goals and legislative norms).To summarize, the analysis revealed that there had been no evidence of (i) conducting a needs assessment, i.e., identifying and analyzing the priority needs of program impacts, or (ii) scrutinizing causes of problem areas of programs.This produces risks to the validity and accuracy of evaluation findings. Achievement of program goals Program objectives have acted as a primary criterion for determining the effectiveness of programs.Essentially, evaluations examined whether indicators were achieved and then calculated the percentage of achieving targets, which served as the basis for further conclusions.It is important to note that indicators are not differentiated or ranked.Such an objective-based approach can have serious problems since some objectives may be more significant or relevant than others; giving them equal weight may distort the validity of findings (Davidson, 2005;Stufflebeam & Coryn, 2014). The above can be demonstrated in the following example.The Education Program of Karaganda region (unpublished analytical papers of state bodies) includes two different criteria upon which its effectiveness is measured (Table 3). 
Both targets are relevant, however, the former does not reflect qualitative changes and illustrates only the program's outputs.The first target is more difficult to achieve than the second and more significant since PISA has proved to be an effective and valid knowledge assessment tool internationally (OECD, 2021).Failing to meet the first target and achieve the second does not necessarily mean that the program performed poorly.However, the analysis has illustrated that evaluations did not grade targets depending on their significance, difficulty, or relevance.Another critique concerns the justifiability of criteria.An example could be the criterion of 'life expectancy' found in the evaluation of the healthcare program of West Kazakhstan region (unpublished analytical papers of state bodies).The program is unlikely to have significantly impacted it within a reporting period, as the target is global and influenced by various factors (Ho & Hendi, 2018).It must be assessed comprehensively and from a more long-term perspective.Therefore, applying this criterion to gauge program performance annually is questionable. The research also found that even when objectives are in place, performing evaluation has not always been possible.Evaluations have primarily relied upon official statistical data to assess the achievement of goals and make claims about a program's effectiveness.When statistical data was unavailable, programs were not subjected to further investigation.This has been the case in many evaluations, which had the caveat that assessment of certain aspects of a program was not feasible due to the absence of official data (unpublished analytical papers of state bodies).Similarly, some programs' lack of measurable indicators has prevented evaluators from assessing them (unpublished analytical documents of state bodies).For instance, the program for controlling stray dogs and preventing zoonotic diseases in the Terekti district in West Kazakhstan region lacked any indicators and, therefore, no evaluative activities were undertaken. To conclude, there was no evidence that various programs' goals were checked for relevance and significance.Further, dependence on program goals has seriously impaired the flexibility of evaluators. Legislative norms and standards The analysis of the data has demonstrated that an inordinate emphasis has been placed on the assessment of the adherence to legislative guidelines.This indicates the dilution of evaluation practice with elements of a compliance audit.For example, the expert report on evaluating the microloan and entrepreneurship development program in Mangistau region (unpublished analytical papers of state bodies) was predominantly assessed for conformity of program outputs with program specifications and lending regulations.Evaluators have examined the legality of granting microloans within the program by checking the eligibility of program participants.They then looked at whether gran-tees complied with program conditions in creating new jobs by utilizing the funds received for the intended purpose.However, no inferences have been made regarding the impact and value of the program for the sphere of entrepreneurship and business climate in the region in general.The same trend can be seen in many evaluations, which, apart from assessing program objectives, verified the compliance of programs with provisions of the Budget Code and procedural norms for program planning and implementation (unpublished analytical papers of state bodies). 
It is argued that this approach may only help to determine a program's merit or intrinsic value.To illustrate this, attention may be drawn to the evaluation of the innovation development program in Karaganda region (unpublished analytical papers of state bodies).It described some activities of the program, such as introducing an electronic ticket system on public transport and installing air pollution control sensors.The program may have conformed with its technical specifications and served its intended purpose; however, the evaluation did not investigate how the program activities had contributed to meeting the needs of the consumer population.The point is that even if legal requirements, technical specifications, or accepted standards of quality are followed, a program nonetheless 'might not be worthy' (Stufflebeam & Coryn, 2014, p. 9). Evaluation logic The author investigated the basic logic underpinning evaluative judgments in assessing the validity of t h e evaluation.To do that, the author relied on the principles of the general logic of the assessment (Stufflebeam & Coryn, 2014).The data analysis evidenced the presence of the first principle of the evaluation logic, i.e., the determination of criteria, although the criteria selection approach has had serious limitations, as shown earlier.The author has identified some problematic issues regarding the second principle, application of standards of quality.Evaluative conclusions have been limited to stating the fact of the achievement or non-achievement of program goals; or labeling programs as effective/noneffective or efficient/non-efficient, mainly based on the assessment of targets.Evidence suggests that no attempt has been made to set gradation or ranking to judge the performance of programs. Utilizing a single cut-off level of performance (for instance, effective/noneffective), can hardly be described as good practice (Scriven, 1995;Davidson, 2005).The approach taken in the program evaluations does not provide a complete picture of the performance of programs and does not allow for explicitly evaluative conclusions. Another problematic point is the difficulty of determining and justifying the cut-off level.For instance, as shown in Table 4, some district programs in Zhambyl region have been evaluated as efficient mainly owing to the highest percentage of achieved indicators, while others fell short of their targets and have been found inefficient (unpublished analytical papers of state bodies).In this regard, a reasonable question may arise as to whether the programs that achieved less than 100% of their indicators performed badly or why programs with over 90% of their targets met cannot be considered efficient.The lack of explicit reasoning and justification of t h e cut-off score seriously weakens the validity and credibility of conclusions. 
As for the fourth element of evaluative reasoning -synthesizing performance results to make an overall judgment -the analysis has shown that the program evaluations have simply reported findings on all evaluative components, including the assessment of goal achievement and implementation of legislative standards.For instance, the evaluation of the regional program of Zharminsk district (unpublished analytical papers of state bodies) concludes that there had been ineffective use of budget funds, ineffective planning, non-achievement of some indicators, non-compliance with standards of developing programs, but no attempt was made to weigh and synthesize evaluation findings.Given that some aspects of performance may be of less significance, it is essential to synthesize findings to draw overall evaluative claims (Davidson, 2005).Thus, the overall reasoning pattern observed across program evaluations has entailed only one element of the logic of evaluation (Fournier, 1995), i.e., the determination of criteria, while the remaining three principles (developing standards, measuring performance, synthesizing data to make evaluative judgments) are not reflected in the evaluations.This is quite a disturbing message since basing decisions on dichotomy of effective/non-effective or efficient/non-efficient, and failing to frame conclusions in 'the vocabulary of grading, ranking or scoring' (Martens, 2018, p. 27) and adequately synthesizing them, provides only a crude understanding of a program's value (Scriven, 1995). It should also be noted that evaluation methodologies and standards employed in Kazakhstan do not address the principles of the general evaluation logic either -apart from identifying criteria.They are more concerned with describing technical and administrative aspects of performing evaluation (for instance, procedures for communicating between evaluators and other state bodies, requirements for documenting reports, and calculation of indicators) or outlining general principles of conducting evaluation (principles of confidentiality, independence, and others), rather than offering specific guidance or strategies for analyzing programs and making evaluative judgments (Adilet, 2020). Reliability Ensuring reliability is "a cornerstone for validity" of an evaluation (JCSEE, 2011, p.179).Reliability addresses the consistency and stability of findings and can be achieved through triangulation of data sources and research methods (Golafshani, 2003). It has already been noted that program evaluations have relied predominantly on statistical records to assess programs and that, in the absence of relevant statistical data, certain aspects of programs were left uninspected.This contrasts sharply with established good practice of evaluation, which calls for the use of various sources, including direct observation and theoretical logical, analogical, or judgmental sources (Scriven, 1991).The fact that information is dependent upon only one type of data does not allow for a comprehensive understanding of the performance of programs and lessens the validity of interpretations (JCSEE, 2011). 
In all fairness, it is worth mentioning evaluations that have relied on surveys when assessing the quality of programs. Three of them used surveys among the youth population to determine the level of satisfaction with state measures to support youth (unpublished analytical papers of state bodies). However, the lack of information about the design, procedures, participants, and funding of the surveys does not support judgments about the reliability of the evaluation conclusions. The third evaluation was aimed at determining the impact of health promotion activities in Nur-Sultan city (unpublished analytical papers of state bodies). It was evaluated on the increase in the number of people practicing a healthy lifestyle. It provided a good description of the processes, design, and tools of the survey conducted within the evaluation. Although the survey simply demonstrated the percentages of different age groups who practice some form of healthy living, without building any causal links, it formed the basis of the report conclusions. Given that the effects of such a campaign may take time to appear, other methods, such as interrupted time series, could be used to collect data and make more defensible evaluation findings.

Thus, the analysis has found that there are serious issues which reduce the reliability of evaluation results, since these have predominantly relied upon a single source of data; furthermore, apart from a few cases, they have not utilized any research methods.

Causation

One of the significant components of validity in evaluation is building causal inferences (Davidson, 2005). The causation issue essentially means determining whether a program has been at least a significant cause of effects or outcomes. It has been found that, although the evaluation standards used in Kazakhstan highlight the need to identify factors and reasons that have affected the realization of a program (Adilet, 2020), establishing causal links has not been a common practice. Numerous evaluations evidence this. The evaluation of the Healthcare program of Karaganda region (unpublished analytical papers of state bodies) concluded that, as a result of the program, the mortality rate from tuberculosis was reduced by 34.8% and from malignant neoplasms by 7.8%, while the infectious disease incidence rate was maintained. However, the lack of explanation of how, specifically, the program caused those changes significantly diminishes the validity of such claims.

Only a few evaluations have gone further and attempted to draw links between the program and observed changes. For instance, the evaluation of the Healthy lifestyle promotion program of Nur-Sultan city (unpublished analytical papers of state bodies) calculated a correlation between the volume of program activities and the incidence of circulatory system disease. The evaluators acknowledged the limitations of this technique, as those diseases might be caused by multifaceted factors, and justified this decision by the absence of official statistical data on factors contributing to those incidences. However, the evaluation could have produced more defensible conclusions by applying alternative strategies, such as asking impactees and observers about the impact of a program, examining whether the timing of program effects makes sense, and several others (Davidson, 2005).
The study also identified some evaluations which appear to have shown a clear impact of a program.However, a careful examination shows that inferring causation is still required.It is best illustrated in the evaluation of the Entrepreneurship program of Karaganda region (unpublished analytical papers of state bodies), which assessed a project for organizing a sixmonth paid internship for new graduates so they could gain initial professional experience and obtain full-time jobs.It has been shown that 867 participants out of 1,047 (83% out of 100%) got hired after completing an internship.However, without developing causal links, it is difficult to credibly argue that it was the program that helped the interns to succeed in getting a job because, potentially, the participants might have done due to their personal skills, background education, and other factors outside the program.To determine this, evaluators could, for example, have interviewed the program participants and asked them explicitly whether the program was the leading cause of their successful employment and in what way it helped them to achieve that if the answer was affirmative. When addressing the causation issue, it is also essential to look at 'rival explanations' (Davidson, 2005, p. 70), i.e., alternative causes.Sources of such explanations may be found in the context of a program.Contextual factors or other parallel programs may either diminish or enhance the program's effects (JCSEE, 2011).The author has found that this principle has not been practiced in regional program evaluations.Overall, the fact that investigating causal relationships is largely unpracticed when evaluating regional programs is probably one of the most significant limitations of the evaluation practice. Efficiency and cost-effectiveness While costs in a broader sense are not limited to monetary costs and, for instance, including human resources, time, a n d training (JCSEE, 2011), the Kazakhstani regional evaluations have only considered financial costs.Therefore, the author could examine this dimension only. It has been observed that the determination of efficiency has been based on the examination of program targets and public funds allocated to it.Staying within the approved budget and meeting the targets have been adjudged indicative of an efficient program, whereas failing to stay within the approved budget and/or meet targets with the same resources demonstrates an inefficient program. Another consideration when evaluating the efficiency of programs was if program costs were used for the intended purposes and were consistent with principles of the Budget Code and the law, in general.This is another indication of a blurring of lines between evaluation and audit since assessing the compliance of execution of programs with the legislature has traditionally been the prerogative of the audit function. It is argued that this approach does not wholly correspond with established good practice.Firstly, the judgments about the efficiency of programs are limited in scope and clearly have not been made in a truly evaluative way, as it remains unclear whether the costs were inexpensive, reasonable, or high.Simply noting that costs were efficient/inefficient, based on achievement/ non-achievement of goals and proportion of utilized funds is not enough because such claims are insufficiently robust and can be easily subjected to criticism (Davidson, 2005). 
To illustrate this, one may pay attention to the evaluation of the regional development program of Saran city in Karaganda region (unpublished analytical papers of state bodies), which was evaluated as 'efficient' based on the fact that 99.8% of indicators were achieved, while the costs stayed within the budget and amounted for 99.8% of the total budget.However, as discussed in the previous section, the evaluations have not dealt realistically with the causation issue, which does not give grounds for asserting with certainty that the program was the primary cause of changes.It might be the case that some criteria of the program (unpublished analytical papers of state bodies), such as a decline in infant mortality, were achieved mainly by the influence of factors outside its scope.Therefore, the claims about the efficiency of program costs, inferred from the fact that indicators were met, are not sufficiently strong and robust.The validity of claims could be significantly substantiated if evaluations included cost-effectiveness or cost-benefit analysis. Utility The analysis of evaluations, particularly the Conclusions and Recommendations sections, has revealed several significant findings.First, it was found that recommendations are primarily concerned with the technical and legal aspects of programs.For instance, some evaluations have documented violations of financial planning and budgeting procedures, indicated the need to hold accountable those responsible for mistakes made and provided legal training for servants to strengthen financial discipline (unpublished analytical papers of state bodies).Other recommendations emphasize that programs must be (i) brought into line with program development standards by developing measurable indicators and clarifying objectives; and (ii) better aligned with strategic and state-level programs.These are meaningful suggestions since they may help to prevent violations that would disrupt the implementation of programs.However, such recommendations have little to do with the substantive content of programs.For example, evaluations have not adequately addressed how the functioning of programs could be improved or what aspects could be modified to achieve better outcomes. Another set of recommendations refers to the issue of enhancing the overall effectiveness of programs.Nevertheless, most of them are confined to very general statements.For example, program implementers have been encouraged to strengthen coordination between public organizations, enhance monitoring and control over realization of program activities (unpublished analytical papers of state bodies), take measures to improve the efficiency and productivity of programs, etc.It is important to note that these suggestions are already found in the legislature (Adilet, 2020); therefore, program stakeholders would benefit much more from specific, tailored, and actionable recommendations. 
Clearly, it is not feasible to assess the implications of the evaluations due to the unavailability of data related to the ultimate use of evaluation findings.However, based on the analysis, the author assumes that the functionality and usability of evaluations in terms of the facilitation of decisions-making is limited, as what has been recommended dealt with technical dimensions of programs or has been too vaguely stated to make adequate use of.Despite this, it is worth mentioning some good practices which may contribute to the application of evaluations.Audit commissions have adopted continuous monitoring of the implementation of recommendations.This procedure is described in detail and enshrined by the rules for conducting audits (Adilet, 2020).Although it is difficult to judge how the cooperation between evaluators and other stakeholders takes place within this practice since the issue has not been reflected in evaluations, the very existence of such practice shows that evaluation does not end with handing over a report and that evaluators are, in fact, willing to provide post-report help (Scriven, 1995).This is vital, as evaluations may require additional explanation and program stakeholders may have questions or encounter difficulties in the utilization of evaluation findings. Based on the analysis of the utility component, the author assumes that the usability of evaluation findings in terms of helping in decision-making would be very limited, since there have not been clear and specific conclusions regarding the modification, termination, or enlargement of a program.It is argued that evaluation users could benefit from evaluation results mainly for improving the technical aspects of their programs and bringing them into compliance with legislative norms. Conclusion and discussion How well does regional program evaluation conform with established evaluation standards? The evidence showed that regional program evaluation practice in Kazakhstan has failed to meet all standards applied in this study.Serious discrepancies have been observed both at conceptual and methodological level. It is understood that Kazakhstan, unlike more developed countries such as the United Kingdom or t h e United States, does not have a long tradition of program evaluation and that major developments in this sphere took place only in recent years.The author also realizes that evaluators generally act within certain legal constraints and in their work rely on methodologies, which might not be perfect.However, the study found that the scale of the problem is so massive, that it raises questions about a fundamental overhaul of the evaluation practice.To suggest otherwise would be to run the risk of doing a disservice to the Kazakhstani public and contributing to poor decision-making, which may involve considerable sums of taxpayers' money. To arrive at the conclusions, the author scrutinized regional evaluation practice through the prism of three pillars of good evaluation: values, validity, and utility.The following sections discuss the answers to the research questions. How justified and appropriate are values used in program evaluation? 
The results of the study show that the evaluation reports addressed the values standard very weakly.Firstly, the study found no evidence of attempts to identify and consider all relevant values needed to assess a program.The programs have been assessed from the point of view of their (i) correspondence to indicators set out in the programs; and (ii) compliance with legislative norms (legality of decisions, fulfillment of technical specifications of programs, and legality of financial costs of programs).The evaluations have not practically considered t h e values of program recipients and impacts.Another important point left unaddressed was the identification of underlying causes of t h e performance of programs. Secondly, the research has shown that program targets have been treated equally without being subjected to scrutiny to determine their relevance and significance; despite the fact that program goals might carry different weights. Finally, and most importantly, the evaluations have tended to see program targets and legislative norms as intrinsically correct and the sole method of judging outcomes of a program.Furthermore, the evaluations rest largely on the assumption that if targets are achieved and legislative norms are met, it will, inevitably, lead to attainment of program aims and expected results. How valid are program evaluation design and conclusions? The research has found several serious issues in this respect which permit the conclusion that the evaluation reports perform very poorly on the validity standard.To demonstrate this, it is worth emphasizing the main findings. Evaluation logic The analysis has illustrated that only one of the key principles of evaluative logic has been addressed by the evaluation reports, specifically the identification of criteria.Evidence shows that evaluators have not attempted to set up standards of performance on those criteria in order to state what is weak, good, or excellent performance.Furthermore, the reports do not make clear the evaluative reasoning employed when making claims about a program's effectiveness or ineffectiveness.Finally, the study found that evaluation findings were reported without being weighted and synthesized.The lack of key elements of the evaluation logic gives the grounds to claim that the evaluation reports are not capable of producing explicitly evaluative conclusions. Reliability The reliability of the evaluation reports is questionable since they mainly use a limited set of data (official statistical data) to assess programs.This can be explained by the fact that the evaluations were primarily oriented at assessing the achievement of program targets; and the information needed to check that is obtained, as a rule, from official statistics.For fairness, it is worth noting that there has been some use of surveys, but this is the exception rather than the rule. Causation Evidence suggests that the practice of establishing causal links in evaluations has been virtually non-existent.This is definitely a serious limitation and evaluation conclusions can hardly be considered valid without addressing the causation issue.This can be illustrated by numerous examples of evaluations attributing changes to the performance of a program without showing logical links between them. 
Cost-effectiveness The concepts of efficiency and costeffectiveness in the evaluations have been confined to checking if goals were met within a defined budget.This clearly cannot be considered good practice.Firstly, it has already been shown that the program goals might not be valid or justified.Secondly, the assessment of cost-effectiveness cannot be complete without considering alternative ways of spending funds that could produce similar outcomes.The study found no evidence of any tools being employed to achieve this end, such as cost-benefit analysis.Therefore, it is argued that the evaluation reports have performed very poorly on this checkpoint as well. How useful are program evaluation conclusions and recommendations? It has been found that the majority of evaluation conclusions and recommendations have dealt with redressing violations of legislative norms, taking measures to prevent them in the future, or improving certain technical aspects of programs and others.Such recommendations can be useful only for making sure that the implementation of programs adhere to norms.However, it is unlikely that evaluation users would be able to make any use of them for improving or changing the content of programs.Meanwhile, there are recommendations related to strengthening the effectiveness of programs, but they are non-specific and therefore not functional.Nevertheless, the author has indicated some good practices; specifically, the practice of reviewing the implementation of evaluation results, which could be an example of post-report interaction and potentially may help evaluation users to apply them. Practical implications Based on the research, some policy recommendations can be put forward.Firstly, the understanding of evaluation needs to be conceptually reviewed.We have seen throughout the research that over-emphasizing program goals and treating them as a priori true has led to 'tunnel vision' (Youker & Ingraham, 2014).Consequently, the evaluators have failed to see other values involved and other intended or unintended effects of the programs. Secondly, a clear distinction between evaluation and audit should be made.It was found that the evaluation practice has inherited many features of the audit function.A significant aspect of the evaluations has been concerned with checking the conformity of programs to legislative norms and identifying any violations.These are important but the evaluation is much more than that (Chelimsky, 1985). Thirdly, in the light of the research findings, it seems vital to develop single comprehensive guidance on program evaluation, which would address all aspects of evaluation.Today, evaluators are guided by a plethora of methodological documents.This clearly does not contribute to performing an evaluation in a systematic and focused way.More importantly, the evaluation legislation lacks specific techniques and strategies for design and implementation.In this regard, it might be particularly useful to refer to certain specific public program evaluation methodologies.For example, the United Kingdom's HM Treasury's Magenta Book (Open Government, 2020) provides a good example of systematic evaluation guidance for public programs. 
Finally, to ensure high-quality and sound evaluations, the audit authorities of Kazakhstan should consider developing and adopting evaluation standards. At present, evaluation practice in Kazakhstan lacks professional and sound evaluation standards. The standards for evaluation and audit used today can hardly be described as such in the classical sense, since they either describe (i) how administrative procedures should be performed (for instance, how reports should be drawn up and submitted); or (ii) principles of conduct, such as independence, confidentiality, transparency, credibility, and objectivity, rather than providing criteria of quality and guidance on how to achieve them. In this context, a good starting point would be to review the Program Evaluation Standards and the Key Evaluation Checklist and explore the possibility of their adoption.
Table 2 - Criteria used in the evaluation of the Healthcare development program of Karaganda region (Note: compiled by the author based on unpublished analytical papers of state bodies)
Table 3 - Criteria of the Education program in Karaganda region
Table 4 - Evaluation of the efficiency of district programs in Zhambyl region
2022-10-01T15:21:05.459Z
2022-09-28T00:00:00.000
{ "year": 2022, "sha1": "271bddc449f3434daa8386850c498a7126bb71b1", "oa_license": "CCBYNC", "oa_url": "https://esp.ieconom.kz/jour/article/download/801/366", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3d325bb83ba7b7d1111531299110c00dca4f3daf", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [] }
119357211
pes2o/s2orc
v3-fos-license
Subleading shape function contributions to the hadronic invariant mass spectrum in B ->X_u l \nu decay We study the O(Lambda/mb) corrections to the singly and doubly differential hadronic invariant mass spectra d\Gamma/dsH and d\Gamma/dsH dq^2 in b ->u decays, and discuss the implications for the extraction of the CKM matrix element |Vub|. Using simple models for the subleading shape functions, the effects of subleading operators are estimated to be at thefew percent level for experimentally relevant cuts. The subleading corrections proportional to the leading shape function are larger, but largely cancel in the relation between the hadronic invariant mass spectrum and the photon spectrum in B ->X_s \gamma. We also discuss the applicability of the usual prescription of convoluting the partonic level rate with the leading light-cone wavefunction of the b quark to subleading order. I. INTRODUCTION The CKM parameter |V ub | is of phenomenological interest both because it is a basic parameter of the Standard Model and because of the role it plays in precision studies of CP violation in the B meson system. Currently, the theoretically cleanest determinations of |V ub | come from inclusive semileptonic decays, which are not sensitive to the details of hadronization. For sufficiently inclusive observables, inclusive decay rates may be written as an expansion in local operators [1]. The leading order result corresponds to the decay of a free b quark to quarks and gluons, while the subleading corrections, proportional to powers of Λ QCD /m b , describe the deviations from the parton model. Up to O(Λ 2 QCD /m 2 b ), only two operators arise, The B − B * mass splitting determines λ 2 (m b ) ≃ 0.12 GeV 2 , while a recent fit to moments of the charged lepton spectrum in semileptonic b → c decay obtained [2] m 1S b = 4.82 ± 0.07 E ± 0.11 T GeV, λ 1 = −0.25 ± 0.02 ST ± 0.05 SY ± 0.14 T GeV 2 (2) where m 1S b is the short-distance "1S mass" of the b quark [3,4]. (Moments of other spectra give similar results [5,6].) These uncertainties correspond to an uncertainty of ∼ 5% in the relation between |V ub | and the inclusiveB → X u ℓν ℓ width [3,7]. Unfortunately, the semileptonic b → u decay rate is difficult to measure experimentally, because of the large background from charmed final states. As a result, there has been much theoretical and experimental interest in the decay rate in restricted regions of phase space where the charm background is absent. Of particular interest have been the large lepton energy region, E ℓ > (m 2 B −m 2 D )/2m B , the low hadronic invariant mass region, m X ≡ √ s H < m D [8], the large lepton invariant mass region q 2 > (m B − m D ) 2 [9], and combinations of these [10]. The charged lepton cut is the easiest to implement experimentally, while the hadronic mass cut has the advantage that it contains roughly 80% of the semileptonic rate [11]. However, in both cases the kinematic cuts constrain the final hadronic state to consist of energetic, low-invariant mass hadrons, and the local OPE breaks down (this is not the case for the large q 2 region or for appropriately chosen mixed cuts). In this case, the relevant spectrum is determined at leading order in Λ QCD /m b by the light-cone distribution function of the b quark in the meson [12,13], where n µ is a light-like vector, and hatted variables are normalized to m b :D µ ≡ D µ /m b . 1 f (ω) is often referred to as the shape function, and corresponds to resumming an infinite series of local operators in the usual OPE. 
The physical spectra are determined by convoluting the shape function with the appropriate kinematic functions: where Since f (ω) also determines the shape of the photon spectrum inB → X s γ at leading there has been much interest in extracting f (ω) from radiative B decay and applying it to semileptonic decay. However, the relations (4)(5)(6) hold only at tree level and at leading order in Λ QCD /m b , so a precision determination of |V ub | requires an understanding of the size of the corrections. Radiative corrections were considered in [12,13,14,15], corrections have been studied more recently in [16,17,18,19]. In [16], the nonlocal distribution functions arising at subleading order were enumerated, and their contribution tō B → X s γ decay was studied. In [17], the corresponding corrections to the lepton endpoint spectrum inB → X u ℓν ℓ decay were studied, and it was shown that these effects were potentially large. Similar results were obtained in [19], where the sub-subleading contribution from annihilation graphs was also shown to be large. In this paper, we study the subleading corrections to the hadronic invariant mass spectrum in semileptonic b → u decay, and estimate the theoretical uncertainties introduced by these terms. In addition, we present results for the doubly differential spectrum dΓ/ds H dq 2 at leading and subleading order. II. MATCHING CALCULATION A. The full theory spectrum In the shape function region the final hadronic state has large energy but small invariant mass, and so its momentum lies close to the light-cone. It is therefore convenient to introduce two light-like vectors n µ andn µ related to the velocity of the heavy meson v µ by v µ = 1 2 (n µ +n µ ), and satisfying In the frame in which the B meson is at rest, these vectors are given by n µ = (1, 0, 0, 1), n µ = (1, 0, 0, −1) and v µ = (1, 0, 0, 0). The projection of an arbitrary four-vector a α onto the directions which are perpendicular to the light-cone is given by a α ⊥ = g αβ ⊥ a β , where Choosing our axes such that the momentum transfer to the leptons q is in the − n direction, we can write q µ = 1 2 n · qn µ + 1 2n · q n µ , the decay rate takes a particularly simple form in terms of the variables n · q andn · q: dΓ(B → X u ℓν ℓ ) = 96π Γ 0 W µν L µν (n ·q −n ·q) 2 θ(n ·q)θ(n ·q −n ·q)dn ·q dn ·q (9) where The hadron tensor W µν is defined by where the weak current is J µ L =ūγ µ (1 − γ 5 )b, while the lepton tensor is and P L ≡ 1 2 (1 − γ 5 ). To calculate the hadronic invariant mass spectrum we switch to the variables (s H , q 2 ). These are related to the variables in Eq. (9) by and Here ∆ = m B − m b is the difference between the B meson mass and the b quark mass. It is O(Λ QCD ) and has an expansion in terms of HQET parameters Since ∆ simply enters in the definition of s H , it is unrelated to the 1/m expansion in the OPE, so we will not expand it via Eq. (16). With this change of variables, we define the In Ref. [16] a nonlocal expansion was performed for the hadron tensor W µν , based on the power counting where the heavy quark momentum is defined as p µ b = m b v µ + k µ . However, the limits of phase space integration in Eq. (9) include regions of phase space where this power counting is violated. Hence, to keep our power counting consistent, we do not perform a nonlocal OPE for W µν , but rather for T (s H , q 2 ). 
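For concreteness, the change of variables from (n·q̂, n̄·q̂) to (ŝ_H, q̂²) can be written out explicitly. The following is a hedged reconstruction from the definitions above (taking q_⊥ = 0 and hatted variables normalized to m_b); it is a sketch rather than the original equations, but it reproduces the combination that appears in the phase-space factor used in the matching calculation below.

```latex
% Hedged reconstruction: change of variables, assuming q_\perp = 0,
% \hat q = q/m_b, \hat s_H = s_H/m_b^2, \hat\Delta = \Delta/m_b = (m_B-m_b)/m_b.
\begin{align}
\hat q^2 &= n\cdot\hat q \;\bar n\cdot\hat q, \\
\hat s_H &= \frac{(m_B v - q)^2}{m_b^2}
          = (1+\hat\Delta)^2 - (1+\hat\Delta)\,(n\cdot\hat q + \bar n\cdot\hat q) + \hat q^2 ,
\end{align}
% so that the lepton/phase-space combination becomes
\begin{equation}
(n\cdot\hat q - \bar n\cdot\hat q)^2
 = \frac{\bigl[(1+\hat\Delta)^2 + \hat q^2 - \hat s_H\bigr]^2
        - 4(1+\hat\Delta)^2\,\hat q^2}{(1+\hat\Delta)^2}.
\end{equation}
```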
In these variables, the shape function region corresponds to the region of low invariant mass, Since ∆ ∼ Λ QCD and k µ ∼ Λ QCD , expanding the light quark propagator in powers of Λ QCD /m b gives at leading order (where∆ ≡Λ/m b ). Since both terms in the denominator are O(Λ QCD /m b ), T (s H , q 2 ) cannot be expanded in powers of k µ and matched onto local operators (unless we also are restricted to large q 2 , such that 1 −q 2 ≪ 1, in which case the second term in the denominator is subleading, and a local OPE may be performed [9,10]). Instead, the OPE takes the schematic form where the O n (ω)'s are bilocal operators in which the two points are separated along the light cone. B. Nonlocal operators In Refs. [16,17], it was shown that up to subleading order in Λ QCD /m b , the following operators were required in the OPE (21): where the h v 's are heavy quark fields in HQET, and we have defined and These definitions differ slightly from the definitions in Refs. [16,17], because we have chosen to normalize all momenta to m b , to keep the resulting formulas simpler. It is convenient to calculate the matching conditions onto a slightly different set of operators, defined in terms of full QCD b quark fields: We have defined so that iD µ acting on the b fields just bring down factors of the residual momentum k µ . The Feynman rules for the O i 's and P i 's are given in [16,17]. The rules for the Q i 's are given in n · A = 0 gauge in Fig. 1, where we have defined It is simpler to match onto the Q i 's initially since this matching does not require us to relate the QCD quark fields to HQET quark fields. However, because the additional symmetries of HQET reduce the number of independent functions needed to parametrize the matrix elements, it is convenient to then express the Q i 's in terms of the O i 's and P i 's. For an arbitrary Dirac structure Γ we havē and we have used the fact that For our purposes, we will only need the case Γ = γ σ P L , which allows us to write where the first line gives the leading order relation and subsequent lines contain the subleading correction. Similar relations may be derived for the subleading operators, though in these cases it is not necessary to consider the subleading terms in the relation between the QCD operator and the HQET operator, such terms being of higher order overall. Thus we have The leading and subleading operators can then be completely parametrized in terms of five functions [16]: (once again, unlike in [16], these are defined here in terms of dimensionless arguments). The matrix elements of the other operators vanish. C. Matching Conditions The Wilson coefficients C i (ω) of the operators in (21) are obtained by taking partonic matrix elements of both sides of the OPE. In particular we take zero-, one-, and twogluon matrix elements, which corresponds to calculating the imaginary parts of the fulltheory forward-scattering diagrams in Figure 2, multiplying by the lepton tensor L µν and appropriate phase space factors and matching them onto linear combinations of the effective diagrams. (The matching conditions may be completely determined from just the zero-gluon and one-gluon matrix elements, but we have calculated the rest as a check of the results.) 
The lepton tensor has the expansion (where we have used the decompositionq µ = n ·qn µ /2 +n ·qn µ /2), while the phase space factors give (n ·q −n ·q) 2 ((1 +∆) 2 +q 2 −ŝ H ) 2 − 4(1 +∆) 2q2 θ(n ·q −n ·q)θ(n ·q) The zero-gluon diagram in Figure 2(a) gives the amplitude Taking the imaginary part of this amplitude gives where we have expanded the amplitude to subleading order using (19) and we have simplified the expression by integrating by parts. The function h(x) appearing in (37) is Multiplying this result by the lepton tensor (34) and phase space factors (35), and expanding to subleading order we find b|T (ŝ H ,q 2 )|b = n dωC σ n (ω,ŝ H ,q 2 ) b|Q n (ω, γ σ P L )|b In order to determine the other matching coefficients, we calculate the one-gluon amplitude in Figure 2(b). Defining ℓ to be the incoming gluon momentum, we have where (α, a) are, respectively, the Lorentz and colour indices of the gluon field. Taking into account the two cuts which result from taking Im[A 1 ] and scaling the gluon momentum as ℓ α ∼ O(Λ QCD ), we obtain, after expanding to leading order in n·A = 0 gauge, where, in analogy with (27), we have defined δ ± (x) = δ(h(n ·k + x)) ± δ(h(n ·k)). The final matrix element to evaluate is the two-gluon amplitude, Fig. 2(c). The amplitude is so that after cutting the diagrams and expanding to leading order, again in n · A = 0 gauge, we obtain n · (l 1 +l 2 ) + · · · (47) The two gluon matrix element of T (ŝ H ,q 2 ) agrees with the results of (40) and (45) for C 3 and C 4 ; hence, no new operators are required, as expected. Integrating these expressions over q 2 we obtain the OPE for dΓ/ds H where Finally, relating the Q i 's to the O i 's and P i 's via (31) and (32) and taking the matrix elements (33), we obtain the expression for the hadronic invariant mass spectrum: Eq. (50) is the principal result of this paper. It may be checked for consistency with the result obtained via the local OPE by expanding the matrix elements of the operators (22) such that in · D ∼ O(Λ QCD ). This gives [16] f where each term in the expansion is of the same order in the shape function region, but the terms indicated by ellipses are higher order in the local OPE. The λ 1,2 parameters are defined in (1) and the ρ 1,2 parameters are defined by When substituted into the spectrum (50) and integrated over ω we obtain to subleading order 1 Γ 0 where the terms in curly brackets are the leading order result, and the other terms are the subleading order correction. The local OPE spectrum can be obtained from the double-differential spectrum dΓ/ds 0 dE 0 presented in [5] and [20]. After changing variables to (s H , E 0 ) and expanding in powers of Λ QCD /m b (treating s H as order Λ QCD m b ), performing the E 0 integral we obtain the local OPE for dΓ/dŝ H , which exactly reproduces the result (53). III. RELATION TO PREVIOUS WORK At leading order in 1/m b , the effects of the distribution function f (ω) may be simply included by replacing m b in the tree-level partonic rate and then convoluting the differential rate dΓ with the distribution function f (ω) [12], Because of the leading factor of m 5 b in the rate (10), this prescription leads to large subleading corrections if the factor of m 5 b is included in the replacement (54). In Ref. [11] this prescription was applied to the s H spectrum, although the m 5 b term was not included in the replacement. This is perfectly consistent at leading order, but since other subleading effects were introduced in Ref. 
[11] by the replacement (54), it is instructive to compare our result (50) with the results of Ref. [11], expanded consistently to subleading order in 1/m b . At leading order, the results are identical: 2 At subleading order, the relevant terms in Eq. (50) may be written as where the ellipses denote subleading shape functions, the effects of which cannot be reproduced by the prescription (55). We will refer to these corrections as true subleading corrections, and the terms arising from δA(ŝ H , ω) as kinematic correction. The function The second line of Eq. (59) agrees with the expansion of the results of Ref. [11] to subleading order. The first term in the third line agrees with the expansion if the m 5 b factor is also included in the convolution. Finally, the last term in Eq. (59) arises from the expansion of the quark fields in terms of HQET fields in the relation (28). Thus, we see that to be consistent to subleading order, one must include the m 5 b term in the replacement (54). However, like the subleading shape functions, the subleading effects arising from the expansion of the quark fields cannot be reproduced by this procedure. The relative sizes of each of the terms in Eq. (59) is plotted in Fig. 3, using the simple one-parameter model for f (ω) introduced in [13] f mod (ω) = 32 [11] to subleading order, while the dot-dashed line also includes the contribution from the m 5 b term. The difference between the dot-dashed and solid curves is due to the expansion of the heavy quark spinors. and with∆ = 0.1. Numerically, the most important of these corrections corresponds to smearing the m 5 b term, while the correction from expanding the quark fields is quite small. However, such large corrections may be misleading, since if they are universal they may simply be absorbed in a redefinition of the leading order shape function. Instead, one should look at the corresponding relation between the hadronic invariant mass spectrum and thē B → X s γ photon energy spectrum. One might expect that the effect of convoluting the m 5 b term would cancel in the relation, since both rates are proportional to m 5 b . However, in theB → X s γ spectrum only three powers of m b come from the kinematics, while two arise from the factor of m b in the Wilson coefficient of O 7 , and hence for this rate one should only convolute three powers of m b . This may be verified by writing the results of Ref. [16] as where once again the dots denote additional form factors, and the partonic rate is In the expression (61), the second term corresponds to smearing three powers of m b in the rate, while the third third term arises from the expansion of the quark fields. Thus, there is an incomplete cancellation of the kinematic corrections between the two spectra. IV. PHENOMENOLOGY A. TheB → X u ℓν hadronic invariant mass spectrum and theB → X s γ photon energy spectrum As discussed in the previous section, there are large kinematic corrections to the leading order results, largely due to the m 5 b term in the rate. However, these are reduced in the relation between the hadronic invariant mass spectrum and theB → X s γ photon energy spectrum. Similarly, the T-product t(x) is universal for all processes involving B meson decays (it only differentiates between B and B * , D and D * decays) and so its effects similarly cancel. Hence, it is useful to express the hadronic invariant mass spectrum in terms of the experimentally measurableB → X s γ photon energy spectrum. 
TheB → X s γ photon energy spectrum is given at tree level to subleading order in 1/m b by [16] 1 Γ s where (Note that at tree level only the operator O 7 contributes. At one loop, effects of other operators must be included [21]). Substituting this into Eq. (50) gives where A(ω,ŝ H ) and δA(ω,ŝ H ) are defined in (56) and (59), and C. Numerical results Both the Wilson coefficients and models for the shape functions depend on the b quark mass m b . While in our formulas we are implicitly using the pole mass, it is well-known that this leads to badly behaved perturbative series, and so we expect that radiative corrections to these results will be minimized if a sensible short-distance mass is used instead. The MS massm b (m b ) is well-defined, but does not lead to small perturbative corrections in B decays [23,24]. The "threshold" masses, including the 1S mass, PS mass and kinetic mass, and (72) are plotted in Fig. 6 for the three models presented in the previous section. From these figures it is clear that, at least for the particular models we have chosen, the subleading shape functions do not contribute a large uncertainty in the extraction of |V ub |, and that the dominant subleading effects are from the kinematic terms. This should not be surprising: since there are no O(Λ QCD /m b ) corrections to the total semileptonic decay rate [1], the subleading corrections must vanish when integrated over the full spectrum. Since the experimental cuts include a large fraction of the rate, the contribution to the integrated rate from the subleading corrections is correspondingly suppressed. This is evident from the plots in Fig. 6, where the fractional correction tends to zero as the cut is increased. It is useful to compare these results with analogous results for the lepton energy spectrum in semileptonic B decays, given in [17]. In this case, only ∼ 10% of the rate is included, and the subleading corrections are substantial. The analogous relation to Eq. (70) is where In Fig. 7 we plot δΓ E ℓ (E ℓ ) for m b = 4.8 GeV and m b = 4.5 GeV in the three models used in this paper. It is clear from the figures that for lepton cuts near the kinematic limit E ℓ = 2.3 GeV, the uncertainty in |V ub | from higher order shape functions is much greater for the lepton energy spectrum than from the hadronic invariant mass spectrum. V. CONCLUSIONS We have calculated the hadronic invariant mass spectrum forB → X u ℓν ℓ in terms of shape functions to subleading order. Introducing some simple models for the shape functions we have studied the dΓ/ds H spectrum numerically. Since we know little about the form of the subleading shape functions, it is difficult to estimate the corresponding theoretical uncertainty in |V ub |. However, using the spread of models as a guide, we can conclude that the largest subleading effects are proportional to the leading order shape function, and so, given a determination of the shape function fromB → X s γ decay, do not increase the theoretical uncertainty. Assuming our spread of models provides a reasonable measure of the theoretical uncertainty, we can conclude that the theoretical uncertainty in |V ub | due to higher order shape functions is at the few percent level. This is substantially less than the corresponding uncertainty in the integrated lepton energy spectrum with the current experimental cuts. This is also much less than the other sources of experimental and theoretical error in the current measurements of the integrated hadronic energy spectrum.
2019-04-14T02:36:15.963Z
2003-12-29T00:00:00.000
{ "year": 2003, "sha1": "6c0320cd761396948dd009b543694ccf7ff56f4e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0312366", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6c0320cd761396948dd009b543694ccf7ff56f4e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
214548514
pes2o/s2orc
v3-fos-license
Quality assessment of the internet portal academic unit of the higher educational establishment with the help of fuzzy sets The article focuses on the importance of education. Education is presented as an educational service regardless of the type and method of its receipt. The components of the educational service are indicated as a combination of the educational component, the managerial component and the financial and economic component of the educational institution. A scheme describing the complexity of decision making by the consumer of educational services is presented. It concerns the need for the higher educational institution to expand its position in the Internet space. The components are listed and a diagram is presented containing a list of the necessary components that make up the Internet portal of a higher education institution. The approaches to the evaluation are given, and the choice of the apparatus of fuzzy sets is justified, since it allows indicators of a different nature to be evaluated on one numerical scale. A system of indicators is proposed whose values are most significant for the consumer of educational services but cannot be independently assessed due to the complexity of the task. The approaches to assessing the quality indicators of the educational unit of the Internet portal of an educational organization, as a tool to enhance competitiveness in the market of educational services, are considered. The type of membership function was chosen, and an approach was proposed for obtaining, on a single numerical scale, estimated values of indicators of a different nature affecting the quality of the educational unit of the Internet portal.
Introduction
The significant role of education in Russia is legislatively enshrined in the National Doctrine of Education of the Russian Federation until 2025, in the Law of the Russian Federation "On Education" and the Federal Law "On Higher and Postgraduate Professional Education". Let us consider education, regardless of the type and form of obtaining it (secondary vocational, higher, full-time, correspondence, etc.), as an educational service rendered by the educational institution. We define an educational service as an aggregate of the educational component, the managerial component and the financial and economic component of an educational institution, intended to meet the needs of consumers of educational services (obtaining a profession, advanced training, retraining, etc.).
The need for evaluation
An institution of higher education functions in conditions of tough competition in the educational services market. The market of educational services has its own specifics; namely, a feature of educational services is that the consumer of educational services is actively involved in the process of receiving the service. Educational institutions enter the market, on the one hand, as producers of educational services in accordance with the requirements of legislation, competencies and labor market requirements, and on the other hand, as consumers of the workforce in terms of management and people directly involved in the educational process. It should be noted that students, on the one hand, are consumers of educational services and, on the other hand, are the result of the educational process, i.e. a product that determines the quality of the educational process.
Educational institutions, as participants in the educational services market, enter into market relations with other participants for the interaction of educational services, thus, there is competition. The product in this market is an educational service of an educational institution or vocational school, presented in the form of educational materials, tests, various programs, as well as graduates, as a result of various educational services. In addition, the market for educational services is a combination of consumer choice of a profile and type of an educational institution and the right of all citizens to receive vocational education on a competitive basis, as well as retraining and advanced training initiated by employers, employment services and their own initiative. It should be borne in mind that the consumer of educational services, when choosing an educational service, is forced to make a decision based on the variety of alternatives provided ( Figure 1). The consumer of educational services makes a decision on the choice of an educational institution, interacting with the educational market, the labor market, communicating with other consumers of educational services, determining the choice between the type of educational service, method of obtaining, security (in the form of educational content), availability of control, the ability to change educational trajectories, the presence of a way to communicate with teachers, etc. [1][2][3][4][5]. The above emphasizes once again that the competition in the market of educational services in the field of vocational education is quite high, since related services are offered by various actors -educational institutions -and consumed by different actors -legal entities and individuals. The demand, offers and requirements for the quality of educational services on the market determine the level of competitiveness of the market educational environment. An educational institution determines its position in the educational services market with its proposals, monitors an assessment of the level of competitiveness of the environment in which it is to act, and also re-evaluates this situation in the future. The consumer of educational services also assesses the quality of the educational services provided (in various ways), primarily using information and communication technologies [6][7]. Accordingly, the university needs to identify itself in the Internet space, apply the latest methods of attracting the target audience, perform continuous monitoring of the quality of the content of the Internet portal, take into account the needs of the consumer of educational services. Figure 2 offers a fragment of the structural scheme of the Internet portal of a higher educational institution, containing components, in accordance with the requirements of today. 
where PN is a normativity indicator characterizing the compliance of the site content with state standards and university regulations (the law on consumer protection, the charter of an educational institution of higher education, the presence of a license for educational activities, and the standards of the directions in which training is carried out, with curricula and work programs conforming to the accepted competences); PY is an indicator of the quality of the educational-methodical complex, which determines the quality of the materials included in it (textbooks, teaching aids, methodological recommendations for laboratory and practical classes, and for the completion of tests, term papers, course projects and degree projects); VZS is an indicator of the quality of the speed of loading pages on the site; PPO is an indicator of the availability of software, which includes the characteristics of the site with respect to transaction execution time and the list of software products corresponding to the academic disciplines and courses. Each educational institution is obliged to use only licensed software products. For distance learning, it is especially important to use computers with the necessary bandwidth of data transmission channels. PT is an indicator of the quality of the tests used in teaching the discipline under study (preliminary, intermediate, final); PD is an indicator of the quality of accessibility of online teachers (the availability of technical means to communicate with students in order to answer the questions received); PEJ is an indicator of the quality of availability and maintenance of the electronic dean's office system, which allows a student's work schedule to be tracked.
The description of the indicators makes clear that they are all of a different nature, both qualitative and quantitative [8][9][10][11]. To obtain the value of the quality indicator of the site learning block (KS) in numerical form, we apply the apparatus of the theory of fuzzy sets, which allows indicators of a different nature to be estimated on a single numerical scale. The theory of fuzzy sets imposes no rigid requirements on the form of the membership function. Proceeding from this, we propose the following approach for estimating the indicator PN (the normativity indicator). In our opinion, the existing classes of membership functions (triangular, trapezoidal, s-shaped, z-shaped, etc.) are not quite suitable here, since the indicator is very specific. Namely, the basic set X = {the law on consumer protection, the charter of an educational institution of higher education, the license to carry out educational activities, standards} is discrete and finite, and all components of the basic set are necessary for determining the directions of development of all activities of a higher educational institution. The indicator is legislative in nature, but due to the continuous change in the direction of development of higher education (the next generation of education standards and, correspondingly, changes in curricula, work programs, etc.), it may not always correspond to the required value of 1. Therefore, we consider that the membership function μ(PN) of the fuzzy set "good value" for the indicator PN may have the following form: where xi is an element of the base set X that can take only the value 0 (complete absence) or 1 (full correspondence), and n is the number of elements of the base set X.
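As an illustration of how such a normativity score could be computed, the following is a minimal Python sketch. It assumes the membership value is simply the fraction of satisfied normative requirements, i.e. the mean of the binary marks xi described above; the exact analytical form proposed by the authors may differ, and the requirement names used here are hypothetical placeholders.

```python
# Minimal sketch (assumption: mu_PN is the mean of the binary compliance marks x_i,
# i.e. the fraction of normative requirements the portal fully satisfies).
def mu_pn(compliance_marks):
    """Membership of the fuzzy set 'good value' for the normativity indicator PN.

    compliance_marks: dict mapping each normative requirement to 0 (absent) or 1 (fully met).
    Returns a value in [0, 1].
    """
    if not compliance_marks:
        raise ValueError("the base set X must not be empty")
    return sum(compliance_marks.values()) / len(compliance_marks)

# Hypothetical base set X for a university portal (names are illustrative only).
x = {
    "consumer_protection_law": 1,
    "university_charter": 1,
    "education_license": 1,
    "degree_programme_standards": 0,  # e.g. not yet updated to the newest standards
}
print(mu_pn(x))  # 0.75
```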
For other indicators — for example, the PY indicator (the indicator of the quality of the educational and methodological complex), which is itself composite and depends on the values of the indicators included in it (the quality of textbooks and methodological manuals, guidelines for classes, etc.) — the following approach is suggested. Suppose that the quality of a textbook needs to be assessed. This is extremely difficult, since passing a verdict of "low quality", "medium quality" or "high quality" on a textbook is a serious task: conformity to the discipline, the volume of the presented material, its timeliness, completeness and accessibility, the examples given, the graphic material, and so on all suggest that an opinion should be formed by experts (individuals able to pass a verdict in the given subject area) drawn from the staff of the department or from a group of experts appointed by the leadership of the higher educational institution. After the expert opinions have been received, this paper proposes the following analytical relationship. Suppose there is a basic set of teaching aids P = {p1, …, pn}, and it is required to obtain a numerical estimate of membership in the fuzzy set A, "good quality of teaching aids", using the opinions of k experts [5][6][7][8] expressed numerically as membership function values μ(p). We propose to determine the indicator of textbook quality (KYP) by formula 3: where i is the index of the teaching aid being assessed and j is the index of the expert.
In the process of online learning, situations arise that require intensive work with the site — for example, obtaining a page with a task, receiving answers to questions asked, receiving explanations of answers, and so on — and in these cases VZS (the indicator of page loading speed) becomes significant. When approaching the assessment of the VZS indicator, it becomes clear that the base set is numeric and can be taken as S = [0.01..5] s, where a time from 0.01 s to 1.5 s can be considered a "good time" for a page to open. As an analytical representation of the membership function, we propose the following dependence:
Conclusion
In this work, some indicators are highlighted that affect the quality of the training unit of the website (Internet portal) of a higher educational institution, and approaches have been proposed for developing a system for evaluating the quality of the training unit using fuzzy sets, which will increase the competitiveness of higher education institutions in the market of educational services.
2019-11-22T00:56:09.995Z
2019-10-01T00:00:00.000
{ "year": 2019, "sha1": "10d4feec8abe84031ff88d11333970acc1d980bc", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1333/8/082010", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "478a78cf2a2e98434d5ea9e1690c005d2078d561", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [ "Business", "Physics" ] }
240515948
pes2o/s2orc
v3-fos-license
Challenges to Health and Safety Compliance for Construction Projects in South East, Nigeria The aim of this research is to establish the challenges to health and safety compliance for construction projects in South East Nigeria. in form of incentives based approach will equally take care of the identified different challenges to Health and Safety Compliance in South East, Nigeria. Furthermore, the health and safety regulations should not be enforced only through inspections and sanctions, rather there should be economic incentives to encourage and motivate self- compliance. INTRODUCTION Compliance to health and safety standard in the construction industry cannot be overlooked because it is tangential to higher productivity. The development of sustainable health and safety environments is becoming one of the key issues globally. A study by [1]) in [2] aver that the institutional and regulatory framework for construction health and safety is highly fragmented and poorly implemented and call for urgent need for provision of adequate and enforceable health and safety regulations for construction operations as well as the establishment of construction industry training institutes including trade centres in different parts of Nigeria, the South East area of Nigeria is part of this. Consequently, there is a need to develop a functional framework for health and safety compliance in construction projects since [3] concluded that neither the Factories act of 1990 nor the Personal Protective Equipment (PPE) EC directive, 1992, sufficiently captured the construction sites and their operations, which indicated that construction works in Nigeria is unregulated in terms of occupational health and safety. This study also seeks to address the several calls that have been made to stakeholders in the construction industry in the South East to adopt innovative safety and health measures to improve project performance in the construction industry in Nigeria. This corroborates [4] in [5], suggestion that safety management must be thorough, and it must be applicable to all aspects of the job, from the estimating phase of the project until the last worker has left the premise at the completion of the project. This study is to determine the challenges to health and safety standards by indigenous contractors in the South East Area of Nigeria by determining the level of compliance of construction projects by professionals in the industry to existing health and safety regulations in South East Nigeria and identifying the challenges confronting health and safety compliance of construction projects in South East Nigeria. [6] posits that construction contractors and professionals in developing countries of which Nigeria is one do not prioritize H&S; it ranks low on their priority list, [7] in [8], further posits that some of these Nigerian contractors fail to take responsibility for H&S, shifting the operational risks to the workers. 90% of construction workers in a study by [9] in [8], understand the importance of risk control measures such as Personal Protective Equipment(PPE), 81% still fails to wear the PPE provided, noting issues such as discomfort, inadequacy of PPE, as excuses. In some cases, H&S regulations are mentioned in contract documentation but in practice the case is different which is applicable in some of the sites in the South east states visited during research unlike sites in the South South especially Port Harcourt where safety regulations are fully adhered to by construction workers. 
The level of compliance to existing health and safety regulations in the South East states is low due to lack of proper enforcement by existing authority. The study done by [5] on Nigeria construction sites and Anambra State in particular, examined and found out that the level of health and safety knowledge among construction workers in Anambra State was moderate, the level of health and safety compliance, in the state among the workers was low, the study further established a very weak positive correlation between the health and safety knowledge and compliance of construction workers. It further averred that health and safety knowledge and compliance alone are not enough to cause behavioural changes but safety factors like enforceable regulatory framework, management commitment etc. [8] opines that the Nigerian construction industry like other industries faces challenges which are not limited to: lack of skilled manpower, unstable prices of materials, poor implementation of policies, political instability, corruption, unethical practices but corruption is the major hindrance to the construction industry. According to [10], Nigeria, the largest African country is beleaguered with bribery and corruption, and Transparency International (2012) ranks the country 139 out of 176 in terms of the corruption perception index. Regulatory institutions and the police force have been proven to be corrupt, which prevents effective implementation of legislation in the country as the activities of authorities responsible for enforcing the laws are seen as questionable. For instance, situations where firms with poor H&S practices achieve pass marks after inspection because they have bribed the enforcement officers, it is confirmed that enforcement officers do this due for selfish financial reasons, thereby marginalizing the aims of the regulations, and promoting non-compliance. Lack of skilled personnel is also another major barrier to the effective implementation of H&S in Nigeria. In a similar vein, [11] states that an insufficient number of competent occupational health services experts hinders the development of occupational health services globally. Consequently, implementation of H&S legislation requires funds to be available for effective provision of adequate facilities and recruitment of training officers who enforce the laws. However, [12] argues that the number of technical and transport equipment is inadequate, which hinders the implementation of H&S legislation in Nigeria. The argument here is that if the ministry experiences insufficient funding, adequate enforcement will be farfetched; it may also contribute to corruption. [13] concur with the identification of insufficient funding being viewed as a barrier to the implementation of H&S legislation. Inadequate funding is another factor that can hinder the enforcement of the regulations. [14] argue that lack of resources can hinder occupational safety and health (OSH) management efforts. On the other hand, most enforcement bodies/institutions in the developing world lack the basic tools and amenities, which need funds to promote OSH regulations, educate the society, enforce the regulations, and disseminate information. The contract document as well as the tendering process in Nigeria construction industry does not highlight the importance of OSH compliance and the imposition of fine (penalties) to offenders so this hinders the enforcement. 
[7], states that compliance with OSH regulations can be standardized not only in tenders as part of contract agreements but also in the instances where it is possible to that safety records and references from previous clients can be prerequisite for tendering for contracts to indicate the OSH performance of contractors. The regulatory institutions i.e. OSH officers (Federal Ministry of Labour and Productivity 2010), trained safety officers, the key Associations, Organizations and Non-Governmental Organizations involved in OSH arrangements and issues in the country are used for political or victimization reasons and therefore appear to fulfil all righteousness, coupled with the weak legal structure. [15] argue that the weak legal structure and absence of law enforcement in Nigeria allow foreign companies to take advantage of the ineffective statutory regulation. The same can be said of the construction industry. That may also suggest that these foreign firms may not have plans to comply fully with the OSH regulations in Nigeria or have an OSH management system similar to those obtained in their countries of origin, as they intend to reduce expenses and added cost to construction outputs. Since some of these multinational construction firms have seen the loop holes and the weak and porous nature of non-compliance to the regulations, the reputable ones even cut corners. Ignorance of the benefits of compliance is another factor that hinders the enforcement of health and safety regulations in Nigeria, some wise and knowledgeable firms comply with OSH regulations to save cost thereby increasing their profit margin, but may not comply if the cost of compliance is too much when compared with the profit margin. There is also a general belief that accidents are unavoidable in the construction industry but this is not true because some accidents that happen could have been avoided if the right safety measures were taken. [15] also posit that some believe accidents are acts of God i.e. accidents occur because God allows them. As a result of the above argument, contractors may do little or nothing to prevent these accidents; they may not take safety guidelines seriously. These therefore suggest that beliefs, be it religious or superstitious often filters into work environments resulting to lack of compliance as well as enforcement with OSH regulations in the construction industry Africa wide. Some other factors like the perception of stakeholders in the industry who feel that compliance to these regulations are costly, time consuming and even unnecessary, inadequate training of staff, non-commitment of the major construction players, unemployment, neglect of human rights and moral values, not having the safety culture are all factors that can hinder the enforcement of the regulations in Nigeria. These factors no doubt affect and hinder the enforcement of the health and safety regulations, so provision of functional health and safety program is paramount. METHODOLOGY This paper evaluates the challenges of compliance to health and safety regulations for construction projects in South East Nigeria. The research sample was drawn from registered professionals in study area (South East area of Nigeria) and indigenous construction firms as shown in Table 1 and Table 2 and structured questionnaires were administered to them. South East of Nigeria is one of the six geopolitical zones in the country. The region consists of the following states; Abia, Anambra, Imo, Enugu and Ebonyi. 
The data for the study were collected from Umuahia in Abia, Awka in Anambra, Owerri in Imo, Enugu in Enugu, and Abakiliki in Ebonyi (Table 1). A sample size of 1205 was drawn from a population of 1337 for questionnaire distribution, while a total number of 1190 were validated for the study. The structured questionnaire was designed based on a 5-point likert scale. FINDINGS AND DISCUSSION Friedman's Q test was used to assess the health and safety compliance challenges of construction sites in South East Nigeria. Bribery and corruption (with mean rank of 6.53) happens to be the highest challenge to health and safety compliance. This is followed by Ignorance of the benefits of compliance, Lack of Health and Safety culture, Perception of stakeholders, Neglect of human rights and moral values, non-commitment of the major construction players, Inadequate training of staff and Lack of skilled Health and Safety personnel, Noninclusion of Health and Safety in contract document & tendering process and Inadequate funding. Thus inadequate funding is the least constraint while bribery and corruption is the greatest challenge to Health and Safety Compliance in South East, Nigeria. Cursory examination of Table 4 revealed that, of all compliance indicators, the Architects are the most complaint to HSP with count 183, Quantity Surveyors with count 132 ranked second best compliant to HSP and builders with count 110 ranked as the third best compliant with respect to HSP in South East Nigeria. On HSA, Architects with count 120 tops all other professionals in the construction sub-sector with respect to compliance level. This is followed by Quantity Surveyors with count 96 and contractors with count 52 occupying the third position with respect to HSA compliance in South East Nigeria. Quantity Surveyors with count 100, Architects with count 80 and Builders with count 67 occupy first, second and third positions respectively with respect to HST compliance in the South East. Architects with count 192, Quantity Surveyors with count 112 and Engineers with count 92 dominates the compliance level of ALE as first, second and third respectively. Architects with count 256 were the most compliant to PPE, followed by Quantity Surveyor with count 168 and Builders with count 138 occupying second and third positions respectively with respect to compliance level of PPE in South East Nigeria. With respect to FAF, Architects with count 256 has the highest level of compliance followed by Quantity Surveyors with count 156 and builders with count 138 ranking second and third in compliance to FAF in South East Nigeria. Architects with count 184 dominate compliance in WSS, followed by Quantity Surveyors with count 124 and engineers with count 107, in South East Nigeria. On RID measures, Architects with count 116 Quantity Surveyor with count 80 and Builders with count 75 rank first, second and third positions respectively with respect to compliance to RID in South East Nigeria. CONCLUSION AND RECOMMENDA-TION In conclusion, the different Challenges to Health and Safety Compliance in South East Nigeria were identified and bribery and corruption (with mean rank of 6.53) happens to be the highest challenge to Health and Safety Compliance while inadequate funding is the least constraint (with mean rank of 4.86). 
The level of compliance with H&S regulations among professionals in the construction industry is moderate, and this corroborates [5]: the level of compliance with existing health and safety regulations in the South East states is low due to the lack of proper enforcement by the existing authorities. Construction workers in the South East area of Nigeria therefore need to understand the need for regular health and safety awareness, training and monitoring, and enforcement should not rely only on inspections and sanctions; rather, there should be economic incentives to encourage and motivate self-compliance. The safety of workers, safety innovations and good equipment will lead to higher levels of output, which can be improved through the increased adoption of safety innovations such as better gear and equipment (PPE, personal protective equipment), higher quality work, a positive mindset, and a safety and health culture. The following is therefore recommended: a. Stakeholders in the construction industry (e.g. clients and professionals) should team up to provide enforceable health and safety practices and plans that are aligned with health and safety regulations in the Nigerian construction industry. b. H&S regulations should not be enforced only through inspections and sanctions; rather, there should be economic incentives to encourage and motivate self-compliance. c. There is also a need for further research on innovative approaches to health and safety in the construction industry in South East Nigeria and in Nigeria as a whole.
CONSENT
As per international or university standards, respondents' written consent has been collected and preserved by the authors.
2021-10-19T15:47:49.033Z
2021-09-21T00:00:00.000
{ "year": 2021, "sha1": "fd19354c9d86a1877e57d794268f09a303cfaaee", "oa_license": null, "oa_url": "https://journaljerr.com/index.php/JERR/article/download/17427/32447", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "b771ebb444b2413a9f009fb6950825509fadce05", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
214011972
pes2o/s2orc
v3-fos-license
A Highly Efficient Bioflocculant Produced by a Strain of Klebsiella sp. 1 . A bioflocculant-producing bacterium isolated from activated sludge, named B3, was identified as Klebsiella sp. The bioflocculant produced by B3 was named MBFB3. The main component of MBFB3 was glycoprotein analyzed by IR. The water quality of sewage and the rate of sludge dewatering were increased after treating with MBFB3, indicating the bioflocculant has a wide application prospect in the sewage biological treatment. Introduction Bioflocculant is a type of biodegradable macromolecular flocculant produced by microorganisms. With the properties of innocuity, high-efficiency, safe and biodegradation, it can become a new kind of product to replace the conventional chemical flocculants [1,2] . The study of bioflocculant has attracted considerable attention over the years. Many bioflocculant-producing microorganisms including bacteria, fungi and yeast had been screened and isolated from activated sludge, soil and sewage [3][4][5][6][7][8][9][10] . Rhodococcus erythropolis was isolated from activated sludge and the bioflocculant from it was highly effective in treating the wastewater generated from industrial processes [11] . Microorganisms such as Bacillus sp. [12] , Enterobacter sp. [13] , and Alcaligenes latus B-16 [14] were all isolated from soil samples. However, Compared to chemical flocculants, the low yields and the high costs of bioflocculants are major limitations to their practical application [15] . In the previous research, a flocculant-producing strain of Klebsiella sp. B3, with high flocculating activity, was screened from the activated sludge. In this paper, the chemical composition of the bioflocculant produced by B3 (MBFB3) was analyzed. And the application of MBFB3 in the treatment of sewage and sludge were discussed. Bioflocculant collection and purification After incubation for 24 hr, Cells were removed from the culture medium, which had been diluted with the two volumes of distilled water, by centrifugation at 8, 000 r/min for 10 min at 4℃. Thereafter, the supernatant was concentrated to 0.1 volumes with a rotary evaporator and precipitated by the addition of 4 volumes of cold anhydrous ethanol. The supernatant was incubated at 4℃ for 24 hr, and then centrifuged at 4, 000 r/min for 5 min. The precipitate was collected, washed twice using anhydrous ethanol, and then dissolved in distilled water. The supernatant was centrifugated at 5, 000 r/min for 10 min at 4℃ to remove the precipitate and then the free protein was removed by the method of Sevag. After dialyzed overnight at 4℃ in deionized water, the supernatant was precipitated by the addition of 4 volumes of cold anhydrous ethanol again. The resulting precipitate was collected by centrifugation at 10, 000 r/min for 15 min, redissolved in distilled water and then lyophilized to obtain purified bioflocculant. Characterization of the bioflocculant The functional groups of the bioflocculant were determined using a Fourier transform infrared spectrometer (Nicolet 8700, Thermo Scientific Instrument Co.U.S.A.) in the frequency range of 4000-400 cm -1 with KBr disks. Elemental analysis was achieved with an elemental analyzer (Vario ELIII, Elementar, Germany). Bioflocculant toxicity test Thirty white mice (20±2 g) at the age of 30 days were divided into three groups randomly. Each group included five males and five females. 
All the animals were kept in a room with a temperature maintained at 22±2℃, a relative humidity of 55±5%, and illumination of 12 hr light/dark cycle. The mice in the first and second group were fed with the fermented liquor as a dosage of 0.2 mL and 1 mL per day, respectively. The mice in the contrast group were fed with physiological saline. All the mice were raised for 14 days and their posture, bite and sup, movement, and weight were monitored. Effect of MBFB3 on the quality of sewage Taking the inlet sewage of Shiwuli river as the research object, the coagulation experiment was carried out at the condition of 250 rpm 40 s and 40 rpm 120 s in the six mixers. The changes of each water quality index were analyzed with the national standard method. Effect on the Dewatering Rate of Activated Sludge 100 mL sludge, 98 mL sludge with 2 mL CaCl2 (10g/L), 96 mL sludge with 2 mLCaCl2 (10g/L) and 2 mL microbial flocculant were added in three 100 mL colorimetric tubes respectively. The mixture was reversed for 20 times and then allowed to stand for 10 min. The volume and the absorbance of the upper phase at 550 nm using a spectrophotometer were measured after filtering for 5 minutes. The dewatering rate was defined and calculated as follows: dewatering rate = clear water volume / total volume Effect on the Sedimentation Rate of Activated sludge According to the above methods, the volumes of settling sludge and clean water at different sedimentation time (0, 5, 10, 15, 20, 25, 30min) were recorded in two 100 mL colorimetric tubes, respectively. The sedimentation curves of activated sludge under different conditions were drawn. The volume and the absorbance of the upper phase at 550 nm were also measured. Characterization of the bioflocculant From the IR spectrum (The IR spectrum of MBFB3 is shown in Figure 1), the characteristic chemical groups of MBFB3 were observed as followed. The absorption peaks at 3369, 2960, 2925, 2853, 1081, 1039 cm -1 were the characteristic groups of polysaccharide. The absorption peak at 3369 cm -1 was the characteristic of -OH stretching from the alcoholic hydroxyl group. The peak at 2960 cm -1 and 2925 cm -1 indicated the antisymmetric stretching vibration of CH3 and CH2, respectively. The peak at 2853 cm -1 was an indication of the symmetry stretching vibration of CH2. Two weak bands at 1081 cm -1 and 1039 cm -1 indicated C-OH bands of saccharide. The absorption peaks at 1648, 1533, 1239 cm -1 were the characteristic groups of protein. The peak at 1648 cm -1 was an indication of the stretching vibration of C=O from tertiary amide. The peak at 1533 and 1239 cm -1 was attributed to the coupling of N-H bending vibration and C-N stretching vibration. In summary, the major component of the MBFB3 should be glycoprotein. The elemental analysis of MBFB3 revealed that the weight fractions of the elements C, H and N were 39.21%, 6.34% and 7.87%, respectively. MBFB3 toxicity test When white mice were fed with the fermented liquor as a dosage for 14 days, no obvious weight differences were observed between treated and untreated mice. They maintained normal posture, bite and movement, indicating that MBFB3 has no acute toxicity, at least for white mice. Effect of MBFB3 on the treatment of sewage The quality of sewage was changed greatly after coagulation and sedimentation with MBFB3 in the six mixers. The results (table1) indicated that MBFB3 could remove the main pollutants in the sewage with the process of coagulation and precipitation. 
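To make the reported measures concrete, the following is a small Python sketch of how the dewatering rate defined above, and a simple removal efficiency for a water-quality index, could be computed; the numerical values are hypothetical placeholders rather than the measured data of Table 1, and the removal-efficiency formula is a standard definition assumed here, not one stated by the authors.

```python
# Minimal sketch of the measures discussed above (values below are illustrative, not measured data).

def dewatering_rate(clear_water_ml: float, total_ml: float) -> float:
    """Dewatering rate = clear water volume / total volume (as defined above)."""
    return clear_water_ml / total_ml

def removal_efficiency(before: float, after: float) -> float:
    """Fractional removal of a pollutant index (e.g. COD or SS) after coagulation (assumed definition)."""
    return (before - after) / before

# Hypothetical example: 35 mL of clear water released from a 100 mL sludge sample,
# and a COD reading dropping from 250 mg/L to 150 mg/L after treatment.
print(f"dewatering rate: {dewatering_rate(35, 100):.0%}")
print(f"COD removal:     {removal_efficiency(250, 150):.0%}")
```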
The clarity of the upper phase and the dewatering rate of the activated sludge were both improved after treatment with MBFB3 (Figure 2). The absorbance (OD550) of the upper phase decreased from 0.39 to 0.29, and the sludge dewatering rate increased from 20% to 35%. The results showed that MBFB3 can flocculate the floating solids in the sludge and contribute to the dewatering of activated sludge. Kinetic study on dewatering of activated sludge The volume of clear water at different times and the clarity of the supernatant (OD550) were recorded after the activated sludge was treated with the microbial flocculant, and the dewatering rate was calculated as well. After the activated sludge was treated with MBFB3, the sedimentation rate of the sludge was accelerated and the volume of the sludge decreased from 100 mL to 58 mL, which was lower than that of the control group during the initial 0-15 min (Figures 2 and 3). The OD550 of the supernatant decreased continuously with time, and the clarity improved by 48.15% compared with the blank (Figure 4). The results indicated that MBFB3 promoted both the flocculation and the dewatering of activated sludge. CONCLUSIONS A flocculant-producing strain of Klebsiella sp., B3, with high flocculating activity was screened from activated sludge. The chemical composition and structure of the bioflocculant produced by B3 (MBFB3) were analyzed. The purified product was obtained by ethanol precipitation followed by chloroform-butyl alcohol (Sevag) treatment to remove unbound protein. The composition of the flocculant was qualitatively determined as glycoprotein by elemental analysis and IR. The quality of the sewage, in terms of COD, ammonia nitrogen, SS and chromaticity, improved greatly after coagulation and sedimentation with MBFB3. The clarity of the supernatant and the dewatering rate of the sludge both increased after treatment with MBFB3, indicating that the flocculant can flocculate the suspended solids in the settled sludge. MBFB3 therefore has broad application prospects in biological sewage treatment.
CHARACTERISTICS IMPORTANT FOR ORGANIC BREEDING OF VEGETABLE CROPS The Institute for Vegetable Crops possesses a rich germplasm collection of vegetables, utilized as a gene resource for breeding specific traits. Onion and garlic breeding programs are based on improvement of chemical composition. There are programs for the identification and use of genotypes characterized by high tolerance to economically important diseases. Special attention is paid to breeding cucumber and tomato lines tolerant to late blight. As a result, a late blight tolerant pickling cucumber line, as well as late blight tolerant tomato lines and hybrids, have been realized. Research on bean drought stress tolerance has been initiated. A lettuce breeding program including research on spontaneous flora has been started, and interspecies hybrids were examined as a possible source of genetic variability. It is important to have access to a broad range of vegetable genotypes in order to meet the needs of organic agricultural production. Appreciating the concept of sustainable agriculture, it is important to introduce organic agriculture programs in breeding institutions. INTRODUCTION The concept of sustainable agriculture implies the application of acquired knowledge in order to produce healthy food (LAZIC, 2008). The regulation of the World Health Organization, Codex Alimentarius, requires controlling the critical points of production and quality, as well as using natural resources responsibly. The ultimate aim is viable systems of production that are socially justified, economically payable and productive, and that at the same time protect health, improve community and animal welfare, and provide a safe environment. Vegetable production based on sustainable agriculture is of interest to: 1. small-scale, medium and large producers, through added value and better conditions of sale at the market; 2. consumers, through better quality and healthy food produced in a sustainable manner; 3. economy and industry, through greater profit from better products; 4. everybody, through a higher quality of the environment. Future plant selection will tend towards high production, better quality, optimization of production with low inputs of fertilizers and pesticides, and tolerance to stresses and diseases (VAN WEAS, 2003). Following the principles of sustainable agricultural production, the manner of work for each production system is defined with regard to the specific qualities of the agrosystem, including the conditions: soil, water, agricultural production, crop protection, cattle breeding, cattle health, cattle welfare, harvesting, processing and storing, and the welfare, health and safety of the population, animals and the landscape.
Therefore, the sphere of interest for the vegetable selection researchers is in the field of their contribution to the agricultural production within the scope of selecting the varieties, regarding the needs of the market, and in accordance with the environmental conditions, available resources; all these in order to preserve the fertility of the soil, prevent the development of the weeds, pests and diseases.Crops grown in organic system must have familiar seed origin, so called certified seed (BOLETIN OFICIAL 1999).Markets for this kind of vegetables are in expansion. However, there are no necessary quantities of seed for organic production (THOMPSON, 2000). Agricultural Institutes in Serbia pay special attention to breeding of plant species and monitor the progress in this area.However, the finalization of these projects requires greater Government support.We are facing the lack of certified seed for organic production on the market today.Producers showed special interest for certain vegetable species such as peas, beans, green beans, onions, garlic, cucumbers, tomatoes and lettuce.Also, organic seed is crucial for scientific researches in selection, seed production and organic production of vegetables.Organic seed provides all necessary inputs for entire organic production (VALEMA, 2004). The aim of this research was the characterization of organic vegetable breeding of certain species and seed production through quality, yield and resistance to plant pathogens in the agro-ecological conditions in Serbia.Compared to the methods available for conventional plant breeding, there are some limitations on the choice of the method for organic breeding.These methods can be classified as: permitted methods -intraspecific crossing, backcrossing, mass selection, individual selection, forbidden methods -interspecific crossing, protoplast fusion, genetic modification, induced mutations and conditionally permitted -use of hybrid varieties, somatic embryogenesis, meristem culture, and in vitro micropropagation anther culture.1.The application of genetically modified organisms or their derivatives is banned in the organic production.Such a ban is included in our Organic Production and Organic Products Act ("Official Gazette of the RS", 62/06): "Genetically modified organisms and their derivatives cannot be used in organic production."(BERENJI, 2005).2. Of the modern biotechnology methods, only the method of indirect selection via molecular markers is permitted because this method does not affect the change of the genetic plant construction.3. The compromise for hybrid varieties that can be used in organic production is accepted, if they are fertile and if the sterility in the process of hybrid seed production is not chemically caused.Most probably, in organic production will be developed the three-line (TC) and four-line (DC), and not the two-line hybrids that dominate in the conventional production. 4. The application of the cytoplasmic male sterile products is banned, except if the fertility is permanently restored and it continues in further generations of propagation.5. Neither direct nor indirect application of the genetic material that contains induced mutations is permitted.6.The application of silver nitrate, silver thiosulfate, synthetic hormones, antibiotics and colchicines is banned in organic plant breeding (BERENJI, 2008). 
Cucumber selection Cucumbers belong to leading species in the vegetable production in Yugoslavia.Continuous dissemination of its breeding range has been possible due to multiple usage and wide agro-ecological adaptability.Besides the fact that fruits are used in nutrition, both fresh and processed, it is also a raw material of the pharmaceutical industry. However, breeding of this crop has been intensified only in recent 30 years in Serbia.Tendency of spreading production and increasing the use of cucumber in Serbian nutrition requires a wide range of varieties for pickling and for salad.Therefore, it is necessary to increase the number of cultivars for market and for consumers.Current trend of cucumber breeding in Serbia is heading towards the selection of genotypes with better quantitative characteristics.In the first place, these are varieties with non-bitter fruits, straight shape and with high yield.The exploiting of parthenocarpy of salad type of cucumber is also a current trend.New parthenocarp cucumber genotypes were intended for production in greenhouses. Cucumber breeding to resistance to diseases is of great importance (PAVLOVIC et al., 2002).Downy (Pseudoperonospora cubensis (Berk and Curt.)Rostow) and Cucumber Mosaic Virus cause the greatest problems in cucumber growing in Serbia as well as in other parts of the world (METWALLY and WEHNER, 1990). Finding and identifying the selection material for these, economically most important diseases is of the greatest priority.The use of chemicals in cucumber protection of downy has reached emergency proportions.This too can be overcome only by breeding resistant genotypes in organic systems of production.Today, the Institute for Vegetable Crops has cucumber lines that are highly tolerant to downy mildew in conditions of spontaneous infection. The Institute for Vegetable Crops in Smederevska Palanka possesses a collection of cucumber germplasm consisted of over 100 divergent genotypes that are included in our breeding programs.Pickling cucumber hybrid characterized with high tolerance to this plant pathogen was created using experimental crossings.Based on the results from the trial at the Institute for Vegetable Crops, the pickling cucumber named Sirano F 1 is marked as the pickling cucumber hybrid that is the most resistant to blight (PAVLOVIC et al., 2006). Bulb vegetables selection Garlic draws more and more attention as an industrial plant and its production is perspective.Garlic is characterized by high contents of dry matter, proteins, fats, carbohydrates, vitamin C, thiamine, and B 6 .Furthermore, garlic is rich with minerals such as: Mg, Zn, Mn, Cu, Mo, and Se.It also possesses high energy values that can be paired with members of Fabaceae family.Through the new breeding programs, the chemical structure of garlic is emphasized.Based on detailed chemical analysis the best ecotypes are selected.The aim of the research was to create the ecotypes with the most favourable chemical composition, which would give valuable contribution to the process of utilization of this vegetable variety (PAVLOVIC et al., 2003a). 
High biological value of onion is the result of its specific chemical composition dominated by sugars, vitamin C and characteristic ethereal oil.According to the average contents of vitamin C in bulbs (32.46 to 44.03 mg %), onion is a significant natural source of this vitamin.In our research higher genotypic variance and the phenotype variation coefficient is found, as compared to the ecological variance and the genotype variation coefficient.This suggests an important role of the genetic factors in the expression of this trait (PAVLOVIC et al., 2003b).It is confirmed with the broad-sense heritability values (0.75 and 0.76%).For dehydration in food industry only the onion genotypes with high percentage of dry matter contents are used.In many countries today, there are selection programs for the development of genotypes only for drying purposes.Variability of the chemical composition of onion in the field caused by climatic and soil conditions, as well as by the agricultural techniques is characteristic.Dry matter percentage in particular varieties suggests the part of the total bulb mass that can be used in the food production industry.In our research a high genotypic variability of the dry matter contents is found (PAVLOVIC et al., 2007). Tomato selection Tomato represents one of the most important sources of lycopene.It also contents a high level of other carotenoids (β-carotene), vitamins (vitamin C), minerals, flavonoids and phenolic acid.Antioxidant effects of the substances affect on the reduction of the possibility for human to contract diseases; the substances affect on the proliferation of cancerous cells; and act as preventive for cardiovascular diseases.Nutritional estimation of the optimal antioxidant quantity through daily consummation of tomato is 9-18 mg of carotenoids, 175-400 mg of vitamin C, 3-4 mg of vitamin E, 50 mg of flavonoids, 0.4 mg of folate and 25-30 mg of lycopene.Tomato is consider as a rich source (in 100 g) of: vitamin C (20-29 mg), carotene (0.2-2.3 mg) and phenolic acid (1-2000 mg), regarding to the total antioxidant contents.There are also small portions of vitamin E (0.49 mg), flavonoids (0.5-5 mg) and traces of selenium (0.5-10 mg), copper (90 mg) and zinc (240 µg).It is recommended to consume 400 g per day of fresh tomato or other tomato products in five portions.Lycopene is one of the carotenoids that give naturally color to the tomato fruit and rank among the strongest antioxidants among all the other carotenoids.The intensive selection for tomato lycopene content has been performed.With adequate selection of the lines that will be used in hybrid development, it is expected that in process of the gene recombination we will get the most favorable nutritive contents ratio -especially antioxidants, with a particular emphasis on a lycopene (ZDRAVKOVIC et al., 2002a(ZDRAVKOVIC et al., , 2003c(ZDRAVKOVIC et al., , 2007)). 
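The broad-sense heritability values quoted above for onion vitamin C content (0.75 and 0.76) follow from the standard partition of phenotypic variance into genotypic and environmental components. A minimal Python sketch of that calculation is given below; the variance numbers are hypothetical, chosen only to illustrate the formula, and are not data from the study.

def broad_sense_heritability(genotypic_variance, environmental_variance):
    # H^2 = V_G / (V_G + V_E): share of phenotypic variance explained by genotype.
    phenotypic_variance = genotypic_variance + environmental_variance
    return genotypic_variance / phenotypic_variance

# Hypothetical variance components for vitamin C content.
v_g, v_e = 12.0, 4.0
print(f"H^2 = {broad_sense_heritability(v_g, v_e):.2f}")  # -> 0.75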
Late blight, which is caused by the fungus Phytophtora infestans, emerges in tomato crops almost every year and causes considerable economic damages.Fungicide control of this parasite is not always effective and satisfactorily.The solution to this problem is in growing less sensitive or more resistant tomato varieties or hybrids.The research on tomato resistance to Phytophtora infestans is very complex due to high variability of the pathogen physiological races.Tomato genotypes that are the carriers of Ph-2 gene of resistance to this parasite were crossed with tomato genotypes with good production characteristics (yield and fruit quality) but more susceptible to this parasite.Successfully were selected tomato lines and hybrids that expressed a higher level of resistance than their parents (MIJATOVIC et al., 2007; ZDRAVKOVIC et al., 2004). Special projects were launched aimed to the effect of partial drying part of a root (PRD-treatment) on the growth of tomato plant, photosynthesis, transpiration, water potential, peroxidase activation of the cell wall, yield, sugar contents, lycopene contents, mineral contents and dry matter content.This treatment causes the increase of peroxidase activity and sugar contents in mature tomato fruits (STIKIC et al., 2003). Tomato fruit firmness can be achieved by entering a gene for delayed maturation (rin, nor, alc) in the selection of tomatoes.Therefore, the process stops the ripening on a certain level of maturity.As a result of interrupted maturation process, satisfactory fruit firmness occurs, but with slightly less sugar, lycopene, beta carotene, etc. (CVIKIĆ et al., 2000, ZDRAVKOVIC et al. 2008).Breeding for this trait in this manner are in opposite to the selection requirements for organic production.Fruits with greater firmness can be selected by accumulating firmness traits (ZDRAVKOVIC et al., 2007(ZDRAVKOVIC et al., , 2008)).Genotypes with "fruit firmness" gene cause long shelf life of mature tomato.(ZDRAVKOVIC et al., 2003, MARKOVIC et al., 2008). Investigation of inheritance of yield and yield components in all plant species and in tomato is very important.Gene expression effects nutritional and quality characteristics and therefore the selection may lead to its increase and decrease (ZDRAVKOVIC et al., 2000, ATANASOVA andGEORGIEV, 2009). The purpose of breeding crops with specific features designed for organic or other sustainable production requires researches in the field of seed production, so the results could be available to producers through new varieties.(VAN WEAS, 2003).Important features of research must be prices of seeds and final products.These aspects require a comparative analysis of conventional production and integrated crop management and organic methods (BRUMFIELD et al. 2000). Dry beans selection The project of breeding dry beans resistant to stressful conditions of drought has been set out at the Institute for Vegetable Crops, and aim of the project was controlling the negative effects of a high temperature and low rainfall.In our research 62 genotypes were used.Pure lines suitable for further selection have been chosen.The number of nodes in the pure lines is not dependent on the water quantity for irrigation, which includes them in the next breeding phase for stressful conditions (ZDRAVKOVIC M. et al., 2004a;ZDRAVKOVIC M. 
et al., 2004b).The effects of two microbiological fertilizers (SOJ 1 and SOJ 2) have been investigated on dry beans, in the variation with and without additional nutrition with mineral fertilizers (KAN).The first pod height was recorded.Microbiological fertilizers do not affect on this trait, whereas they exhibited a significant effect on the bean mass per plant (JARAK et al., 2007). Lettuce selection Aimed at creating lettuce cultivars (Lactuca sativa L.) resistant to pathogens, the causal agents of plant diseases, and especially to virus diseases, research was carried out on the spontaneous flora in the locality of Pomoravlje and Sumadija where the genotypes of the species Lactuca sp. that are resistant to causal agents of virus diseases could be found.The interspecies hybrids Lactuca virosa L. x Lactuca sativa L., L. saligna L. x L. sativa L., were investigated as possible sources of genetic variability.L. saligna L. and L. virosa L. represent only a part of the population related to L. sativa L. Wild varieties of this species belong to the weed flora.After crossing, viable achenes were obtained only in the crossing L. sativa L. x L. saligna L. At initial crossings two populations of L. saligna L. were used, one with and the other without anthocyanin.The seedlings of L. saligna L. without anthocyanin were lost after brought out on the field.In the process of the selection of F 1 generation, 31 plants emerged.After transplantation on the field, only 19 plants survived.In 9 plants the fertility was provoked by colchicine, but the percentage of fertile achenes was low as compared to the number of achenes that were not viable.By collecting more genotypes of the species Lactuca sp. from spontaneous flora in the locality of Pomoravlje and Sumadija and investigating the possibilities of crossing with the cultivated lettuce (Lactuca sativa L.), the selection programs of this kind would be improved.Eventually, the final aim is to obtain the cultivar with the built-in genes of resistance to virus diseases and acceptable morphological characteristics (ZDRAVKOVIC et al., 2003b). Our investigation was based on the problem of anthocyanin and vitamin C contents inheritance in F 1 progeny of lettuce.It was assumed that progenies with increased contents of these substances could be obtained.Diallel crossing of eight lettuce genotypes of different anthocyanin and vitamin C contents, classified into three varieties was performed.Parental and F 1 generations were investigated comparatively, and their mode of inheritance was determined.Concerning the inheritance of anthocyanin, dominant genes prevailed, and a higher content of this substance was succeeded in F 1 generation.Concerning the inheritance of vitamin C content, dominance mode of inheritance was recorded, when it was compared to the parents with the lower vitamin C content.Apart from the dominance mode of inheritance, significant additive gene effects in inheriting vitamin C content was also recorded (ZDRAVKOVIC et al. 2002a;ZDRAVKOVIC et al. 2002b).CONCLUSION Irreplaceability of plants in human diet as well as negative urban and industrial development impact on ecosystems imposes the need for planning the breeding programs for specific purposes. 
With respect to the concept of viable agriculture, and giving preference to biodiversity, intensifying and supporting such breeding programs at the Institutes becomes a necessity. The tendency towards a new concept of agricultural production and greater consumption of healthy plants and their derivatives in the human diet requires the existence of a broad range of vegetable varieties. It is of considerable importance to change the way agricultural production is organized, as well as to pass new laws that would support and facilitate the introduction of the concept of viable development of the agro-system. The Institute for Vegetable Crops in Smederevska Palanka possesses a large collection of germplasm of vegetable varieties, which enables selecting donors with favourable genes for specific traits. This represents the basis for planning and modeling the ideotypes of vegetable varieties.
Capital Controls: Mud in the Wheels of Market Discipline Widespread support for capital account liberalization in emerging markets has recently shifted to skepticism and even support for capital controls in certain circumstances. This sea-change in attitudes has been bolstered by the inconclusive macroeconomic evidence on the benefits of capital account liberalization. There are several compelling reasons why it is difficult to measure the aggregate impact of capital controls in very different countries. Instead, a new and more promising approach is more detailed microeconomic studies of how capital controls have generated specific distortions in individual countries. Several recent papers have used this approach and examined very different aspects of capital controls - from their impact on crony capitalism in Malaysia and on financing constraints in Chile, to their impact on US multinational behavior and the efficiency of stock market pricing. Each of these diverse studies finds a consistent result: capital controls have significant economic costs and lead to a misallocation of resources. This new microeconomic evidence suggests that capital controls are not just "sand", but rather "mud in the wheels" of market discipline. Introduction In the early and mid-1990's, most international economists and Washington-based policymakers supported rapid capital account liberalization for emerging markets. Liberalization was expected to have widespread benefits. For example, it was predicted to increase capital inflows, thereby financing investment and raising growth. It could facilitate the diversification of risk, thereby reducing volatility in consumption and income. Liberalization could also increase market discipline, thereby leading to a more efficient allocation of capital and higher productivity growth. Many countries followed this advice and removed their capital account restrictions. The initial results were generally positive -increased capital inflows, investment booms, and impressive growth performance. In the last decade, however, this positive view of capital account liberalization has been widely questioned. Several countries that had recently removed capital account restrictions, such as Mexico, Thailand, Korea, Russia, and Argentina, experienced severe financial crises. These experiences, especially when combined with the recent backlash against globalization, caused many people to question the benefits of unrestricted capital flows in emerging markets. Does capital account liberalization lead to inefficient investment and asset market bubbles? Could controls on capital flows have prevented these crises, or at least reduced their virulence? Even the IMF, formerly the bastion of capital market liberalization, has cautiously begun to support certain capital controls, especially taxes on capital inflows. 1 These concerns have been bolstered by the inconclusive macroeconomic evidence on the benefits of capital account liberalization and the costs of capital controls. Although there is an 1 For example, Fischer (2002), the former First Deputy Managing Director of the IMF, writes: "The IMF has cautiously supported the use of market-based capital inflow controls, Chilean style." 
Eduardo Aninat, a Deputy Managing Director of the IMF, recently stated: "…in some circumstances, these controls on capital inflows can play a role in reducing vulnerability created by short-term flows…" (Druckerman, 2002) extensive literature on this subject (discussed in more detail in Section 2), the lack of agreement across studies, methodologies, and data sources is remarkable. In a recent survey of capital account liberalization, Eichengreen (2002) summarizes his conclusions: "Capital account liberalization, it is fair to say, remains one of the most controversial and least understood policies of our day...empirical analysis has failed to yield conclusive results." In a recent review of the empirical evidence on globalization, Prasad et al. (2003) conclude: "…if financial integration has a positive effect on growth, there is as yet no clear and robust empirical proof that the effect is quantitatively significant." Many skeptics interpret these inconclusive macroeconomic results as evidence that the theoretical benefits of capital account liberalization may be elusive, possibly due to a range of market imperfections. A closer look at individual countries that have removed their capital controls, however, suggests that capital account liberalization may actually have substantial benefits, but these benefits are extremely difficult to measure at the macroeconomic level (especially in a cross-country framework). Most countries that remove their capital controls simultaneously undertake a range of additional reforms and undergo widespread structural changes, so that it is extremely difficult to isolate the specific impact of removing the controls. Accurately measuring one of the most important benefits of capital account liberalizationincreased competition and market discipline that leads to a more efficient allocation of capital and higher productivity growth-is extremely complicated. Moreover, the benefits of removing capital controls may vary substantially across countries based on factors such as: their institutional development, the strength and depth of their financial system, and the quality of their corporate governance. Instead, a potentially more promising way to assess the effect of capital account liberalization may be to focus on more detailed microeconomic evidence on how capital controls have generated specific distortions in individual countries. Several recent studies have adopted this approach, with much more conclusive results than the macroeconomic, cross-country studies. Johnson and Mitton (2002) show that the Malaysian capital controls provided a shelter for government cronyism and reduced market discipline. Forbes (2003) shows that the Chilean capital controls made it more difficult for smaller firms to obtain financing for productive investment. Desai et al. (2002) show that capital controls reduced the amount of foreign direct investment by U.S. multinationals and created additional distortions as U.S. companies attempted to evade the controls. Li et al. (2004) show that capital controls reduced market discipline and lowered the efficiency of stock market prices. Although this literature examining the microeconomic effects of capital controls is only its infancy, the combination of results is compelling. These papers use diverse methodologies to examine very different aspects of capital controls in a range of countries and time periods, yet each finds a consistent result; capital controls have significant economic costs and lead to a misallocation of resources. 
Even if it is difficult to capture these effects at the macroeconomic level during periods when countries undergo rapid structural reform, this misallocation of resources is bound to reduce productivity and potential growth rates. Tobin (1978) argued that a tax on currency transactions would act as "sand in the wheels" of international financial markets. In comparison, given this new microeconomic evidence that capital controls may lead to a misallocation of capital through a number of different channels, a more accurate rendition may be that capital controls are not just "sand", but rather "mud in the wheels of market discipline." The remainder of this paper is as follows. Section 2 briefly reviews the inconclusive macroeconomic, empirical evidence on capital controls. Section 3 discusses, in more detail, several recent microeconomic studies showing how capital controls can cause "mud in the wheels of market discipline." Section 4 weighs these costs of capital controls relative to the potential benefit of reduced vulnerability to crises. Section 5 concludes. The Inconclusive Macroeconomic Evidence on Capital Controls The theoretical literature suggests that there are a number of potential benefits from capital account liberalization. Prasad et al. (2003) survey this literature and describe four direct benefits: the augmentation of domestic savings, a reduction in the cost of capital through better global allocation of risk, the transfer of technological and managerial know-how, and the stimulation of domestic financial sector development. It also describes three indirect benefits: the promotion of specialization, the commitment to better economic policies, and a signaling of friendlier policies for foreign investment in the future. Capital account liberalization, however, can also have important costs. For example, by increasing market discipline and integration with global financial markets, removing capital controls can increase a country's vulnerability to banking and currency crises. As seen in the 1990's, these crises can be severe and have substantial economic and social costs. The macroeconomic literature, however, has had limited empirical success in consistently showing that capital account liberalization has any of these effects. 2 The most common testing approach has been to evaluate if reducing capital controls is correlated with higher economic growth. The contrasting results of the two most cited studies in this literature capture the general inconsistency. Rodrik (1998) finds no significant relationship between capital account openness and growth, while Quinn (1997) uses a different measure of capital account openness and finds a significant positive relationship. A recent evaluation of this literature by Prasad et al. (2003) yields the same inconclusive results. Figure 1 replicates a key graph of the paper. It shows no significant relationship between financial openness and the growth in real per capita income across countries-even after controlling for a series of the standard variables in this literature. 3 In fact, of the 14 recent studies on this subject surveyed in Prasad et al. (2003), 3 find a positive effect of financial integration on growth, 4 find no effect, and 7 find mixed results. Figure 1 Conditional Relationship Between Financial Openness and Growth, 1982-97 Notes: Growth is measured by growth in real per capita GDP. 
Conditioning variables are: initial income, initial schooling, average investment/GDP, political instability, and regional dummies Source: Prasad et al. (2003) 3 The control variables include: initial income, initial schooling, average investment/GDP, political instability, and regional dummies. Second, different types of capital flows and capital controls may have different effects. For example, recent work suggests that the benefits of foreign direct investment to growth may be greater than those of portfolio flows. Controls on capital inflows may be less harmful since they can be viewed as a form of prudential regulation, while controls on capital outflows may be interpreted as a lack of government commitment to sound policies and/or a lack of attractive domestic investment opportunities. Third, the impact of removing capital controls could depend on a range of other, hard-tomeasure factors. For example, recent work suggests that countries are more likely to benefit from capital account liberalization if they have stronger institutions, better corporate governance, and more effective prudential regulation. Fourth, the sequence in which different types of capital controls are removed may determine the aggregate impact. For example, lifting restrictions on offshore bank borrowing before freeing other sectors of the capital account may increase the vulnerability of a country's banking system (as seen in Korea in the mid-1990's). Finally, there may be "threshold effects" that are difficult to capture in linear regressions. More specifically, countries may need to attain a certain level of financial market integration or of overall economic development before attaining substantial benefits from lifting capital controls. Despite these imposing challenges to measuring the cross-country impact of capital account liberalization, several papers have focused on narrower aspects of this issue and generated more conclusive and promising results. For example, recent work shows that stock market liberalizations in emerging markets lead to increased investment and a lower cost of capital. 4 Other recent work suggests that the impact of capital account liberalization is closely related to the quality of governance and institutions. 5 Given the numerous channels by which capital account liberalization could affect an economy, it is not surprising that focusing on particular aspects of this relationship can yield more conclusive results. Further narrowing the investigation to specific countries and experiences with capital controls may be even more productive. Mud in the Wheels: Microeconomic Evidence of the Distortions from Capital Controls Given these myriad difficulties in assessing the impact of capital account liberalization, potentially even more promising than the approaches used in these cross-country studies is to focus on the microeconomic impact within specific countries. Although case studies inherently have the shortcoming that it is difficult to control for other events that occur simultaneously, this approach can avoid many of the problems (discussed above) with the macroeconomic, crosscountry literature. Moreover, this approach can facilitate a much more detailed measurement of exactly how capital account liberalization affects the allocation of resources and creates specific market distortions. The next four subsections discuss recent studies that have used very different methodologies to examine specific microeconomic effects of capital controls. 
Despite the range of experiences and approaches, each clearly identifies a significant cost of capital controls. The accumulation of these costs and distortions suggests that capital controls may act as "mud in the wheels of market discipline." 1. Protection for Cronyism in Malaysia In September of 1998, soon after the peak of the Asian crisis, Malaysia imposed controls on capital outflows. Some predicted dire effects, such as scaring foreigners from investing and doing business in Malaysia for years. Others predicted that the capital controls would have the benefit of giving the Malaysian government "breathing room" to enact reforms that would facilitate recovery and raise long-run growth. A few years later, two papers (presented at the same conference) used macroeconomic data to assess the impact of these capital controls. Kaplan and Rodrik (2002) argued that the capital controls had positive macroeconomic effects, while Dornbusch (2002) argued that they had no significant effect. These contradictory views of one specific country experience with capital controls mirrors the disagreements in the broader macroeconomic literature. Johnson and Mitton (2002), however, use a very different, microeconomic approach to analyze the impact of the Malaysian capital controls. It examines how the Asian crisis and the announcement of the capital controls affected stock returns for individual Malaysian companies. The analysis splits the sample of firms into those with political connections to senior government officials (such as Prime Minister Mahatir), and those without political connections. The paper finds that in the initial phase of the crisis, before the capital controls were enacted, politicallyconnected firms experienced a greater loss in market value than firms without political connections. When the controls were put into place, politically-connected firms experienced a relatively greater increase in market value. These results suggest that the Asian crisis initially increased financial pressures on Malaysian firms, improving market discipline and reducing the ability of governments to provide subsidies for favored firms. When the capital controls were put into place, however, investors expected that the Malaysian government would have more freedom to help favored firms and engage in cronyism. In other words, the capital controls reduced market discipline and provided a shelter for government cronyism. Moreover, the empirical estimates in Johnson and Mitton (2002) suggest that this cost of the Malaysian capital controls was substantial. In the initial phase of the crisis (from July 1997 to Increased Financial Constraints for Smaller, Publicly-Traded Firms in Chile Another well-known example of capital controls is the encaje, a tax on capital inflows adopted by Chile from 1991 through 1998. An extensive literature has examined the macroeconomic effect of these capital controls, with a range of results. 6 For example, some papers argue that the controls reduced country vulnerability to external shocks, while others claim that they had no effect on vulnerability. There is somewhat more agreement (albeit not unanimous) that the controls lengthened the maturity of capital inflows, with no significant effect on their volume. Assessing the macroeconomic impact of the capital controls is complicated by Chile's rapid growth, ambitious economic reforms, and sound policy environment during this period. 
Despite these difficulties, however, there is fairly widespread agreement that the encaje generated some small economic benefits for Chile, with minimal economic costs. This assessment has prompted a number of countries to consider enacting similar controls on capital inflows. A closer look at the microeconomic evidence, and especially at how these capital controls affected different types of firms, however, suggests that this assessment is overly optimistic. Forbes (2003) finds that the encaje significantly increased financial constraints for smaller, publicly-traded companies, but not for larger firms. In other words, the capital controls made it relatively more difficult and expensive for smaller companies to raise financing. Figure 2 (replicated from the paper) shows investment growth for publicly-traded Chilean firms around the time of the capital controls, without controlling for all the variables in the more formal empirical analysis. Investment growth was higher for smaller firms both before and after the encaje (which is a standard result in the finance literature). During the period that the capital controls were in place, however, investment growth plummeted for smaller companies and was generally lower than for large companies. Therefore, the results in Forbes (2003) suggest that capital controls may have created a number of microeconomic distortions in Chile, such as making it more difficult for smaller companies to finance productive investment. This misallocation of resources undoubtedly reduced productivity and growth in Chile. These costs of capital controls could be particularly important for emerging markets, in which small and new firms are often important sources of job creation and economic growth. Figure 2 Growth in Investment/Capital Ratios for Chilean Firms, 1988-2001, smaller firms versus large firms. Source: Forbes (2003). 7 Recent work by Gallego and Hernández (2002) also shows that the Chilean capital controls affected a range of firm-level variables, with differential effects on small and large companies. Reduced Investment and Distortionary Behavior by U.S. Multinationals While the previous two subsections discuss the microeconomic effects of capital controls on domestic firms, another potentially important impact of capital controls is on foreign investment. Theory suggests that foreign investment can bring numerous benefits to host countries, such as increasing the capital stock and transferring technology and skills, all of which would raise investment, productivity, and growth. Desai et al. (2002) attempt to measure the effect of capital account liberalization on foreign direct investment by examining the behavior of U.S. multinational firms in countries with and without capital controls. The paper shows that capital controls distort the asset allocation, financing, transfer pricing, and dividend policies of U.S. multinationals. For example, capital controls in host countries reduce investment by multinationals by roughly 20 percent, and U.S. firms operating in countries with capital controls tend to overinvest in physical assets and underinvest (by as much as 40 percent) in financial assets. The paper also shows that when countries liberalize their capital accounts, these distortions tend to be reversed. For example, capital account liberalization is associated with large increases in multinational investment, particularly in local financial assets. Moreover, Desai et al. (2002) show that capital controls can cause U.S. multinational affiliates to distort prices in order to circumvent the controls.
More specifically, foreign affiliates adjust the prices at which they "trade" with their U.S. parents so that they run "trade deficits" about 4 to 6 percent larger than in countries without capital controls. The magnitude of this distortion is comparable to that which would occur if the foreign country raised taxes by about 20 to 50 percent. Therefore, this paper suggests that not only will capital controls distort the amount and type of foreign direct investment available to host countries, but they can also generate additional distortions as companies attempt to evade the controls and extract profits. Reduced Efficiency in Stock Market Pricing Capital controls may not only distort the behavior of multinational affiliates and locally-owned companies, but can also affect the efficiency of domestic equity markets by reducing competitive pressure, market discipline, and the information content of stock prices. More specifically, by making it more difficult for foreigners to invest in domestic stock markets, capital controls could limit this valuable source of information and liquidity. As discussed above in the context of Malaysia, capital controls can insulate markets and reduce market discipline by providing a shelter for cronyism and other non-competitive activities. Capital controls might also limit the ability of potentially successful companies to raise additional financing, thereby restraining their ability to invest and grow. Li et al. (2004) examine the extent to which individual stock prices move up and down together in specific countries-i.e., "synchronicity"-to attempt to measure some of these effects. High levels of comovement and low levels of firm-specific variation in prices suggest that stock prices are less efficient. In other words, when stock prices are driven more by aggregate, country-level news instead of by firm-specific variables and information, there is less market discipline. This paper uses several different measures to show that greater openness in capital markets (but not in goods markets) is correlated with greater firm-specific content in stock prices, and therefore with more market discipline and pricing efficiency. This relationship is magnified in countries with strong institutions and good governance. A comparison of Korea and Malaysia around the Asian crisis provides an informal assessment of the relationship between capital controls and market discipline. 8 Around the time of the Asian crisis, the firm-specific variation in stock prices increased significantly in most Asian countries and remained high for an extended period. This pattern is graphed for Korea in Figure 3, and is typical for most open economies in the region. In Malaysia, the firm-specific component of stock prices also increased significantly after the Asian crisis, but then fell sharply after its capital controls were imposed (as also shown on Figure 3). Although not a definitive test, this indicates that the Asian crisis increased market discipline and the firm-specific content in stock prices, while the Malaysian capital controls appear to have suppressed market discipline and reduced the efficiency of stock market prices. Figure 3 Firm-Specific Variation in Stock Prices, 1990-2000, Korea versus Malaysia. Note: Higher levels of firm-specific variation in stock prices indicate greater pricing efficiency. Source: Based on data from Li et al. (2004). But…Can Capital Controls Reduce Vulnerability to Crises? The four studies discussed above suggest that capital controls can create a number of microeconomic distortions and therefore reduce productivity and growth.
Despite these potentially serious costs, supporters of capital controls argue that this policy can yield benefits that outweigh these costs. The most frequently cited benefit is that capital controls reduce country vulnerability to currency and banking crises. The series of emerging markets that liberalized their capital accounts, and subsequently experienced a crisis in the 1990's, is often cited to support this argument. Capital controls, by placing "mud in the wheels of market discipline," may render countries less vulnerable to external shocks and therefore reduce their susceptibility to crises. This is not surprising, since capital controls share many similarities with standard regulations, such as labor market regulations that make it more difficult to fire workers. Regulations on both capital flows and labor markets can create safer, less volatile markets, whether in the form of more stable capital flows or workers less likely to lose their jobs. Both regulations also have a cost, however, whether in the form of lower levels of investment or lower aggregate employment, both of which reduce efficiency and economic growth. To evaluate the overall desirability of a specific regulation, it is necessary to weigh the costs against the benefits. Therefore, any accurate evaluation of capital controls needs to weigh the potential costs discussed throughout this paper against the potential benefit of reduced vulnerability to crises. A thorough evaluation of this tradeoff is beyond the scope of this paper, but Figure 4 provides some anecdotal evidence. The figure graphs an index of real income per capita (adjusted for PPP) in India, South Korea, and Thailand from 1980 through 2002. 9 Income is normalized to 100 in 1980 in order to equalize income levels at the start of the period. Figure 4 Real Income Per Capita in India, South Korea, and Thailand, 1980-2002 (PPP-adjusted index, 1980 = 100). Conclusions Although the theoretical literature suggests that there could be substantial benefits to emerging markets from capital account liberalization, the empirical macroeconomic literature has had limited success in consistently identifying these benefits. There are a series of compelling reasons why it may be difficult to measure the aggregate impact of capital controls in a range of very different countries that often undergo a variety of structural changes simultaneously with liberalization. A more useful approach may be to focus on narrower empirical analyses that can measure the specific effects of capital account liberalization at the microeconomic level. The series of papers surveyed above indicates that focusing on microeconomic data, and especially individual case studies of specific effects of capital controls, yields much stronger evidence of the resulting economic distortions and costs. The Malaysian capital controls provided a shield for crony capitalism. The Chilean capital controls increased financial constraints and limited investment in smaller, publicly-traded companies. U.S. firms tend to reduce investment and adopt a range of distortionary practices in countries with capital controls. Stock market pricing tends to be less efficient in countries with capital controls. Even though none of these papers attempts to aggregate these microeconomic effects into estimates of an economy-wide cost of capital controls, they clearly suggest that capital controls can lead to a misallocation of resources through several different channels.
The accumulation of these different costs of capital controls indicates that they may act as "mud in the wheels of market discipline" and significantly depress productivity and growth. Potentially offsetting these costs, capital controls may have the benefit of reducing country vulnerability to currency and banking crises. Although the short-term impact of crises on income levels can be severe, this effect is generally small when compared to the long-term benefits of the higher growth rates possible with liberalized capital accounts. Moreover, the benefits of capital account liberalization may be smaller, while the risk of severe crises may be greater, for countries with weak institutions and poor corporate governance. Mud in the wheels of a cart will slow down movement towards your destination. If mud in the wheels weighs down the cart, minimizing the chance of the cart being overturned, some people may choose the weighted-down, slower vehicle. Moreover, if the cart has a weak frame and the wheels are only held together by the dried mud, it may be prudent to strengthen the wheels and ensure that a minimum frame is in place before removing the mud and moving rapidly. Given a certain level of structural integrity in the cart, however, most people would probably choose to take the mud out of the wheels, even if it slightly increases the risk of a spill, in order to arrive at their destination more quickly. Similarly, capital controls act as "mud in the wheels of market discipline."
On the decomposition of the De Rham complex on formal schemes We show that, for a pseudo-proper smooth noetherian formal scheme $\mathfrak{X}$ over a positive characteristic $p$ field, its truncated De Rham complex up to the characteristic $p$ is decomposable. Moreover, if the dimension of $\mathfrak{X}$ is exactly $p$, then the full De Rham complex is decomposable. Along the way we establish the Cartier isomorphism associated to a smooth morphism of positive characteristic noetherian formal schemes. Introduction An important tool for understanding some of the fine properties of algebraic varieties is the use of formal schemes. Over the field of complex numbers, Hartshorne studied the hypercohomology of the De Rham complex of the formal completion of a singular scheme on a non-singular ambient scheme and showed that this gives back singular cohomology by purely algebraic means. In this paper we start exploring the properties of De Rham cohomology of formal schemes over a characteristic p field. A motivation is to develop tools to understand the cohomological properties of singular varieties. The main technical issue is to have at hand basic results about the geometry of formal schemes. Let X be a possibly singular variety over a field k. Suppose there is a closed embedding X ֒→ P of X into a smooth k-scheme P . Its formal completion P /X is not adic over Spec(k). This leads us to consider non-adic morphisms of formal schemes. Let f : X → Y be a morphism of formal schemes. As explained in 1.2 (ii) there is a system of morphisms of usual schemes {f ℓ : X ℓ → Y ℓ } ℓ∈N such that It is a general principle that if f is adic, its properties can be studied through the underlying maps f ℓ , after all, the squares can be taken Cartesian. This is not the case for non-adic morphisms. Thus, one needs to redevelop most of the usual tools for non adic maps of formal schemes. To give a specific example, if f is a smooth morphism of locally noetherian formal schemes the morphisms f ℓ may not be smooth (see [AJP2,Example 5.3]), therefore one cannot use a limit argument to reduce the arguments to ordinary schemes. Here, we study the De Rham complex of a non necessarily adic formal scheme of pseudo finite type over a field of positive characteristic p. We show that under the usual condition of W 2 -liftability the De Rham complex is decomposed up to p. The argument does not give the degeneration of the Hodge-De Rham spectral sequence because the finiteness of cohomology is only established under adic hypothesis. The strategy of the proof is similar to the classical method by Deligne and Illusie [DI] but all the results of smoothness, deformation and cohomology are needed in the setting of pseudo-finite maps of formal schemes. The basic theory of smoothness of formal schemes is developed in [AJP1] and some more advanced properties in [AJP2]. Both papers are used intensively along the paper. Another important ingredient is the deformation theory of smooth morphisms as exposed in [P1]. A full-fledged theory of deformation is developed in [P2], but this generality is not needed in the present situation. It is worth remarking that decomposition up to p uses essentially the results of the aforementioned papers but the extension of the result at the dimension p, requires the full machinery of Grothendieck duality for formal schemes [AJL]. Moreover, Sastry's computation of the dualizing sheaf of a pseudo-proper smooth noetherian formal scheme [S] is required to reach the general result. 
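For orientation, the two statements that organize the paper can be written in display form. The formulas below are only a schematic rendering in the notation of the abstract, with $F_{\mathfrak{X}/\mathfrak{Y}}$ the relative Frobenius and $\mathfrak{X}^{(p)}$ its Frobenius base change; the precise hypotheses and indexing are those of the theorems discussed above and proved in the body of the paper, not a verbatim quotation of them:
\[
C^{-1}\colon\ \Omega^{i}_{\mathfrak{X}^{(p)}/\mathfrak{Y}}
\ \xrightarrow{\ \sim\ }\
\mathcal{H}^{i}\bigl(F_{\mathfrak{X}/\mathfrak{Y}\,*}\,\Omega^{\bullet}_{\mathfrak{X}/\mathfrak{Y}}\bigr)
\qquad\text{(Cartier isomorphism for a smooth } \mathfrak{X}\to\mathfrak{Y}\text{ of characteristic } p\text{),}
\]
\[
\bigoplus_{i<p}\ \Omega^{i}_{\mathfrak{X}^{(p)}/\mathfrak{Y}}[-i]
\ \simeq\
\tau_{<p}\bigl(F_{\mathfrak{X}/\mathfrak{Y}\,*}\,\Omega^{\bullet}_{\mathfrak{X}/\mathfrak{Y}}\bigr)
\quad\text{in } \mathsf{D}(\mathfrak{X}^{(p)}),
\]
the latter holding once $\mathfrak{Y}$ admits a flat lifting over $\mathbf{Z}/p^{2}\mathbf{Z}$ and $\mathfrak{X}^{(p)}$ admits a smooth lifting over that base.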
Let us now describe the contents of the paper. An initial section recalls the basic definitions and notations that will be of use throughout the paper. In particular we recall the definition of the module of differentials and the associated De Rham complex. In the next section we discuss the basic properties of the Frobenius morphism both in absolute and relative version. It is noteworthy that the Frobenius morphism is an adic homeomorphism. Moreover we show that it is a finite locally free morphism. In Section 3 we develop Cartier theory for noetherian formal schemes. Specifically, in Theorem 3.4 we establish an analogous to the Cartier isomorphism in Sch [K, (7.2)] for relative differential forms associated to a smooth morphism of locally noetherian formal schemes of characteristic p. Once all this structure is up and running we prove the decomposition theorem. We fix Y a locally noetherian formal scheme of characteristic p together with Y, a flat lifting over Z/p 2 Z. Let f : X → Y be a smooth morphism of locally noetherian formal schemes, let us consider its relative Frobenius mophism denoted by F X/Y : X → X (p) . It holds that any smooth lifting X (p) of X (p) over Y yields a a decomposition of the complex τ <p (F X/Y * Ω • X/Y ) in D(X (p) ). Moreover, much as in the case of schemes, the existence of a smooth lifting is equivalent to the existence of a decomposition of τ <p (F X/Y * Ω • X/Y ). The proof relies on the theory of (non necessarily adic) smooth morphisms of formal schemes, its basic deformation theory and the lifting of Frobenius morphisms. Of course, a global lifting of Frobenius is not guaranteed to exist, but only local liftings. The corresponding local decompositions are glued by a procedure similar to the one employed in [DI]. Finally, in Section 6 we extend this result to degree p. For k a perfect field of characteristic p and X a formal scheme of topological dimension less or equal than p, we show that F X/k * Ω • X/k is decomposable. This is Theorem 6.6. Its proof requires establishing a pairing on differential forms where ω X (p) /k = Ω n X (p) /k , that is dualizing for coherent coefficients by Sastry's result [S,Theorem 5.1.2]. On formal schemes there are basically two dualities, one that refers to torsion coefficients and another one for complete coefficients -this last one including the familiar coherent complexes. There is a balance between them controlled by Greenlees-May duality. It is this balance that provides an explicit description of the trace map as a Cartier operator, thereby allowing to extend Deligne-Illusie's idea to the present context. In future work we will intend to apply the Decomposition Theorem to obtain vanishing theorems for formal schemes with an eye towards the cohomology of singular varieties. The main difficulty in this context is the lack of general finiteness properties. We expect to extend the available results in characteristic 0 to some situations in positive characteristic. With this in hand, the degeneration of the Hodge-De Rham spectral sequence would provide a path towards the desired results. Preliminaries We denote by NFS the category of locally noetherian formal schemes and by NFS af the subcategory of locally noetherian affine formal schemes. We follow the conventions and notations in [ EGA I,§10]. Except otherwise indicated, every formal scheme will be in NFS and we will assume that every ring is noetherian. We write Sch for the category of ordinary schemes. 1.1. 
Given X ∈ NFS we denote by A(X) the category of O X -Modules and D(X) its corresponding derived category. We denote by A c (X) ⊂ A(X) the full subcategory of coherent O X -Modules 1 and by D c (X) the full subcategory of D(X) of complexes whose homology sheaves lie in A c (X). Given f : X → Y a map of formal schemes, f ♯ : O Y → f * O X will denote the corresponding morphism of structure sheaves and, with a slight abuse of notation, the ring homomorphisms it induces on sections and stalks. 1.2. Let us establish the following convenient notation (cf. [EGA I, §10.6]): (i ) Given X ∈ NFS and J ⊂ O X an Ideal of definition, for each ℓ ∈ N we put X ℓ := (X, O X /J ℓ+1 ). In the category of formal schemes and all the spaces X ℓ and X have the same underlying topological space. For any such a pair of ideals setting X ℓ := (X, O X /J ℓ+1 ) and 1.3. As in [AJP2, Definitions 1.6 and 1.7], given X ∈ NFS, the topological dimension of X is dimtop(X) := dim(X 0 ) and the algebraic dimension of X is dim(X) := sup x∈X dim O X,x . Obviously, dimtop(X) ≤ dim(X). [AJL,p.7], [AJP1,§2 and §3]. A morphism f : X → Y in NFS is of pseudo finite type (pseudo finite, pseudo proper, separated) if f 0 (equivalently any f ℓ ) is of finite type (finite, proper, separated, respectively). Moreover, we say that f is of finite type (finite, proper) if f is adic and of pseudo finite type (pseudo finite, pseudo proper, respectively). The morphism f is smooth (unramified,étale) if it is of pseudo finite type and satisfies the following lifting condition: for any affine Y-scheme Z and for each closed subscheme T ֒→ Z given by a square zero Ideal I ⊂ O Z the induced map is surjective (injective, bijective, respectively). 1 We honor the capitalization conventions in EGA and write "Ideal" and "Module" for sheaves of ideals and modules respectively. If we express as in 1.2 we have the following identification From now on and whenever is clear, we will abbreviate d = d X/Y . From now on f will be a morphism of pseudo finite type. 1.7. We denote by Ω • X/Y the sheaf of graded abelian groups that to an open subset U ⊂ X associates the module The sheaf Ω • X/Y is a supercommutative O X -Algebra (i.e. graded and alternating in the terminology of [Bo2, Ch. III, §7.1, Definition 1 and §7.3, For a commutative diagram of morphisms in NFS, then Ω i X/Y is a quasi-coherent OX-module. However, in the context of of formal schemes, to have a satisfactory description of the sheaf of i-differentials we will restrict ourselves to the class of morphisms of pseudo finite type. such that f and f ′ are of pseudo finite type, there exists a morphism of graded O X ′ -Algebras for any a 1 , a 2 , . . . , for any i, j ∈ N. is a complex of coherent O X -Modules; it is called De Rham complex of X relative to Y. We abbreviate it by Ω • X/Y . Notice that the differentials are Observe that if f : X → Y is a finite type morphism of usual schemes then Ω • X/Y = Ω • X/Y . In the setting of the commutative diagram (1.7.1), the morphism of graded (1.7.2) respects the differential, i.e. it is a map of complexes. 1.9. Suppose that f : X → Y is smooth and such that, for all x ∈ X, dim x f := dim f −1 (f (x)) = n [AJP2, Definition 1.14]. Then Ω 1 X/Y is a locally free O X -Module of rank n (see [LNS,Proposition 2.6.1] and [AJP2,Corollary 5.10]) and therefore Ω i X/Y is a locally free O X -Module of constant rank n i , for all 0 ≤ i ≤ n. In particular, Ω n X/Y is an invertible O X -Module and Ω i X/Y = 0, for all i > n. 
Therefore Ω • X/Y is a bounded complex of amplitude [0, n] of locally free O X -Modules. Remark. Let f : X → Spec(C) be a smooth morphism of usual schemes, Z ⊂ X a closed subscheme and denote by X the completion of X along Z. The De Rham complex of X relative to C defined above, Ω • X/C , agrees with the one given by Hartshorne in [H, I, §7]. Frobenius morphism on formal schemes Henceforth, p will denote a prime number and F p := Z/pZ the prime field. A locally noetherian formal scheme 2.2. Let X be a locally noetherian formal scheme of characteristic p. The absolute Frobenius endomorphism of X, is the endomorphism F X : X → X that is the identity as a map of topological spaces and, given for all open set U ⊂ X by The following holds: (iii ) F X is a universal homeomorphism, that is, a homeomorphism such that for each morphism of locally noetherian formal schemes Z → X, the morphism obtained by base-change X × Z → Z is a homeomorphism. Indeed, with the previous notation, as F X ℓ is a universal homeomorphism (see [SGA 5,Exposé XV,§1]) in view of [EGA I, where the horizontal arrows are the absolute Frobenius endomorphisms of X and Y. Let us put X (p) := X × F Y Y. Notice the dependence of the formal scheme X (p) on the base Y. We omit it on the notation for clarity. There exists an unique morphism 2.4. Let ϕ : A → B be a homomorphism of noetherian adic rings of characteristic p; let X = Spf(A), Y = Spf(B) and f : X → Y such that f := Spf(ϕ) is in NFS af . The diagram (2.3.1) corresponds through the equivalence of categories to the following diagram Proposition 2.5. Given f : X → Y in NFS with Y of characteristic p and F X/Y the relative Frobenius morphism of X over Y it holds that: Proof. Let us consider the diagram (2.3.1). 2.6. Given Y = Spf(B) a noetherian affine formal scheme of characteristic p, n > 0 and π : A n Y = Spf(B{T}) → Y the canonical projection of the affine formal space, it holds that: (i ) There exists an isomorphism of Y-formal schemes defined through the equivalence of categories by the morphism applying the universal property of the complete tensor product there exists an unique morphism Ψ : B{T} ⊗ F B B → B{T} such that the following diagram commutes: If f : X → Y is anétale morphism of locally noetherian schemes of characteristic p, then the relative Frobenius morphism of X over Y is an isomorphism [SGA 5, Expose XV, §1]. Next we generalize this result to the setting of locally noetherian formal schemes. Lemma 2.7. Given a locally noetherian formal scheme Y of characteristic p, let f : X → Y be anétale morphism in NFS. Then the relative Frobenius Proof. Let us consider the commutative diagram (2.3.1). The morphism f isétale and by base-change (see [AJP1,Proposition 2.9,(ii)]) it follows that f ′ isétale. Then [AJP1, Corollary 2.14] and Proposition 2.5 imply that F X/Y isétale adic. On the other hand, by 2.2.(iii), F X is a universal homeomorphism and, therefore, radical (see [AJP2,Definition 2.5]). From the sorites of radical morphisms in Sch [EGA I, Corollaire (3.7.6)] we have that F X/Y is a radical morphism and applying [AJP2,Theorem 7.3] it follows that F X/Y is an open inmersion. Last, by Proposition 2.5 we have that F X/Y is a homeomorphism, so we conclude that it is an isomorphism. Remark. 
The last result does not follow straightforward from the analogous result in the category of schemes, since given anétale morphism of locally noetherian formal schemes it may happen that the corresponding morphisms of schemes f ℓ in the system are notétale (see [AJP2,Example 5.3]). In Proposition 2.9 we generalize 2.6.(iii) for smooth morphisms of locally noetherian formal schemes of constant relative dimension equal to n. First, we need a previous result. is an isomorphism. Proof. By base-change we have that f ′ is also a finite morphism (see [AJL,Proposition 7.1]). Then by the Finiteness Theorem for finite morphisms in NFS [EGA III 1 , (4.8.6)] it follows that f ′ * (g ′ * F) and g * (f * F) are coherent O Y ′ -Modules. Since this is a local question on the base, we may suppose that g : Proposition 2.9. Given a locally noetherian formal scheme Y of characteristic p, let f : X → Y be a smooth morphism of relative dimension n. Then the relative Frobenius endomorphism of X over Y, where g isétale, π is the canonical projection and n = rk( Ω 1 ). We may assume that U = X. Taking the diagram (2.3.1) for the morphisms g, π and f we have the following commutative diagram of locally noetherian formal schemes where: • the horizontal arrows are the absolute Frobenius endomorphisms of X, A n Y and Y; • X (p) = X × F Y Y and ♦ 3 is a cartesian square (so ♦ 2 is a cartesian square, too). Since g isétale, by Lemma 2.7 we have that 1 is a cartesian square and, since ♦ 2 is a cartesian square we deduce that ♦ 1 is a cartesian square. On the other hand, in 2.6.(iii) we have shown that F A n Y /Y is finite, flat and that Then by base-change (see [AJL,Proposition 7.1]) we have that F X/Y is finite and flat. Moreover, from Proposition 2.8 it results that: and, therefore, by 2.6(iii), Corollary 2.10. Let Y be a locally noetherian formal scheme of characteristic p and f : X → Y be a smooth morphism of relative dimension n. Proof. Let 0 ≤ i ≤ n. From 1.9 we have that Ω i X/Y is a locally free O X -Module of rank n i and therefore, the result is consequence of Proposition 2.9. Cartier isomorphism One of the technical tools more used for the differential study of schemes of characteristic p is the Cartier isomorphism [C]. Our next task will be to extend it to smooth morphisms of locally noetherian formal schemes of characteristic p following [K, (7.2)]. 3.1. Given Y a locally noetherian formal scheme of characteristic p let f : X → Y be a morphism of locally noetherian formal schemes. For all open set U ⊂ X and for all a ∈ Γ(U, O X ), it holds that d(a p ) = p · a p−1 · d(a) = 0 Therefore the absolute Frobenius morphism of X and the relative Frobenius morphism of X over Y induce zero morphisms respectively (see 2.4). After all, the differentials are null being radical morphisms. 3.2. Given a locally noetherian formal scheme Y of characteristic p and given an open set U ⊂ X, a ⊗b ∈ Γ(U, O X (p) ) and c ∈ Γ(U, F X/Y * O X ) there results that: It holds that the sheaves of abelian groups -Algebras determined by the exterior product so, the elements of degree 1 are of zero square. 3.3. Let f : X → Y be a smooth morphism of locally noetherian formal schemes of characteristic p. In this setting, there exists a unique morphism of graded O X (p) -Algebras γ : For the existence, applying [Bo2,loc. cit.] it suffices to give γ 0 and γ 1 as above. Consider the morphism D defined, for every open set U ⊂ X, by It is well defined since: . It is easily checked that D is a continuous morphism. 
First, we will prove that is a morphism of sheaves of abelian groups. We take U ⊂ X an open subset and a 1 , a 2 ∈ Γ(U, O X ). Applying formally d to the equality from which it follows that D((a 1 + a 2 ) ⊗1) = D(a 1 ⊗1) + D(a 2 ⊗1). Next, and so we conclude that D is a continuous Y-derivation. By Corollary 2.10 F X/Y * Ω • X/Y is a complex of locally free O X (p) -Modules of finite rank and, in particular, , for all i. Applying [AJP1, Theorem 3.5] it results that there exists an unique homomorphism of O X (p) -Modules Therefore, applying again [Bo2,loc. cit.] there exists an unique morphism of graded O X (p) -Algebras γ : i∈Z that in degrees 0 and 1 is defined by γ 0 and γ 1 , respectively. Theorem 3.4. With hypothesis as in 3.3, the morphism γ depicted in (3.3.1) is an isomorphism and it is called the Cartier isomorphism. Proof. We will do it in three steps: (1) If f : X → Y is a smooth morphism in Sch with Y of characteristic p, γ is the Cartier isomorphism in Sch ( [K, (7.2)]). (2) Let us prove the result for the canonical projection π Y : A n Y → Y. Considering the diagram (2.3.1) for the morphisms π Y and π : A n Fp → Spec(F p ) and, keeping in mind 2.6.(i) we have the following commutative diagram in NFS: The squares 1 , 2 and ♦ 1 are cartesian, therefore the square ♦ 2 is also cartesian. Since ♦ 3 is a cartesian square it results that ♦ 4 is also cartesian. Applying (1), we have the Cartier isomorphism asociated to the scheme morphism π: and, from the fact that g ′ is a flat morphism (by base-change) and from the isomorphism (3.4.1) we deduce the isomorphism γ n,Y : i∈Z By 2.6.(iii) we have that F is a finite morphism and then Proposition 2.8 applies. Therefore Fp /Fp and we obtain the Cartier isomorphism asociated to π Y γ n,Y : (3) In the general case, since it is a local question by [AJP2, Proposition 5.9] we may suppose that f factors in π • g : X → A n Y → Y, where g iś etale and π is the canonical projection. Considering the diagram (2.3.1) for the morphisms g, π and f we have a commutative diagram of locally noetherian formal schemes (2.9.1) where ♦ 1 , 1 , ♦ 2 and ♦ 3 are cartesian squares. Notice that we use that g isétale. By (2), associated to the morphism π Y : A n Y → Y, we have the Cartier isomorphism : Since g isétale and ♦ 3 is a cartesian square, by base-change ([AJP1, Proposition 2.9]) we have that g ′ isétale and from [AJP1,Corollary 4.10] we deduce that g ′ * Ω i Since g ′ is flat and, applying g ′ * to the isomorphism (3.4.2) we have the following isomorphism i∈Z Ω i On the other hand, g isétale and, from [AJP1, loc. cit.] we deduce that is cartesian we will say that Y, or that the morphism Y ֒→ Y, or thatg is a flat (smooth) lifting of Y over Z. Whenever W = Spec(F p ) and Z = Spec(Z/p 2 Z) we will say that Y is flat (smooth) lifting of Y over Z/p 2 Z. The following is one of the main results of this paper. It extends to formal schemes the classical Decomposition Theorem in [DI,Corollaire 3.7.(a)] (see also [I,Théorème 5.1]). Theorem 4.3 (Decomposition Theorem). Let Y be a locally noetherian formal scheme of characteristic p and Y a flat lifting of Y over Z/p 2 Z. Let f : X → Y be a smooth morphism of locally noetherian formal schemes. Any smooth lifting X (p) of X (p) over Y provides a decomposition of the complex Remark. Mimicking the proof of [DI,Théorème 3.5] we can show a converse to the theorem, specifically, a decomposition of τ <p (F X/Y * Ω • X/Y ) provides a smooth lifting X (p) of X (p) over Y. 
We leave the details to the interested reader We defer the proof of Theorem 4.3 to the next section. In the next few paragraphs we will present some consequences. We will start establishing some notations. 4.4. Let k be a perfect field of characteristic p and put Y = Spec(k). Then there exists a flat lifting of Y over Z/p 2 Z given (up to isomorphism) by Y = Spec(W 2 (k)) where W 2 (k) is the ring of Witt vectors of length 2 over k. On the other hand, the absolute Frobenius endomorphism of F k = F Y : Y → Y is an automorphism. So, given f : X → Y a smooth morphism in NFS from the corresponding diagram (2.3.1) we deduce that (F k ) X : X (p) → X is an isomorphism. Then X (p) admits a smooth lifting over Y if, and only if, X also does. Corollary 4.5. Given k a perfect field of characteristic p, let f : X → Y = Spec(k) be a smooth morphism in NFS. If there exists a smooth lifting of X over Y = Spec(W 2 (k)), then τ <p (F X/k * Ω • X/k ) is decomposable in D(X (p) ). Remark. This corollary generalizes [DI,Théorème 2.1] to the context of formal schemes. Corollary 4.6. Given a perfect field k of characteristic p, let f : Z → Y = Spec(k) be a morphism of finite type in Sch and suppose that Z is embeddable in a smooth Y -scheme X. If there exists a smooth lifting of X := X /Z over Y = Spec(W 2 (k)), then Corollary 4.7. Given a perfect field k of characteristic p, let Z be a projective k-scheme embeddable in P := P n k and let P := P /Z . Then Proof. Since P n W 2 (k) = P × k Spec(W 2 (k)), Z is also a closed subscheme of P n W 2 (k) . If κ : P n W 2 (k) → P n W 2 (k) is the completion morphism of P n W 2 (k) along Z, then by [AJP2,Proposition 3.10] it is immediate that the composition P n W 2 (k) κ −→ P n W 2 (k) −→ Spec(W 2 (k)) is a smooth lifting of P over Spec(W 2 (k)). Proof of the Decomposition Theorem The proof of Theorem 4.3 will be decomposed into several intermediate steps. We will mostly follow the strategy of the proof of the Decomposition Theorem for usual schemes in [I, §5]. that induces the identity through the functor H i for all i < p. By Theorem 3.4 it is sufficient to give a morphism in D( that coincides in homology with the Cartier isomorphism. We will associate a morphism as (5.1.1) to each smooth lifting X (p) of X (p) over Y. The proof proceeds in two stages: (i ) First we show (in Proposition 5.7) that if there exists a global lifting of Frobenius, i.e. a Y-morphism X (p) ) by constructing a lifting of the Cartier operator, see 5.5. (ii ) Liftings of Frobenius only exist locally, this is discussed in 5.9. With this, we see (in Proposition 5.11) that τ ≤1 (F X/Y * Ω • X/Y ) is decomposable in D(X (p) ) by pasting these local liftings. Finally, we extend this decomposition to the whole τ <p (F X/Y * Ω • X/Y ) using the multiplicative structure of the De Rham complex (Proposition 5.13). We start by fixing some notations and definitions. Two canonical isomorphisms. Let i : X ֒→ X be a smooth lifting over Y. From the short exact sequence of (Z/p 2 Z)-modules is exact and therefore i is a closed embedding given by the ideal p·O X ⊂ O X . The isomorphism p· : F p → p · Z/p 2 Z of (Z/p 2 Z)-modules induces the isomorphism of O X -Modules and the isomorphism of O X -Modules Observe that the isomorphism p 1 is locally defined by 1 ⊗ d(s) p · d(s). Liftings of Frobenius . From now on we will assume the set-up and hypotheses of Theorem 4.3. 
Given F X/Y : X → X (p) the relative Frobenius morphism of X over Y let us suppose that there exist i : X ֒→ X and i ′ : X (p) ֒→ X (p) smooth liftings over Y. We say that a Y-morphism Observe that, since X ∼ = X × Y Y and X (p) ∼ = X (p) × Y Y we have that the square (5.3.1) is cartesian. Lemma 5.4. The image of the canonical morphism corresponds through the projection formula [EGA I, 0, (5.4.8)] to and this map is zero by 3.1. We conclude since i ′ A Cartier operator. Under the hypotheses and notations of 5.3, applying i ′ * to the canonical morphism Ω 1 3 According to the terminology established in [P1, §2] we would say that F is a lifting such that the following diagram is commutative where the left vertical isomorphism is given by base-change (see [AJP1,Proposition 3.7]), and the right vertical morphism corresponds to the isomorphism . Now, given a = a 1 + p · A with a 1 ∈ A and a 2 ∈ A (p) such that a⊗1 = a 2 +p· A (p) , since F (a⊗1) = a p (see 2.4) from the conmutativity of diagram (5.3.1) we deduce that F (a 2 ) = a p 1 + p · c 1 with c 1 ∈ A. From this we deduce that ϕ 1 where c = c 1 + p · A. Lemma 5.6. In the setting of 5.3, the Cartier operator ϕ 1 F defined in (5.5.1) induces in homology the Cartier isomorphism in degree 1. Proof. From the local description of ϕ 1 F just given, we deduce that Im(ϕ 1 X/Y and that the composition of morphisms is γ 1 , the Cartier isomorphism (3.3.1) in degree 1. Proposition 5.7. Suppose that there exists a Y-morphism F that lifts F X/Y . Then there exist a morphism in the category of complexes of objects in A(X (p) ) that induces the Cartier isomorphism (3.3.1) in H i , for all i < p, such that ϕ 0 F = F ♯ X/Y and the morphism ϕ 1 F is the one defined in 5.5. Proof. By Lema 5.6 and Theorem 3.4, it suffices to take ϕ i F the composition of the morphisms Corollary 5.8. If there exists a Y-morphism F that lifts F X/Y then there is a decomposition of τ <p (F X/Y * Ω • X/Y ) in D(X (p) ). Proof. By 5.1 it is an immediate consequence of the proposition. 5.9. Having dealt with the case in which there is a global lifting of Frobenius, we treat now the general case of Theorem 4.3. We start by showing that the complex X (p) ). For that, given an arbitrary affine open covering {U α } of X, by [P1,Corollary 4.3] for each α there exists a smooth lifting U α of U α over Y. Furthermore, [AJP1,Corollary 2.5] implies that there exists a lifting F α : We are going to "glue" in D(X (p) ) the morphisms ϕ Fα asociated to each lifting F α (cf. Proposition 5.7) and we will check that does not depend of the chosen covering of X. This construction is not trivial due to the lack of the local nature of the derived category. We need the following lemma in which we compare the morphisms ϕ F asociated to different liftings F of F X/Y . Lemma 5.10. Suppose given F 1 : X 1 → X (p) and F 2 : X 2 → X (p) a pair of Y-morphisms that lift F X/Y , then there exists an homomorphism of O X (p) - Moreover given F 3 : X 3 → X (p) another Y-morphism that lifts F , the cocycle condition holds, namely Proof. First, we are going to define φ( F 1 , F 2 ) whenever there is a Y-isomorphismũ : X 1 → X 2 that induces the identity on X (cf. [P1, 3.4]). The morphisms F 1 and F 2 •ũ are two liftings over Y of the composed map and by [P1, 2.2.(1)] there exists an unique homomorphism of O X (p) -Modules Ψ : Ω 1 is commutative. 
Applying i ′ * to the above diagram we have that there exists a homomorphism of O X (p) -Modules φ(ũ, F 1 , F 2 ) : Ω 1 X (p) /Y → F X/Y * O X such that the following diagram commutes: . Let us show that φ(ũ, F 1 , F 2 ) does not depend onũ. Indeed, givenṽ : X 1 → X 2 another Yisomorphism that induces the identity on X, [P1, 2.2.(1)] implies that there exists an unique homomorphism of O X 2 -Modules Equivalently by adjunction and, with an abuse of notation, there exists an unique homomorphism ψ : On the other hand, since F 2 •ũ and F 2 •ṽ are two liftings of i ′ • F X/Y over Y by [P1, 2.2.(1)] there exists an unique morphism η : By 3.1 the canonical morphism F * X/Y Ω 1 X (p) /Y → Ω 1 X/Y is zero and we conclude that η = 0 and, therefore, F 2 •ũ = F 2 •ṽ. In general, given an affine open covering {U α } of X, for all α, [P1, 3.3] implies that there exists a Y-isomorphismũ α : X 1 | Uα → X 2 | Uα that induces the identity on U α . Then it suffices to define for each α To check the equalities (5.10.1) and (5.10.2) we may restrict to the affine case. In this case X 1 and X 2 are isomorphic (see [P1, 3.3]) and to simplify we set X := X 1 = X 2 . With notations as in 5.5, we have that F i ( a (p) ) =ã p +p·c i with c i ∈ A for i = 1, 2, from where we deduce that Last, if we suppose there exists yet another Y-morphism F 3 : X 3 → X (p) that lifts F and thatṽ : X 2 → X 3 is a Y-isomorphism that induces the identity in X, the equality (5.10.2) holds by adding the relations corresponding to the couples ( F 1 , F 2 ) and ( F 2 , F 3 ). Proposition 5.11. There exists a morphism in D(X (p) ) Proof. Let us fix an affine open covering {U α } of X. By 5.9 there exists a smooth lifting U α of U α over Y and a lifting F α : that is, such that the following diagram is commutative: By Lemma 5.6 for each α there exists a homomorphism of complexes of O X (p) | Uα -Modules 5.11.1) and such that, for all α, β, δ with U αβδ := U α ∩ U β ∩ U δ = ∅: (5.11.2) Data (5.11.1) and (5.11.2) allow to define a morphism of complexes is locally given by: Fα (w| Uα ) We define ϕ 1 as the composition of the morphisms in D(X (p) ) . Then if {V β } is another covering of X and for all β, G β is a lifting of F X/Y | V β , is a simple exercise to check that ϕ 1 . Last, let us see that ϕ 1 induces the Cartier isomorphism in H 1 . Since it is a local question, we may suppose that there exists a Y-morphism F : X → X (p) that lifts to F X/Y . Then ϕ 1 is defined by the morphism ϕ 1 F given in 5.5. Proposition 5.13. There is a decomposition of τ <p (F X/Y * Ω • X/Y ) in D(X (p) ) extending the previous one. Proof. For all 1 ≤ i < p we're going to find a morphism in D(X (p) ) that induces the Cartier isomorphism through the functor H i . For that, given ϕ 1 the morphism defined in Proposition 5.11, for all i ≥ 1 we consider the morphism in D(X (p) ), ). On the other hand, Corollary 2.10 implies that F X/Y * Ω • X/Y is a complex of locally free O X (p) -Modules of finite rank, from which it follows that, in and, then we define ϕ i as the composition of morphisms in D(X (p) ): Rf * : D + c (X) → D + c (Y). We denote the counit of the adjunction as This map is usually referred to as the trace map. If we need to specify the map f we will denote it by τ # f = τ # . 6.3. Frobenius and a perfect pairing of differential Modules. Let X denote a smooth pseudo proper formal scheme over a characteristic p perfect field k. Let dim(X) = n. As before, put X (p) = X × F k Spec(k). Recall from 1.7 the graded complex of coherent O X -Modules Ω • X/k . 
As we have already recalled (1.9), the sheaves Ω i X/k are locally free for all i and thus we have perfect pairings where 0 ≤ i ≤ n. This pairing induces the isomorphism in D + c (X): Ω i X/k ∼ = RHom X ( Ω n−i X/k , Ω n X/k ) (6.3.1) Let us denote f : X → Spec(k) and f (p) : X (p) → Spec(k) the structural morphisms, and F X/k : X → X (p) the relative Frobenius. Notice that F X/k is a finite map. Recall that f (p) • F X/k = f . We have the following string of isomorphisms in D + c (X (p) ): F X/k * Ω i X/k ∼ = F X/k * RHom X ( Ω n−i X/k , Ω n X/k ) = F X/k * RHom X ( Ω n−i X/k , ω X/k ) ∼ = F X/k * RHom X ( Ω n−i X/k , F # X/k ω X (p) /k ) ∼ = RHom X (p) (F X/k * Ω n−i X/k , ω X (p) /k ) Where the first isomorphism comes from applying the functor F X/k * to (6.3.1). The equality corresponds to the notation ω X/k := Ω n X/k ; also, we set ω X (p) /k := Ω n X (p) /k . By [S,Theorem 5.1.2] these sheaves are dualizing in D(X) and D(X (p) ), in other words, they are identified with f # ( k) and f (p)# ( k), respectively. The second isomorphism is induced by the map F # X/k ω X (p) /k− → ω X/k ([AJL, Corollary 6.1.4.(b)]). The third isomorphism is [AJL,Theorem 8.4] applied to F X/k which is finite (Proposition 2.9), therefore proper. Taking homology, we obtain the perfect pairing in A(X (p) ) F X/k * Ω i X/k ⊗ O X (p) F X/k * Ω n−i X/k −→ ω X (p) /k (6.3.2) Notice that the pairing is induced by the trace map τ # (ω X (p) /k ). 6.4. The graded piece of the Cartier isomorphism is an isomorphism of locally free sheaves γ n : Ω n X (p) /k −→ H n (F X/k * Ω • X/k ) There is a natural map ν : F X/k * Ω n X/k → H n (F X/k * Ω • X/k ) that composed with the inverse of γ n yields a canonical morphism C : F X/k * ω X/k −→ ω X (p) /k (6.4.1) In other words, C = (γ n ) −1 • ν. Proposition 6.5. The map C in (6.4.1) agrees with τ # (ω X (p) /k ) for the Frobenius map F X/k . Proof. This comes down to a local computation. Let x ∈ X andx ∈ X (p) the corresponding point by the bijection of underlying spaces. Denote by τ # F the map τ # F X/k (ω X (p) /k ). We have the following commutative diagram resx can with H n x denoting local cohomology at x and similarly H n x . The square commutes by functoriality and the triangle defines the map res x . By pseudofunctoriality the horizontal composition is τ # f (p) ( k) . As a consequence, the lower composition is resx. Using the computation in [L, (7.3.6)] it follows that H n x ( τ # F ) = H n x (γ n ). It holds also in our setting because local cohomology only depends on the completion of the corresponding stalks of the structure sheaves. Notice that in loc. cit. H n x (γ n ) is denoted C −1 x . The claim follows now by the local description of γ n . Remark. For another take on the relationship between the duality trace and the Cartier map C, see [M, §1]. For an explicit computation of the trace in the case of usual schemes and the absolute Frobenius, see [BlS,Theorem 3.2.1]. Theorem 6.6 (Decomposition at p). Let X be a smooth pseudo proper locally noetherian formal scheme over a perfect field k of characteristic p such that dim(X) ≤ p and that admits a smooth lifting over W 2 (k). Then, the complex F X/k * Ω • X/k is decomposable in D(X (p) ). Proof. We have to show that there is an isomorphism in D(X (p) ) i∈Z We may assume X connected. If dim(X) < p then the statement follows from Corollary 4.5. Let us assume from now on that dim(X) = p, in other words, n = p. By Corollary 4.5, the complex τ <p (F X/k * Ω • X/k ) is decomposed in D(X (p) ). 
We have a distinguished triangle τ <p (F X/k * Ω • X/k ) −→ F X/k * Ω • X/k −→ H p (F X/k * Ω • X/k )[−p] + −→ (6.6.1) As τ <p (F X/k * Ω • X/Y ) is decomposed, we only need to check that the morphism e : H p (F X/k * Ω • X/k )[−p] −→ (⊕ i<p H i (F X/k * Ω • X/k )[−i]) [1]
2019-04-05T16:51:40.000Z
2019-04-05T00:00:00.000
{ "year": 2019, "sha1": "5cfb2647e7e138c317678558b1c6a0f49350299e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1904.03156", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "68423090bb16be3d7b3a78715ce2f5db3190c253", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
203454907
pes2o/s2orc
v3-fos-license
Knowledge, Attitudes, and Barriers Toward Research Among Medical Students of Karachi Introduction Our study was meant to assess the knowledge, attitude, and barriers towards research in medical students of Pakistan. By assessing the factors, we aim to increase the role of medical students in research, which will eventually help developing countries like Pakistan to achieve self-reliance in health care. Methods Undergraduate and postgraduate students of medicine, dentistry, and pharmacy schools of Dow University of Health Sciences, Karachi, were enrolled from February-March 2018 in a cross-sectional, descriptive study using questionnaires to provide details of the parameters of attitude to the knowledge of and barriers towards research for each individual. All data were coded for each of the parameters. Data analyses were performed by one-way analysis of variance (ANOVA)/Tukey and Student’s t-test, Pearson’s correlation, and Chi-squared tests. Results A total of 850 questionnaires were received. The overall mean scores of students on attitude, knowledge, and barriers were 69.27 ± 13.44, 70.39 ± 15.67, and 72.46 ± 13.46, respectively; 81.8% of students’ scores fell above the middle of the maximum score for knowledge, but 84.5% of attitude scores came in at below the middle of the maximum score. Undergraduate students had a more positive attitude to research than postgraduate students (69.20 ± 11.10 vs 64.23 ± 10.98; p = 0.002). Male students had a better attitude than females (72.97 ± 20.54 vs 67.09 ± 21.56; p = 0.010). Barriers highlighted by students most significantly included a lack of funding support and preference for instruction over research. Conclusion Students showed good knowledge of research, but their attitude was not up to the mark. The barriers highlighted suggest a need for a change in the strategies for research. Attention should be paid to inculcate research as part of the student curriculum and to make available incentives, information, and mentors to solve the problems most students face in the field of research. Results A total of 850 questionnaires were received. The overall mean scores of students on attitude, knowledge, and barriers were 69.27 ± 13.44, 70.39 ± 15.67, and 72.46 ± 13.46, respectively; 81.8% of students' scores fell above the middle of the maximum score for knowledge, but 84.5% of attitude scores came in at below the middle of the maximum score. Undergraduate students had a more positive attitude to research than postgraduate students (69.20 ± 11.10 vs 64.23 ± 10.98; p = 0.002). Male students had a better attitude than females (72.97 ± 20.54 vs 67.09 ± 21.56; p = 0.010). Barriers highlighted by students most significantly included a lack of funding support and preference for instruction over research. Conclusion Students showed good knowledge of research, but their attitude was not up to the mark. The barriers highlighted suggest a need for a change in the strategies for research. Attention should be paid to inculcate research as part of the student curriculum and to make available incentives, information, and mentors to solve the problems most students face in the field of research. Introduction In contemporary times, where advancements in the field of medicine are taking place at a never-before pace, it has become an essential requirement to stay updated with progressing medical techniques. Therefore, health research has become an important component of medical education. 
Medical research not only ingrains the art of critical thinking and reasoning skills in a health professional but also provides new findings having the potential to influence health care. Thus, motivating medical students towards research in the early years of their medical career can help developing countries like Pakistan to achieve self-reliance in health care and aid in the process of development [1]. Research is also considered to be one of the best indicators of the scientific progress of a country [2]. It was found that the increased participation of undergraduate medical students has a positive impact on their career choice, oral/written communication skills, and analytical thinking [3]. Further, these projects help in producing physicians better accustomed to applying new knowledge to their profession [4]. Insufficient attention to research by the government and well-educated population of a community may contribute to the scientific and knowledge lag within the national community and world as a whole and, therefore, concern over conducting scientific and accurate research has grown over the past years in most countries [5][6]. In Pakistan, the number of physicianscientists has declined over the past two decades and there is a dire need for more clinical as well as basic health science investigators given the ever-rising importance of medical research. One of the most important predictors of research attitude is the researcher's beliefs about the particular subject of research, as they influence his motivation level to conduct research in the first place. The majority of researcher's beliefs are found to be positive in countries like Ireland, Croatia, New Zealand, and Pakistan [7][8][9]. Barriers to research among medical students were found to include: inadequate knowledge of study design, time limitations, restricted funding support, lack of research training, lack of mentors, lack of research self-efficacy, lack of interest, and limited access to data sources [6]. In Pakistan, knowledge of research is lower during the initial years of medical school and students undergoing lecture-based learning generally show less interest in health research than those undergoing problem-based learning [9]. Furthermore, factors like curriculum overload, Internet inexperience, an uncooperative community, difficulty in finding a mentor, difficulty in selecting a topic, and lack of previous exposure lead to a lack of interest in research among medical students of Pakistan [10]. Given the emerging role of research in health care, it is imperative to conduct studies that signify the current status toward conducting research. Therefore, our study is meant to assess the knowledge, attitude, and barrier towards research in medical students of Pakistan. By assessing these factors, we can help make research more appealing to medical students to increase the number of skilled researchers in the future. Materials And Methods In this study, we surveyed the opinions of fourth and final year medical students towards research to find out the predictors of attitude and barriers to research among medical students. Our study sample consisted of fourth and final year medical students of the academic year 2018 of Dow Medical College. The duration of the study was two months from February-March 2018. The total sample size was 850. We distributed a pre-validated, three-page, self-reporting questionnaire to medical, dental, and pharmacy students [6]. The questionnaire consisted of three main sections. 
The first section included demographics, i.e., gender, marital status, year of birth, year of entry, level of study (undergraduate or postgraduate), and field of education (medical, dental, or pharmacy). The second section included questions used to assess attitude, scored on a 5-point Likert rating scale. These questions addressed the perceived importance of scientific research to the students' education, life, and medical career, their reasons for being interested in research, the kind of research they would prefer, and, finally, their future plans to take part in research. The third section assessed knowledge of research through eight questions ranging from basic knowledge of types of scales, referencing styles, and scientific writing to database resources. The last section focused on the barriers toward research, assessed through 32 questions. These questions focused on the factors limiting the student's role in research, such as a lack of research ideas, problems in performing research (i.e., lack of access to equipment and research materials or lack of suitable research space), lack of time, etc. The content of the questionnaire was adapted from a previous similar study, with efforts made to make it appropriate to our local university [6]. The objectives of the study were explained to the students, and confidentiality was assured. All data collected from the medical students were anonymous except for their year of study. Informed written consent was obtained. The questionnaire, along with the study protocol, was approved by the Institutional Ethical Review Committee of Dow University of Health Sciences, Karachi, Pakistan. No conflict of interest was encountered during the entire study period. A pilot study was conducted with 30 medical students to rule out any ambiguity in the questionnaire; the pilot study data were not included in the final analysis, and minor changes were made to the questionnaire after the pilot study had been conducted. All responses were coded for each of the parameters. The students' answers were compared to find any impact of age, sex, marital status, or level of education on their responses. Data analyses were carried out using SPSS (IBM Corp., Armonk, NY, US). One-way analysis of variance (ANOVA) with Tukey's test was used to compare mean scores, and Student's t-tests were used for ages in different levels and fields of education. Pearson's correlation coefficient was used to assess the relationship between age and scores. The chi-square test was used to make comparisons between groups of different sex and marital status. In this study, p < 0.05 was considered statistically significant. Results Completed forms from 850 undergraduate (mean age 22.55 ± 3.50) and 250 postgraduate students (mean age 30.15 ± 4.56) were taken into account. In total, 67% were women and 33% were men. Table 2 shows comparative means between the sex and marital status groups as well as the field and level of education groups. Male students had a better attitude than females (73.10 ± 11.1 vs 69.65 ± 13.06; p = 0.015), and single students had a better attitude than their married peers (70.10 ± 11.31 vs 64.22 ± 12.21; p = 0.03). The mean attitude score of undergraduate students was significantly greater than that of postgraduate students (69.20 ± 11.10 vs 64.23 ± 10.89; p = 0.002).
However, there was no significant difference between the education levels of students in terms of their knowledge and barrier scores (p = 0.434; p = 0.546). The mean knowledge score of dental students was significantly lower than that of pharmacy and medicine students (p = 0.002). There was no significant difference between the educational field of students and their attitude and barrier scores (p = 0.206, p = 0.345). TABLE 2: Comparison of means (± SD) of research subjects for demographic characteristics. In the field of education, different letters in each column show significant differences between the fields (Tukey HSD (honestly significant difference) test); p < 0.05 is significant*. Table 3 shows the comparative means and standard deviations on the research subjects between the three different fields (medical, dental, and pharmacy) and the two levels of education (undergraduate and postgraduate). The levels of attitude, knowledge, and barriers to health research were assessed by the percentage of students falling in the quartiles of the possible score for each field, as indicated in Table 4. The data showed that most of the students (84.5%) had an attitude score that was lower than half of the maximum score, while 81.8% fell above half of the maximum score on the knowledge parameter and thus showed ample knowledge.
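As a side note, the group comparisons reported above can in principle be reproduced with open-source tools rather than SPSS. The sketch below uses SciPy on made-up score arrays; the group names, sample sizes, and numbers are hypothetical placeholders, not the study data, and serve only to illustrate the tests named in the Methods.

```python
# Minimal sketch of the statistical comparisons described in the Methods,
# run on hypothetical (made-up) score arrays rather than the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical attitude scores for two groups (e.g., undergraduate vs postgraduate)
undergrad = rng.normal(69.2, 11.1, size=850)
postgrad = rng.normal(64.2, 10.9, size=250)

# Student's t-test for a two-group mean comparison
t_stat, p_val = stats.ttest_ind(undergrad, postgrad)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# One-way ANOVA across three hypothetical fields of education
medicine = rng.normal(71, 15, size=500)
dentistry = rng.normal(66, 15, size=200)
pharmacy = rng.normal(72, 15, size=150)
f_stat, p_anova = stats.f_oneway(medicine, dentistry, pharmacy)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")

# Chi-squared test for a categorical comparison (hypothetical counts)
contingency = np.array([[120, 160],
                        [310, 260]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi2:.4f}")

# Pearson correlation between age and attitude score (hypothetical)
age = rng.normal(24, 4, size=1100)
attitude = 75 - 0.3 * age + rng.normal(0, 10, size=1100)
r, p_r = stats.pearsonr(age, attitude)
print(f"r = {r:.2f}, p = {p_r:.4f}")
```

The Tukey HSD post-hoc comparison mentioned in the Methods is not part of SciPy but is available, for example, in the statsmodels package, so the same workflow extends to the pairwise field comparisons.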
According to a study in the UK, uptake of research opportunities by undergraduates is disappointing [14], however, the results of our study showed the mean attitude score of undergraduate students was significantly greater (compared to postgraduate students) and increasing age, level of education, and marriage were clearly seen to have an adverse effect on attitude to and knowledge of research. The findings of Askew et al. [15], Khan et al. [1], and Memarpour et al. [6] were similar to ours, reporting that physicians of a younger age were more keen about research. This may be because the schedule and pressure involved in further studies drastically decrease the motivation and time available to postgraduate students to avail research, along with marital responsibilities [12] and the perception that research has no more than a minimal role in their profession [9]. However, Vukaklija et al. noted the opposite, as they found the attitude toward research enhanced among undergraduate students with each year of education [2]. This difference may be because the level of education of students evaluated was different in their study as compared to ours or because of variations according to the country of study. Our study results found the mean for knowledge in dental students to be significantly lower than that of pharmacy and medicine students. This differs from the findings of Memarpour et al. [6], who found medical students to have less knowledge than dental or pharmacy students. This particular difference could be due to a difference between the two studies in the ratio of students sampled from each school. Previous studies have documented a decline in the number of physicians-scientists in medical practice [16]. Quite a few studies have already proven a lack of interest as a general trend among medical students, leading to a lack of commitment to publishing an article [1,17]. Even among postgraduates, factors associated with an observed decline in the physician-scientists range from a lack of incentives to inadequate prior exposure to medical research during undergraduate years [18][19]. As such, this decline in interest can be attributable to a number of factors, including a paucity of institutional incentive, time restraints, a lack of practical application of research methodology, failure to understand the important role that research plays in our community, and the availability of mentors to overlook such projects, difficulty in finding a mentor [20], however, our study results showed that only 62% of students regard lack of mentorship as a barrier to conducting research and almost 88% regarded inadequate financial support as the most significant barrier to their participation in research activities, followed by a preference for academic instruction over research, lack of skill, and lack of incentive. Similar to our study, a lack of research funding support was also cited as the most important barrier for students in other studies [13,21]. This may be because the limits on funding for research allowed a low incentive for research [9]. An interesting finding of our study is that even though only 18% of students had knowledge below half of the possible attainable score, almost 78% perceived a lack of knowledge and skill as a barrier towards research. This means that even when students have adequate knowledge of research, they perceive it to be little or less than that required to practically take part in or carry out their own research. 
This barrier can be addressed by creating positive awareness of research among students and empowering them to carry out their research by mentoring them through all the steps of research. Limited time due to workload can be addressed by making a slot specifically allocated for research activity in the student curriculum. For research funds, institutes should try to avail funding support not only from the government but also other resources and researches should be told to look out for grants and awards. The study was conducted at one institution to serve as a pilot for other studies that should be conducted on a larger scale. Thus, the findings cannot be generalized for the whole population of Pakistani medical students. Despite these limitations, the use of a validated questionnaire allows for a comparison of our findings to other studies done under similar settings and using the same evaluative tool. We encourage further detailed studies to be carried out across health institutes all over the country to address this crucial issue of research. Further, other factors influencing research activity and barriers towards conducting research, such as funding, research infrastructure, increasing the cost of medical education, and adequate research opportunities, need to be evaluated.
2019-09-17T03:09:24.741Z
2019-09-01T00:00:00.000
{ "year": 2019, "sha1": "0348cbaf1053d550fff8caa79800b1cb156997af", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/original_article/pdf/17602/1572543418-20191031-30931-4j9qwb.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e16599a08c0b631dd8fb003495c46b60f9ea7f72", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
214119142
pes2o/s2orc
v3-fos-license
Ultra-low power 0.45 mW 2.4 GHz CMOS low noise amplifier for wireless sensor networks using 0.13-µm technology ABSTRACT INTRODUCTION A wireless sensor network (WSN) is developing very quickly in the market because it offers a wide range of functions. A WSN is characterized by basic specifications such as accuracy, flexibility, reliability, cost, power consumption, and design complexity [1]. Among these, power consumption is the most important specification because the nodes are battery powered [2]. Consequently, this requirement has driven extensive research into CMOS RF circuits, which offer low cost and small size, can be fabricated on a single chip, and can be integrated into many applications [3]. In the front-end of the RF receiver chain, the LNA is an indispensable block [4][5][6]. The functions of the LNA are to boost the signal received from the antenna and to transmit it with high gain so as to minimize the noise contribution of the next stages [7,8]. There are many publications on CMOS LNAs focusing on gain [9][10][11]; however, only a few recently published works address very low power LNAs. In [12], a two-stage cross-coupled cascaded common-gate (CG) topology is adopted in a 0.18 µm TSMC process. It achieves a gain of 16.8 dB with a power consumption of 2.16 mW, but the inductor is fabricated separately with a large value of 32 nH. A current-reused topology can also achieve low power consumption [13], with a gain of 14 dB at 2.45 mW; however, that design uses several inductors, which increases the chip size. Hence, the aim of this work is to design an LNA with low power consumption while keeping the other specifications within the expected range. The power dissipation of the LNA must be minimized in order to extend the battery lifetime of the wireless sensor nodes [14]. Therefore, a two-stage design based on forward body bias is presented [15]. Forward body bias is an effective technique for reducing power consumption because it allows a lower supply voltage by decreasing the threshold voltage, and thus the power consumption can be reduced [15][16][17]. The DC bias at the bulk terminal of a transistor can be varied to control the threshold voltage [18]. Similar work has been presented in [7]; however, the second stage there is not connected correctly as a cascade topology, so the simulation results are questionable, and that work reported only pre-layout simulation results without a layout. This paper is organized as follows: in Section 2, the forward body bias technique is explained; the proposed LNA circuit design is analyzed in Section 3; Section 4 presents the simulation results obtained from the proposed circuit design; and finally, Section 5 concludes the overall achievement. FORWARD BODY BIAS TECHNIQUE Owing to its simplicity, the forward body bias technique is used in this topology to lower the power consumption of the LNA [15], as discussed in detail below. This LNA topology was implemented and designed in [15] using a 0.18-µm CMOS technology; that LNA operates at 2.4 GHz with only a 0.6 V supply voltage and achieves an NF of 2.88 dB, a gain of 10.1 dB, and a power consumption of 0.84 mW. Figure 1 shows the schematic of the forward body bias topology. Typically, the relation between the threshold voltage and the body-source voltage is given as [16]
Vth = Vth0 + γ(√(2φF − VBS) − √(2φF)) (1), where Vth0 is the threshold voltage for VBS = 0, γ is a process-dependent parameter, φF is a semiconductor (Fermi) potential with a typical value in the range of 0.3-0.4 V, and VBS is the body-to-source voltage. From Figure 2, it can be seen that Vth decreases when VBS is increased. Therefore, a low-voltage and low-power LNA can be achieved in the design without affecting the other device characteristics of gain, linearity, and noise figure [15]. The forward body bias technique has inherently low linearity, but it can be implemented simply, which results in a smaller LNA size. LNA CIRCUIT DESIGN The design approach is based on previously published techniques, namely forward body bias [14] and the single forward body bias technique [19][20]. The proposed LNA is shown in Figure 3. The forward body bias technique is employed in both stages. The LNA operates at a low supply voltage of 0.55 V. A cascode structure is employed in the first and second stages, consisting of transistors M1, M2 and M3, M4, respectively. Suitable transistor sizes are chosen to ensure an acceptable gain at a low bias voltage. The size of M1 and M2 is set to 130 µm/0.13 µm with 13 fingers of 10 µm width, while the size of M3 and M4 is set to 120 µm/0.13 µm with 12 fingers of 10 µm width. The bulk voltage (VB) of M1, M2, M3, and M4 is set to 0.3 V through R4 in order to ensure that the transistors operate in the saturation region. The inductor L1 affects the input matching and also the input-stage gain. Moreover, the inductor L1 and capacitor C1 provide a DC path for transistor M1. Capacitor C2 resonates with L2 to improve the gain and the input matching and to guarantee that the design operates at 2.4 GHz. Capacitor C5 provides a DC path to the next stage of the LNA design. SIMULATION RESULTS In order to obtain the S-parameters, noise figure, stability, and linearity, simulations are carried out using the Cadence Spectre analog design environment (ADE). The simulated results obtained from the two-stage forward body bias design at 2.4 GHz are presented in this section. The overall circuit operates at a 0.55 V supply voltage and draws a total current of 820 µA. The pre-layout and post-layout results of S11 are illustrated in Figure 4. The input return loss, S11, is -23.2 dB for the pre-layout and -17.6 dB for the post-layout simulation at 2.4 GHz. From Figure 4, it can be seen that the pre-layout and post-layout results agree with each other. The difference is due to the extracted parasitics that affect the input matching of the LNA [14], but the value is still within the acceptable range. Figure 5 depicts the output return loss (S22) of the proposed two-stage LNA: the pre-layout and post-layout values are -16.6 dB and -12.29 dB, respectively. As can be seen in Figure 5, the post-layout curve shifts to the right of the operating frequency, 2.4 GHz. This happens because of the parasitics of the routing, capacitance, inductance, and output pad. The gain observed for the overall circuit degrades to 15.05 dB in the post-layout simulation, owing to the extracted parasitics in the load inductors. Figure 7 illustrates the pre-layout and post-layout simulation results for the noise figure (NF). An NF of 4.9 dB and 5.9 dB is achieved for the pre-layout and post-layout simulations at 2.4 GHz, respectively. The increase in NF is due to parasitic effects.
Moreover, the transmission lines from the input pad to the inductor and the gate of the transistor can be modelled by a resistor, which also contributes to the noise. In addition, the proposed two-stage LNA contains more components, which can also contribute to the overall noise. However, the NF achieved by this LNA circuit is within the performance specifications. Linearity is evaluated because it helps determine the performance of the LNA; nonlinearity arises when an unwanted signal is present near the operating frequency, so higher linearity is more desirable. First, the linearity of the proposed LNA is simulated through the input 1 dB compression point (IP1dB). As illustrated in Figure 8, a simulated IP1dB of -15 dBm is obtained at 2.4 GHz. Meanwhile, the third-order intercept point (IIP3), the interception of the first-order output curve and the third-order intermodulation output curve at 2.4 GHz, is shown in Figure 9. As can be seen in Figure 9, an IIP3 of -2 dBm is achieved for the proposed two-stage LNA. The stability factor of the proposed LNA is greater than one, as shown in Figure 10; therefore, the proposed LNA is unconditionally stable. The layout of the proposed two-stage LNA is illustrated in Figure 11. All components are designed on-chip. In the layout, spiral inductors and metal-insulator-metal (MIM) capacitors are used because of their high quality factor and low losses, respectively. The total layout area is 0.99 mm × 0.98 mm and consists of two GSG pads, three DC pads, four inductors, a DC block, five capacitors, four RF resistors, and four transistors. The layout will be taped out in the future. The overall post-layout performance comparison between the proposed LNA and recently published works is summarized in Table 1. The proposed LNA consumes only 0.45 mW for the overall circuit with a 0.55 V supply voltage. The LNA cannot be replaced or removed from the RF receiver front-end [21]. Therefore, it should deliver a significant gain to minimize the noise contribution of the subsequent stages and to amplify the attenuated signal received by the antenna so that it can be efficiently handled by the next stages, such as the mixer and VGA [4]. The performance of a low-voltage, low-power LNA is evaluated based on figures of merit (FOMs). Different FOMs are commonly used in previous works [22]. One FOM of the LNA can be calculated from the ratio of the power gain to the power dissipation and the NF performance, as represented in (2): FOM = Gain(abs) / [PDC(mW) × (NF(abs) − 1)] (2), where abs denotes the absolute value of the gain and NF [23]. Compared with the other works, the proposed LNA obtains the lowest power consumption. Besides, the linearity performance of this work is better than the others, while the gain is comparable considering the low supply voltage. Referring to [21], a high gain and a good FOM are achieved, but without linearity (IIP3) performance. Reference [24] also obtained a high gain, but with a high power consumption. Based on the calculated FOMs in Table 1, the proposed LNA shows the best performance among the compared LNAs. CONCLUSION A very low power 2.4 GHz two-stage CMOS LNA using the forward body bias technique has been proposed in this work. The forward body bias technique allows a reduction of the threshold voltage and thus of the power consumption, and therefore it was implemented in this work. A two-stage LNA is proposed to enhance the overall gain. In order to get low power consumption, this circuit
2020-03-19T19:46:19.407Z
2020-02-01T00:00:00.000
{ "year": 2020, "sha1": "9d928ce16f235d151bf2e16d4f4a1c0ac6c5b0d6", "oa_license": null, "oa_url": "https://beei.org/index.php/EEI/article/download/1852/1295", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "aee55d26858bf6898b691c0c2f794b41ab3099dc", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
258256634
pes2o/s2orc
v3-fos-license
Copious vaginal discharge finally diagnosed as cervical adenocarcinoma: A case report Rationale: Copious vaginal discharge is a frequent manifestation of reproductive tract infections. However, when treatment for vaginitis has little effect, cervical disease should be strongly suspected. Patient concerns: A 41-year-old woman had been suffering from abnormally increased vaginal discharge, without any other signs of discomfort, for the past 4 years. Numerous medical examinations and treatments for vaginosis were administered, with no clear diagnosis and little effect. Diagnoses: Cervical adenocarcinoma. Interventions: Gynecological examination, vaginal microbiome culture, and primary cervical cancer screening were negative, and positron emission tomography revealed increased 18F-fluorodeoxyglucose metabolism in the local cervix. After thorough counseling, the patient requested a hysterectomy and bilateral salpingo-oophorectomy. Outcomes: Histopathological evaluation confirmed adenocarcinoma in situ of the uterine cervix. Lessons: The correct diagnosis of symptomatic patients with increased vaginal discharge is challenging. Human papillomavirus-negative patients presenting with profuse watery vaginal discharge and an abnormal cervical signal on positron emission tomography or magnetic resonance imaging should raise suspicion of cervical adenocarcinoma. Deep-seated cervical biopsy, conization, or even hysterectomy is conducive to early diagnosis, treatment and improvement of prognosis. Introduction An abnormal vaginal discharge, as a frequent manifestation of reproductive tract infections, is usually considered to be related to vaginosis, including sexually transmitted infections, vulvovaginal candidiasis, and bacterial vaginosis. [1] However, recurrent abnormal vaginal discharge that responds poorly to treatment for vaginosis, with little other information to work with, is often a conundrum for clinicians to diagnose and treat. This case describes a woman who suffered from copious vaginal discharge and was confirmed by pathology to have cervical adenocarcinoma after hysterectomy. Case presentation A divorced, childless 41-year-old woman with a bachelor's degree had been suffering from abnormally increased vaginal discharge for the past 4 years, presenting as an annoying, copious, faint yellow discharge that she perceived as smelly and that left her underwear unremittingly wet. During the past 4 years, she visited numerous hospitals, and many medical examinations and treatments for vaginosis were administered, with no clear diagnosis and little effect. Her menstrual cycle was regular and she had never had a miscarriage. She did not have any signs of discomfort such as fever or pelvic pain, and she denied sexual activity for the 6 years after her divorce. In December 2022, she was tested for human papillomavirus (HPV) and with the ThinPrep cytology test (TCT) for primary cervical cancer screening and showed negative results at a local hospital. An ultrasound examination suggested the possibility of adenomyosis (uterus size of 5.1 × 4.0 × 4.0 cm), endometrial polyps (size of 0.8 × 0.5 cm) and multiple cervical cysts (maximum size of 1.4 × 1.3 cm). The patient discussed in this case gave her full permission for the disclosure of her case history, images and textual materials for publication. At the patient's strong request, a hysteroscopic
endometrial polypectomy and a cervical loop electrosurgical excision procedure were performed; postoperative pathology showed endometrial polyps and a low-grade squamous intraepithelial lesion. However, the copious vaginal discharge persisted after the surgery. In January 2023, she was admitted to the gynecological ward of our hospital with the aim of establishing a clear diagnosis and controlling her symptoms. Gynecological examination revealed a mobile uterus of normal size; the bilateral adnexa were not palpable and there was no tenderness. Neither candida, trichomonas, nor clue cells, nor Chlamydia trachomatis, cervical mycoplasma, or gonococcus were detected in the vaginal discharge. HPV was negative when tested by both hybrid capture II and polymerase chain reaction. TCT was also negative. An ultrasound examination suggested no obvious abnormality (Fig. 1). Positron emission tomography (PET) revealed increased 18F-fluorodeoxyglucose metabolism in the local cervix (Fig. 2). Colposcopy was performed: clear watery secretions covering the cervix were seen, and after wiping them off with a cotton swab, no obvious gland openings could be observed on the surface of the cervix. The acetowhite epithelium was thin, and Lugol staining was applied (Fig. 3). Serum tumor markers were negative. A provisional diagnosis of a cervical lesion was made. The patient was given a thorough explanation, and she requested a hysterectomy and bilateral salpingo-oophorectomy. After she provided written informed consent, a laparoscopy was performed. During the operation, there was no ascites in the pelvic cavity, and a uterus of normal size and normal bilateral fallopian tubes and ovaries were seen. Dissection of the uterus showed that the cervical canal was cylindrical, without any obvious space-occupying lesion, and that the local tissue was thickened. There was no abnormality of the endometrium or myometrium. Abundant mucus was found in the uterine cavity and cervical canal. Frozen section pathology suggested cervical dysplasia. Postoperatively, the patient was sent to the ward and recovered well. Histopathological evaluation confirmed usual-type adenocarcinoma in situ of the uterine cervix (Fig. 4). Discussion Cervical cancer is the leading cause of cancer death in women in the developing world [2] and remains the fourth most common cancer in women worldwide. [3] HPV infection is recognized as a major causative factor in the development of cervical cancer. [4] Primary cervical cancer screening (HPV and TCT) makes it a highly preventable disease that can be easily treated if detected at an early stage. [5] However, HPV-negative cervical cancer accounts for approximately 3% to 8% of all cases, [6] and cervical adenocarcinoma is frequently (15%-38%) HPV-negative. [7] In this case, both hybrid capture II and polymerase chain reaction were applied to test for HPV in order to avoid false-negative results from sampling errors as well as pitfalls in the classification accuracy of the HPV test. Key Points: HPV-negative patients presenting with profuse watery vaginal discharge should raise suspicion of cervical adenocarcinoma; deep-seated cervical biopsy, conization or even hysterectomy is conducive to early diagnosis, treatment and improvement of prognosis. Figure 1. The ultrasound examination suggested a 5.1 × 4.0 × 4.4 cm uterus with normal blood supply and an endometrium 0.7 cm thick (A); a cervix with uniform blood supply (B) and without a space-occupying lesion (C) except for multiple cervical cysts, the largest measuring 1.6 × 1.3 cm (D). HPV-negative cervical cancer is an easily neglected disease due
to a lack of attention from clinicians, who may hold the misconception that HPV infection is the sole cause of cervical cancer. It has been reported that the majority of studies on cervical cancer have aimed to discover the viral oncogenic mechanism of HPV and to develop diagnostic methods for early detection, HPV vaccines, and targeted drugs for patients with HPV-positive cancer. [8] This may explain the very limited number of studies focusing on HPV-negative cervical cancer. Nevertheless, it must be emphasized that patients with HPV-negative cervical cancer have a significantly more advanced International Federation of Gynecology and Obstetrics stage at diagnosis, lower disease-free survival, poorer overall survival, and a higher risk of progression and mortality than those with HPV-positive cervical cancer. [7] Interest in HPV-negative cervical cancer needs to be improved. Adenocarcinoma currently accounts for 10% to 25% [9] of all uterine cervical carcinomas and has a variety of histopathological subtypes. It classically arises in middle-aged women with symptoms including profuse watery vaginal discharge and abnormal uterine bleeding. [10] When profuse vaginal discharge is not relieved by treatment for vaginitis, further examination for the differential diagnosis of cervical adenocarcinoma is needed. Cervical tumors can be evaluated by several imaging modalities; it has been reported that squamous carcinoma of the cervix (SCC) tends to be hypoechoic and adenocarcinoma tends to be isoechoic on ultrasound. [11] Computed tomography is limited by poor soft-tissue contrast, and magnetic resonance imaging is superior to CT for local evaluation. [12] Although there was no significant difference in the maximum standardized uptake value between SCC and non-SCC on 18F-fluorodeoxyglucose PET, [13] increased metabolism of the cervix is a signal of a cervical lesion. Patients suffering from a large amount of vaginal discharge who have cervical lesions with an abnormal signal on magnetic resonance imaging or PET should be evaluated for cervical adenocarcinoma.
2023-04-22T05:05:58.338Z
2023-04-21T00:00:00.000
{ "year": 2023, "sha1": "e51734c8352a45f2a8446ea9340ac829f39a2782", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "e51734c8352a45f2a8446ea9340ac829f39a2782", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
242182394
pes2o/s2orc
v3-fos-license
The Effectiveness and Impact of Preoperative Dental Hygiene Care on the Incidence of Postoperative Pneumonia after Esophagectomy: An Interventional Prospective Study Background: Postoperative pneumonia is a major cause of postoperative mortality after esophagectomy. Preoperative oral hygiene care is reportedly effective in preventing pulmonary complications after esophagectomy. Methods: Since April 2012, we have included preoperative oral hygiene in the standard perioperative care regimen for esophagectomy and have accumulated data on 188 consecutive patients undergoing esophagectomy to evaluate the effectiveness of oral hygiene care. To determine the basic (i.e., non-clinical) and clinical effects of preoperative oral care, we prospectively observed the incidence rate of postoperative pneumonia and accumulated perioperative culture study and oral bacteria count data on these 188 patients. One hundred five patients studied in our previous retrospective study from 2009 to 2012 were enrolled as a historical control. Results: In the current study's patients, no significant reduction of postoperative pneumonia was observed compared to the historical control (30 out of 188 vs. 21 out of 105, P=0.423). Perioperative culture studies showed significantly decreased positivity in preoperative oral samples (11% in dental plaque and 13% in tongue coating), but no such decrease was observed in studies of postoperative gastric juice and endotracheal sputum. With the exception of postoperative endotracheal sputum, perioperative cultures contained few of the pathogenic microbes identified in pneumonia patients. In the analyses of oral bacterial counts, oral microbial flora were significantly decreased after oral care in both dental plaque (median ratio to before care: 1:0.13, P<0.0001) and tongue coating (median ratio: 1:0.015, P<0.0001); however, only in the dental plaque did the decrease last until the day of the operation (median ratio: 1:0.10, P=0.0008). Logistic regression analysis showed only the bacterial amount in dental plaque on the operation day (P=0.026) to be marginally correlated with the incidence of pneumonia.
Conclusions: Although perioperative oral hygiene care had a significant impact on oral bacterial load, its contribution to the prevention of postoperative pneumonia was limited. Background Esophagectomy is associated with high postoperative morbidity (1,2,3). Most postoperative mortality is accounted for by respiratory complications, whose incidence in the first decade of the 21st century has been reported to range from 16-27% (3,4,5). Innovations and progress have been made in surgical techniques as well as in perioperative care, and postoperative morbidity has decreased (6,7,8). Among the innovations proposed to reduce pulmonary complications after esophagectomy is perioperative oral hygiene care (9,10,11,12). Evidence of the effectiveness of oral hygiene care to prevent postoperative pneumonia has not been convincing, however, due to the small number of studies conducted so far. Our own previous study of 105 esophageal cancer patients suggested that pathological bacteria responsible for postoperative pneumonia were rarely detected in preoperative oro-nasal culture specimens (13). This calls into question the theory that postoperative pneumonia is caused by oral pathogenic bacteria. Both the rationale and the clinical benefit of oral hygiene care before esophagectomy required reappraisal in a prospective study. In this study, we investigated the incidence of postoperative pneumonia in prospectively accumulated esophageal cancer patients who had received preoperative oral hygiene care by dentists before their operation. We also investigated both quantitatively and qualitatively the effect of oral hygiene care on microbes adjacent to the airway. Patients A consecutive series of 188 esophageal cancer patients who underwent esophagectomy from April 2012 to March 2016 in The University of Tokyo Hospital were enrolled. We previously reported the incidence of postoperative pneumonia as 20% (21 out of 105 patients) from 2009 to 2012 in a patient cohort without preoperative oral hygiene care; this cohort was used as a historical control (13). All of the current study participants received preoperative oral hygiene care from dentists, and perioperative bacteriological studies were performed routinely as standard perioperative care. The clinico-pathological characteristics and the surgical procedures of the 105 previous and 188 current patients are shown in Table 1. (Table 1 to be located here.) Study design This was an interventional study with a prospectively accumulated cohort of esophageal cancer patients undergoing esophagectomy after preoperative oral hygiene care. The primary clinical endpoint was the incidence of postoperative pneumonia. The secondary study endpoint was the impact of oral hygiene care on bacteria in the oral cavity and other sites adjacent to the airway. Data obtained from the perioperative bacteriological cultures and oral bacterial counts were analyzed to evaluate oral hygiene care's effectiveness in both clinical and non-clinical terms. Oral hygiene care Patients visited dentists as outpatients, and screening for dental disease was done by dental pantomography. Tooth extraction was performed when severe dental disease was noted. Buccal swab samples were retrieved for quantitative bacterial analysis as described in the subsequent section. After these steps, patients were given detailed instructions on tooth-brushing techniques and advised to perform tooth brushing at least three times a day.
No subsequent care was given by dentists except on occasions such as follow-up visits after tooth extraction. Perioperative culture studies Culture specimens were collected at the sites and timings reported below and sent immediately to the laboratory. Bacteriological studies were performed in exactly the same way as those in our previous study (13). The culture specimens were (A) dental plaque and (B) tongue coating immediately before surgery; (C) gastric juice from the gastric conduit immediately before anastomosis; (D) sputum obtained by endotracheal suction during the operation; (E) gastric juice from the nasogastric tube on the first or the second postoperative day; (F) sputum obtained by endotracheal suction within three days after surgery. Quantitative measurement of oral bacteria Quantitative measurement of oral bacteria was begun in March 2013. Buccal swabs were done with cotton swab sticks from the 5th tooth (or, if missing, from any other tooth) and the tongue coating. These swab samples were processed using a Bacterial Counter (Panasonic Healthcare Co., Ltd., Tokyo, Japan) according to the manufacturer's instructions. This device measures the dielectrophoretic impedance in the aqueous medium washing the cotton swab to quantify the amount of microbes trapped in the cotton swab (14,15). If possible, this quantification of the oral cavity microbial load was repeated three times for each patient: the first and the second samples were retrieved before and immediately after oral hygiene care, and the third was retrieved in the early hours of the day the surgery was performed. (The first and the second samplings, however, had to be abandoned in January 2015 when the medical staff collecting these specimens retired.) Increases or decreases in the bacterial load detected in the three types of specimens and their association with the incidence of postoperative pneumonia were the subjects of the analysis. Diagnostic criteria of postoperative pneumonia The diagnosis of pneumonia was made in accordance with the Japanese Respiratory Society's Guidelines for Hospital Acquired Pneumonia in Adults. This diagnosis was contingent on the presence of pulmonary infiltrates in the standard chest radiography and at least two of the three criteria (a) pyrexia (>38.0 degrees), (b) leukocytosis (>12,000/mm3) or leukocytopenia (<4000/mm3), and (c) purulent airway exudates. All cases of pneumonia diagnosed by the above diagnostic criteria occurring within 14 days after the operation were retrospectively defined as postoperative pneumonia [16]. These criteria are identical to those of our previous study. Statistical analysis Proportional differences were tested by Fisher's exact test. Student's t-test was used to compare group differences. Association of the oral bacterial count with the incidence of pneumonia was verified by logistic regression analysis. A value of P<0.05 was regarded as statistically significant. All analyses were performed using JMP Pro software version 14.0.0 (SAS Institute Inc., Cary, NC, USA). Incidence of pneumonia Diagnoses of pneumonia were made from four to fourteen days after the surgery (median: 6 days). In the current cohort, 30 out of 188 patients (16%) suffered from postoperative pneumonia; the reduction in pneumonia incidence in comparison to our previous cohort (16% vs. 20% (21/105), P = 0.423) was not considered significant.
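As a quick sanity check on the headline comparison reported above (30 of 188 vs. 21 of 105 patients, P = 0.423), the same 2×2 proportion test can be reproduced with a standard statistics library. This is only a sketch of the calculation; the authors' analyses were performed in JMP Pro.

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = cohorts, columns = (pneumonia, no pneumonia)
current = [30, 188 - 30]      # 2012-2016 cohort with preoperative oral hygiene care
historical = [21, 105 - 21]   # 2009-2012 historical control without oral care

odds_ratio, p_value = fisher_exact([current, historical])
print(f"odds ratio ~ {odds_ratio:.2f}, two-sided P ~ {p_value:.3f}")  # P close to the reported 0.423
```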
Perioperative Cultures And Incidence Of Pneumonia Positivities of the preoperative oral specimens, namely (A) and (B), were lower (11%, P = 0.018 and 13%, P = 0.0011, respectively) in the current study patients; in the intraoperative and postoperative culture studies, namely (C), (D), (E) and (F), the current and the previous studies' positivities did not differ. None of the culture studies (from (A) to (F)) showed significant associations with the incidence of pneumonia (data not shown). In 25 out of the 30 pneumonia patients, bronchoscopically suctioned sputum was collected soon after the onset of pneumonia; Table 3 shows the results of bacteriological studies performed before and after the onset of pneumonia. (Table 3 to be located here.) Twenty-three cases presented microbial growth in the sputum specimen after onset; species detected included Klebsiella pneumoniae (7 cases), Pseudomonas aeruginosa (5 cases), Xanthomonas maltophilia (8 cases), as well as other microbes in a few cases. In 10 out of the 23 cases these presumably pathogenic microbes were identified in the perioperative studies performed before the onset of pneumonia. Such identification was observed most frequently in (F) endotracheal sputum retrieved after surgery (8 out of 20); detection rates were lower in the other bacteriological study categories. Effect Of Oral Hygiene Care On Oral Bacterial Amounts Quantitative evaluations of oral bacteria were performed using the Bacterial Counter for 129 patients; complete sets (before oral care, after oral care, on the operation day) were available from 62 patients. Histograms in Fig. 1 show rates of reduction relative to before-care levels of the bacterial amounts on two occasions, soon after oral care and on the day of the operation. Bacterial amounts were significantly decreased immediately after oral hygiene care in both dental plaque (median ratio to before oral care 1:0.015, P < 0.0001) and tongue coating (median ratio to before oral care 1:0.13, P < 0.0001). The decrease on the operation day was also significant in dental plaque (median ratio 1:0.10, P = 0.0008) but not in tongue coating (median ratio 1:0.83, P = 0.2313). Correlation Of Oral Bacterial Amounts To Incidence Of Pneumonia We also investigated whether oral bacterial amounts were associated with the incidence of postoperative pneumonia. Table 4 summarizes the logistic regression analyses for their possible association with the incidence of postoperative pneumonia. Bacterial amount in dental plaque was the sole criterion that showed a (marginal) association. Discussion In this study, we investigated preoperative oral hygiene care's effectiveness in reducing the incidence of postoperative pneumonia after esophagectomy. We also analyzed oral care's basic (i.e. non-clinical) effects on microbes that might induce postoperative pneumonia. Culture studies as well as analysis of the oral cavity bacterial flora showed clear effects of oral hygiene care. These effects did not, however, appear to have reduced postoperative pneumonia. Comparison of the previous and the current cohorts might have been confounded by evolution in the types of surgical procedure employed, especially the surgical approach and the reconstruction route. The current cohort included a greater proportion of non-transthoracic approaches and posterior mediastinal route reconstruction (Table 1).
When patients undergoing non-transthoracic esophagectomy were excluded from both studies, the difference in pneumonia incidence between the previous and the current cohorts was smaller (21% (21 out of 102) vs. 20% (26 out of 131 patients)). When the two cohorts' comparisons were performed separately for each of the three types of reconstruction route, no significant difference in pneumonia incidence was observed (data not shown). Our current findings are inconsistent with previous reports investigating the effectiveness of preoperative oral hygiene care for patients undergoing esophagectomy (10,11,12); evidence of its effectiveness has also been reported by several studies in fields of surgery other than esophagectomy (17,18). The discrepancy might be explained in part by differences in the preoperative intervention methods. The fact that study participants' compliance and adherence were not investigated also limits the significance of our findings. However, the intervention included in our study did reduce the rates of positive findings of oral bacteria in culture studies and also reduced bacterial amounts in dental plaque on the operation day. Postoperative culture studies, however, showed no decrease in positivity, which implies that the effect of preoperative oral hygiene care had become attenuated or limited in the meantime, and that improvement in the postoperative airway environment had no meaningful lasting effect. In patients after esophagectomy, airway contamination by regurgitated gastric contents must also be considered a significant source of pathogenic microbes responsible for postoperative pneumonia; recurrent nerve injuries as a surgical complication or clinical manifestations of lymphatic metastasis may also corrupt the airway environment. Given the existence of such adverse factors, preoperative oral hygiene interventions cannot be assured of success in improving postoperative airway bacteriological status. Preoperative oral hygiene care may therefore be less useful for patients undergoing esophagectomy than it is for patients undergoing other types of surgery. In our attempt to identify quantifiable factors associated with the frequency of pneumonia, only bacterial amounts in preoperative dental plaque appeared to have (marginal) significance. Preoperatively reduced bacterial amounts can be interpreted as a sign of successful oral hygiene care; in those patients whose adherence to oral hygiene care was exceptionally good, postoperative pneumonia may well have been prevented effectively through these measures. Preoperative oral hygiene care may also have different effects in individuals with varying compliance, dental disease, and susceptibility to pneumonia. In sum, the oral hygiene care investigated in our study showed significant effects on the oral bacterial studies' findings, but those effects were insufficient to reduce the incidence of postoperative pneumonia after esophagectomy. Preoperative reduction of oral bacteria as a preemptive measure to prevent postoperative pneumonia awaits continued re-evaluation through further prospective studies or more powerful oral hygiene interventions. Declarations Ethics approval and consent to participate This study was approved by the institutional review board of The University of Tokyo Hospital (No. 3383). All participating patients gave written informed consent. Consent for publication Not applicable.
Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
2020-07-30T02:09:09.524Z
2020-07-28T00:00:00.000
{ "year": 2020, "sha1": "2fcbb4c093ff21821c72075e310892d18a025202", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-41598/v1.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "846a0125522e4a43f17a82f0a67017823e5c84cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
226389268
pes2o/s2orc
v3-fos-license
Marchiafava–Bignami Disease Associated with Spinal Involvement Marchiafava–Bignami disease (MBD) is a rare disorder of unknown etiology, strongly associated with alcoholism and malnutrition. MBD causes primary involvement of the corpus callosum, leading to confusion, dysarthria, seizures, and frequent death. We report the case of a 54-year-old male without a history of alcoholism or known malabsorption disease, who presented with altered consciousness and neurologic impairment. A vitamin B complex deficiency was addressed. Magnetic resonance imaging (MRI) showed typical corpus callosum lesions. The clinical features and radiologic images suggested spinal cord involvement. Brain histopathologic findings were consistent with MBD. Despite vitamin replacement therapy, he had a poor outcome. Introduction Marchiafava–Bignami disease (MBD) is a rare disorder that affects adults between 40 and 60 years of age, with a male predominance and a history of chronic alcoholism and/or malnutrition. It has been associated with vitamin B complex deficiency, and its main feature is progressive demyelination and necrosis of the corpus callosum [1]. There is also involvement of other cerebral structures, such as the hemispheric white matter, middle cerebellar peduncles, and basal ganglia. Few cases of spinal cord involvement have been reported [2,3]. Clinical features are heterogeneous and nonspecific. MBD can cause impairment of consciousness, neuropsychiatric symptoms, dysarthria, tetraparesis, hypertonia, ataxia, seizures, and symptoms of interhemispheric disconnection [4][5][6]. A clinical and radiologic classification proposed by Henrich divides patients into two types: type A, characterized by involvement of the entire corpus callosum, and type B, characterized by partial or focal callosal lesions [5,7,8]. Although the prognosis is variable, the mortality is high. Early thiamine replacement therapy is associated with a better outcome, whereas the effectiveness of steroids and other treatments remains hypothetical [9,10]. Different etiologies (infectious, vascular, metabolic, demyelinating, or neoplastic) can cause corpus callosum signal abnormalities. Therefore, the exclusion of differential diagnoses is essential [11,12]. We present the case of a 54-year-old man suffering from vitamin B complex deficiency and spinal cord involvement, with histopathologic examination consistent with MBD. Case Report A 53-year-old male was admitted to our hospital due to progressive weakness and an impaired state of consciousness. He had a personal history of noninsulin-dependent diabetes, with no history of alcoholism or malnutrition. Two months before admission, he presented with weakness of the lower limbs, which rapidly progressed, leading to severe gait impairment and wheelchair dependence. Initial evaluation was performed at another medical center, and he was referred to our hospital one week later. On admission, he had signs of respiratory distress, which led to intubation and transfer to the intensive care unit. At the initial examination, the patient was in a spontaneous coma. Pupils were symmetric and reactive. There were no signs of cranial nerve impairment, and brainstem reflexes were normal. He had an abnormal extension posture of the upper limbs, with normal withdrawal of the lower limbs to painful stimuli. Muscle tone and reflexes were normal, and no meningeal signs were found.
A brain MRI showed extensive T2 and fluid-attenuated inversion-recovery (FLAIR) hyperintensities involving the cerebral peduncles and middle cerebellar peduncles, the corpus callosum, and the corona radiata (Figures 1(a)-1(d)). Review of the diffusion-weighted imaging (DWI) showed a homogeneous and symmetrical high signal throughout the lesions, with a corresponding decrease of apparent diffusion coefficient (ADC) values on the ADC maps, indicating diffusion restriction (Figures 2(a)-2(f)). Based on the clinical and imaging findings, MBD versus central nervous system demyelinating disorders (i.e., aggressive multiple sclerosis, acute disseminated encephalomyelitis, or neuromyelitis optica spectrum disorder) was suspected. Blood count showed severe pancytopenia with megaloblastic anemia. Serum vitamin B1 and B12 levels were very low. Levels of serum homocysteine were elevated, while malonic acid and serum folic acid were normal. CSF analysis revealed hyperproteinorrachia (468 mg/dl) and increased lactic acid (7 mmol/l). Oligoclonal bands, AQP4 IgG, and anti-MOG IgG were negative. Viral and bacterial infections were ruled out (viral PCR screen and bacterial cultures). Other laboratory tests are shown in Table 1. Treatment with IV thiamine, high doses of parenteral vitamin B12, and methylprednisolone pulses (1000 mg/day for 5 days) was started. A brain biopsy was performed on the 20th day of hospitalization, revealing white matter necrosis, areas of demyelination, and gliosis, as well as macrophage infiltrates and perivascular lymphocytes (Figure 4). These findings were also consistent with MBD. Although laboratory parameters improved with treatment (reaching normal values of both vitamins), there was no clinical improvement, as shown in Table 2. The patient died on day 125 of hospitalization. Discussion Marchiafava and Bignami were Italian pathologists who, in 1903, described a syndrome characterized by selective demyelination and necrosis of the corpus callosum. This entity affected consumers of large volumes of red wine [13]. Later on, this condition was also found in people with severe nutritional deficits [14]. It has been associated with vitamin B complex deficiency, primarily thiamine deficiency. In a few cases, low serum cyanocobalamin levels were also associated with spinal cord involvement [2,3]. Even though our patient had no personal history of alcoholism or malnutrition, low serum levels of vitamins B1 and B12 were found. The pathophysiological mechanism of MBD remains unclear. Possible mechanisms include cytotoxic edema, breakdown of the blood-brain barrier, demyelination, and necrosis. Cytotoxic edema is possibly the underlying mechanism involved in the early stages, while demyelination and necrosis take place in later stages. Morel's laminar sclerosis of the cerebral cortex is seen [15]. MBD clinical manifestations vary and are nonspecific, posing a diagnostic challenge, and different clinical presentations have been described [16]. Acute MBD causes seizures, altered consciousness, and limb hypertonia. Subacute MBD is characterized by confusion, dysarthria, behavioral abnormalities, visual disturbance, memory deficits, signs of interhemispheric disconnection, apraxia, tetraparesis, and gait disorders. The chronic clinical presentation of MBD is the least common and is characterized by chronic dementia. Severe impairment of consciousness and neurocognitive deficits at the beginning appear to be indicators of a poor prognosis [1,5,15].
The clinical manifestation in our patient was consistent with an acute/subacute presentation. Typical MRI findings in patients with acute MBD include symmetric T2 and FLAIR hyperintensity of the corpus callosum, with low signal intensity on T1-weighted sequences. DWI restriction of the cerebral cortex, hemispheric white matter, middle cerebellar peduncles, and basal ganglia has also been described. Contrast enhancement of the corpus callosum, which resolves in the subacute stage, has been reported. In some cases, the T2 signal of the corpus callosum normalizes, evolving into symmetric atrophy [5,15]. Classifying our patient into type A or B was difficult, as he presented with clinico-radiological features of both types. Type A is characterized by severe impairment of consciousness, seizures, dysarthria, and hemiparesis. Imaging findings are swelling of the entire corpus callosum and extracallosal lesions. It has been associated with a poor outcome (death in up to 21% of cases). In contrast, type B presents with slight impairment of consciousness, focal callosal lesions on MRI, and a lower mortality rate [5,17]. An interesting feature of our case was the extensive spinal cord involvement from C2 to T1. We thought that this finding could be associated with B12 hypovitaminosis. The combination of cerebral and spinal cord involvement has only been reported in two previous MBD cases [3,18]. Histopathological findings of MBD show demyelination (particularly of the central fibers of the corpus callosum) with myelin breakdown products, foamy macrophages, perivascular lymphocytes, gliosis, and white matter necrosis [19,20]. Most reported cases are based on postmortem examinations. In our case, a brain biopsy was performed, and the findings were consistent with MBD. Regarding prognosis, MBD is usually associated with a high mortality, although some cases with a favorable outcome have been reported [9]. In a large published series of 250 patients, only 20 (8%) presented a favorable recovery [7]. Alcoholic MBD was associated with a less favorable outcome, and death was caused mainly by infectious complications [1,21]. Given the association with vitamin B complex deficiency, thiamine replacement has become a widespread treatment option, and a better outcome has been described in subjects treated in the acute stage. Corticosteroids may reduce brain edema, suppress demyelination, stabilize the blood-brain barrier, and reduce inflammation. Although some publications have reported an improvement with this therapy, its benefits are still doubtful and further research is required [12,21]. Conclusion MBD is an uncommon entity that represents a neurological emergency. It requires strict multidisciplinary care in the acute stage due to its high morbidity and mortality. Our patient had an atypical presentation with spinal cord involvement without the typical comorbidities. His diagnosis was a real clinical challenge. Our hypothesis in this case was a double mechanism of neuronal degeneration due to nutritional deficiency of both vitamins B1 and B12. Ethical Approval The institutional review boards and ethics committees approved this manuscript. Consent Informed consent was signed by the patient's family member. Conflicts of Interest The authors declare no conflicts of interest.
2020-10-29T09:02:22.165Z
2020-10-28T00:00:00.000
{ "year": 2020, "sha1": "596703a6353323d68247f9c561678b85096f6fe2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2020/8867383", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "95a44da7450fabbe376088d4552cf89babbeaad4", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
235280288
pes2o/s2orc
v3-fos-license
Influence of thickness nonuniformity of piezoelectric zinc oxide layer on parameters of microelectronic BAW solidly mounted resonator The paper presents the experimental results of a study of the influence of the thickness nonuniformity of a piezoelectric zinc oxide film on the equivalent electrical parameters of microelectronic BAW resonators: the static and dynamic capacitances, the dynamic inductance, and the dynamic resistance. These parameters were determined using the experimental frequency dependences of the resonator impedance and the Butterworth-van Dyke model. The resonators under investigation had an operating frequency of 2.8-3.0 GHz, with a frequency spread of about 500 MHz. The quality factor of the resonators was 250-350, and the relative width of the resonant bandwidth of the resonators was equal to 0.2-0.4%. Introduction The modern technology of mass production of microelectronic devices, including magnetron deposition of multilayer thin-film structures, involves the use of dielectric substrates of 60x48 mm2 or more, while the dimensions of the devices are 1.5x1.5 mm2. As a result, the electrophysical properties of thin films (for example, surface resistance, layer conductivity, etc.) may vary across the substrate [1][2]. Possible reasons for the occurrence of nonuniformity in the layer thickness include the quality of the substrate material, the quality of the substrate washing before sputtering, the percent depletion of the target material, and the design and technological capabilities of the magnetron sputtering installation (configuration and magnitude of the magnetic induction of the magnetron, configuration of the correcting diaphragm, distance from the target to the substrate, etc.). Such nonuniformity has a great influence on the operation of microelectronic devices working in the microwave range, in particular thin-film resonators based on bulk acoustic waves (BAW). Nonuniformity in the film thickness leads to a change in the operating frequency of the device and, in some cases, to the impossibility of using the resonators as part of a finished product [2][3][4][5]. The device most sensitive to changes of the layer thicknesses is the microelectronic BAW resonator with a Bragg reflector, called a solidly mounted resonator (SMR). The SMR is a multilayer structure whose design contains up to 15 thin-film layers. A change in the thickness of each layer of the Bragg reflector leads to a change in the conditions for the reflection of bulk acoustic waves in its structure [6][7][8]. Due to the wide bandwidth of the Bragg reflector, a shift of the resonant frequency has a weak effect on the performance of the reflector. However, additional difficulties arise when the thickness of the piezoelectric film changes, because in this case the resonant frequencies of the BAW resonator are shifted. As a consequence, additional technological operations are required to control the resonator frequencies and bring them to the nominal value. It is known that a change in the design parameters of the resonator that shifts its operating frequency also affects the dynamic parameters of the resonator. For particular problems, stringent requirements can be set that limit certain dynamic parameters of the resonators, for example the dynamic inductance. Knowing the nature of the change in the dynamic parameters, it is possible to optimize the thicknesses of the resonator layers. Until now, this issue has not been considered in detail.
In this regard, the purpose of this work is to study the effect of the thickness of a piezoelectric zinc oxide film on the frequency characteristics and the equivalent electrical parameters of a microelectronic BAW SMR. Method of experiment To study the effect of the thickness nonuniformity of a piezoelectric zinc oxide layer on the electrical characteristics of a microelectronic BAW SMR, a set of resonator designs with different upper-electrode areas in the range from 0.01 mm2 to 0.04 mm2 was developed. The structure of the resonator is a CT-50-1 sitall substrate, on which the layers of a Bragg reflector based on five pairs of molybdenum and aluminium films and the layers of a piezoelectric transducer based on zinc oxide films and aluminium electrodes are deposited. The thin-film layers of the resonator were deposited by the magnetron method in a single technological cycle. The distance from the target to the substrate was 70 mm, and the target had a size of 160x70 mm2. The thickness of the thin-film layers was controlled by the resonance method with an accuracy of 0.1%. The thicknesses of the BAW resonator layers were calculated for an operating frequency of 2.9 GHz. After deposition of the multilayer structure of the resonator, the configuration of the upper electrode was formed using a photolithography operation. The electrical parameters of the resonators were investigated using an E5071C vector network analyzer (Agilent Technologies) in the frequency range from 100 MHz to 8 GHz in reflection mode (measured parameter S11). Before all measurements, the measured parameters were corrected using a Cascade Microtech CSR-15 calibration board to eliminate the parasitic influence of the frequency characteristics of the cables and microwave probes. The dynamic parameters of the resonator were determined according to the Butterworth-van Dyke model using the algorithm described by the authors in [9]. In the Butterworth-van Dyke model, the equivalent electrical circuit of the BAW resonator includes the static (C0) and dynamic (Cm) capacitances, the dynamic inductance (Lm), and the dynamic resistance (Rm). The static capacitance of the resonators was measured using the E5071C network analyzer according to the Wolpert-Smith diagram at a frequency of about 100 MHz. The quality factor of the resonators was determined from the frequency dependence of the active conductance of the BAW SMR. Measurement of the upper-electrode dimensions was carried out with a KN-8700 high-resolution video microscope. Results and their discussion On the basis of the developed design and technology, prototype resonators were fabricated. These resonators operated at frequencies of 2.6-3.1 GHz and had a quality factor of 250-350; their resonant bandwidth was 8-14 MHz. The external view of the cross section of the microelectronic BAW SMR is shown in Fig. 1. After fabrication of the resonators, the substrate was divided into modules, each containing 8 resonators with different upper-electrode areas (Sel). In this work, 30 modules with resonators fabricated in one technological cycle were investigated. Figure 2 shows typical frequency dependences of the active conductance (G) and the modulus of the electrical impedance (|Z|) of a resonator based on a piezoelectric zinc oxide film. The series resonance frequency of this resonator is 2.803 GHz, the resonant bandwidth of the SMR is 11.3 MHz, and the Q factor is 332.
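To make the Butterworth-van Dyke (BVD) description above concrete, the sketch below evaluates the impedance of the four-element equivalent circuit (the static capacitance C0 in parallel with the series Rm-Lm-Cm motional branch) and locates its series and parallel resonances. The component values are illustrative assumptions chosen only so that the series resonance and Q land near the reported ranges (about 2.8 GHz and Q of roughly 280); they are not the fitted parameters of the measured device.

```python
import numpy as np

# Illustrative BVD parameters (assumed, not fitted to the measured resonator)
C0 = 2.0e-12   # static capacitance, F
Cm = 80e-15    # dynamic (motional) capacitance, F
Lm = 40e-9     # dynamic inductance, H
Rm = 2.5       # dynamic resistance, ohm  -> Q = 2*pi*f_s*Lm/Rm ~ 280

def bvd_impedance(f):
    """Impedance of C0 in parallel with the series Rm-Lm-Cm motional branch."""
    w = 2 * np.pi * f
    z_motional = Rm + 1j * w * Lm + 1 / (1j * w * Cm)
    z_static = 1 / (1j * w * C0)
    return z_motional * z_static / (z_motional + z_static)

f_s = 1 / (2 * np.pi * np.sqrt(Lm * Cm))   # series resonance of the motional branch
f_p = f_s * np.sqrt(1 + Cm / C0)           # parallel (anti-)resonance
print(f"f_s ~ {f_s / 1e9:.3f} GHz, f_p ~ {f_p / 1e9:.3f} GHz, "
      f"|Z(f_s)| ~ {abs(bvd_impedance(f_s)):.1f} ohm")
```

In this picture, the measured trends reported below (a thicker zinc oxide film giving smaller C0 and Cm and larger Lm and Rm) can be read directly as shifts of these four BVD elements.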
The maximum value of the active conductance corresponds to the series resonance frequency and is equal to 8.43 mS. The modulus of the electrical impedance at the series and parallel resonance frequencies is 44.3 ohms and 64.1 ohms, respectively. After recording all the electrical characteristics of the BAW resonators, the experimental results were processed according to the algorithm presented by the authors in [9]. The thickness of the zinc oxide film was determined from the series resonance frequency of the BAW SMRs. Based on the data obtained, the dependences of the equivalent electrical parameters of the resonator on the thickness of the piezoelectric zinc oxide film were plotted, Fig. 3. Figures 3a and 3b show that when the thickness of the zinc oxide film increases by a factor of 1.2, the values of the static and dynamic capacitances decrease by factors of 1.23 and 1.6, respectively. With the same 1.2-fold increase in the thickness of the zinc oxide film, the values of the dynamic resistance and inductance increase by a factor of 2. The general character of the change in the static capacitance and dynamic inductance is preserved for different areas of the upper electrode of the BAW SMR. However, for resonators with an upper-electrode area of less than 0.0225 mm2, we observe a smaller change in the dynamic capacitance, in contrast to resonators with an upper-electrode area from 0.0256 mm2 to 0.04 mm2. For the dynamic resistance, we observe the opposite picture: for resonators with an area of less than 0.0225 mm2, we observe a greater change in the dynamic resistance. Conclusion This paper presents the results of experimental studies of the dependence of the electrical parameters of a microelectronic BAW SMR on the thickness nonuniformity of a piezoelectric zinc oxide film. The fabricated resonators operate at frequencies from 2.6 GHz to 3.1 GHz, the bandwidth of the resonators is 8-14 MHz, and the Q factor is 250-350. It is shown that an increase in the thickness of the zinc oxide film by a factor of 1.2 leads to a decrease in the values of the static and dynamic capacitances by factors of 1.23 and 1.6, respectively, and to an increase in the values of the dynamic resistance and dynamic inductance by a factor of 2. The results obtained will be useful for developers of microwave frequency selection and signal generation devices, sensors and biosensors, as well as developers of other devices based on microelectronic BAW SMRs.
2021-06-03T01:34:15.283Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "40301aeb15eea4b757060afcd11ece6bace67b92", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1901/1/012110", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "40301aeb15eea4b757060afcd11ece6bace67b92", "s2fieldsofstudy": [ "Engineering", "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
54609549
pes2o/s2orc
v3-fos-license
Cracking Back: The Effectiveness of Partisan Redistricting in the Texas House of Representatives The authors thank Jeremy M. Teigen and Gary King for their assistance. Portions of our research were funded by the Public Policy Institute at the University of Texas at Austin, and we thank David Leal, the institute's director, for his assistance. We also thank Charles Eckstein, Research Assistant for the Texas Legislative Council, for providing most of the data used in our study. As always, the authors are solely responsible for the analysis, the conclusion, and any errors. In 2003, the redistricting battle in Texas played out in dramatic fashion, as Democrats in the Texas House and Texas Senate both left the state at different times in an ultimately unsuccessful attempt to stop the majority in the Legislature from implementing a congressional redistricting plan that greatly favored Republicans. In this paper, we show that the drama provided by the escapes to Ardmore, Oklahoma and Albuquerque, New Mexico was the result of a previous redistricting: the 2001 redrawing of lines for the Texas House of Representatives. Those new lines allowed Republicans to take the majority in that body in the 2002 elections and complete the first sweep of Texas government by Republicans since Reconstruction. Our study uses a methodology that allows us to evaluate the partisan impact of redistricting by comparing different plans proposed for the Texas House in 2001. Our analyses show that if Texas Democrats had been able to implement their map, they would have retained a majority of Texas House seats in 2002, an important finding since the 2003 congressional redistricting would not have been possible if Texas House Republicans had remained in the minority. In addition, our paper makes two other contributions: (1) we use innovative methods to compare the partisan effects of the plan used in the 2002 and 2004 elections with proposed maps that did not go into effect, and (2) we show that rules matter in redistricting, especially in the context of substantial party system change, with the Texas GOP making rapid political gains at all levels of electoral competition. The paper proceeds as follows. We provide a discussion of what scholars have learned about redistricting and what we still need to explore. Then we examine redistricting in Texas, providing some historical perspective on Texas House elections in the 1990s and the 2001 redistricting, and introduce the plans we have selected for analysis. We present our model and results, using the JudgeIt statistical program to analyze the redistricting plans. Last, we conclude with a discussion of the significance of Texas House redistricting, examining in particular how our results provide insight into the future of Texas politics. What We Know (and Don't Know) about Partisan Redistricting Redistricting studies in the 1980s focused on the partisan impact of redistricting in the post-Baker v. Carr era. Results were mixed (Cox and Katz 2002, 21-22). Some studies found that the party of the line drawer had only modest effects on the partisan seat balance (Ayres and Whiteman 1984; Cain 1985; Niemi and Jackman 1991; Squire 1985), while others found that redistricters did succeed at creating seats for their party (Abramowitz 1983; Cranor, Crawley, and Scheele 1989; Gopoian and West 1984).
1 In the 1990s, research on the impact of redistricting focused more on the impact of race on redistricting outcomes. Driving this line of research was the increase in the number of majority-minority districts, which states had to create in the 1990s round of redistricting due to Justice Department interpretations of several late-1980s court cases (Bullock 2000; Clayton 2000; Cunningham 2001). Redistricting has been especially contentious throughout the South because of the strong connection between race and partisanship. After the 1990 Census, the creative cartography of new congressional and state legislative districts, most of which were drawn by Democrats (Niemi and Abramowitz 1994), drew judicial scrutiny. 2 Democratic line drawers had to maximize the number of majority-minority districts (Cunningham 2001), which by definition concentrate the most loyal Democratic voters. But to maximize their party's electoral benefit, Democratic redistricting plans had to create neighboring districts that could offset the loss of loyally Democratic minority voters with others who could be expected to be most supportive of the Democratic Party. 3 In addition, Anglo 4 Democratic incumbents were protected by minimizing the percentage of their district populations that contained residents who resided in a different district prior to redistricting (Petrocik and Desposato 1998). The expectation was that these Democratic incumbents could rely heavily on the votes of constituents they represented both before and after redistricting. Research on the impact of majority-minority districts, despite the best efforts of Democratic line drawers, has generally found that racial redistricting in the 1990s proved beneficial to Republican candidates (Black and Black 2002; Bullock 1995a, 1995b, 2000; Cameron, Epstein, and O'Halloran 1996; Epstein and O'Halloran 1999a, 1999b, 2000; Hill 1995; Hill and Rae 2000; Lublin 1997; Lublin and Voss 2000a, 2000b, 2003; McKee 2002; Swain 1993). These studies found this partisan effect was mainly a consequence of the strong and growing racial polarization in southern elections (Voss and Lublin 2001). Throughout the South, and especially in Texas, the ongoing realignment of whites into the Republican Party has served to widen the difference in vote choice among Anglos and minorities. 5 While research indicates that redistricting can have marked partisan effects, the evidence is generally mixed and case-specific. While political scientists, journalists, and political practitioners all assume that line drawers can manipulate district lines to their party's benefit (but see Rush 1993), questions still remain about the magnitude of that impact. One reason for this gap is that nearly all studies have focused on enacted boundaries (with notable exceptions like Gronke and Wilson 1999). In this paper, we combine a methodological tool (the JudgeIt program) that allows us to examine both enacted and proposed redistricting maps with data not only on the districts used in the 2002 and 2004 elections, but also on districts that existed only in proposed plans.
Texas House Redistricting Table 1 provides a look at the course of Texas House politics in the 1990s. In 1990, Democrats controlled the Legislature and, with the signature of Democratic Governor Ann Richards, passed a redistricting plan that benefited their party. This advantage held throughout the decade of the 1990s. While Republicans came to dominate statewide elections, 6 the Democrats continued to hold onto their advantage in the Texas House throughout the decade, maintaining a 78-72 majority after the 2000 elections. The remarkable partisan transformation in Texas is captured in Table 1. The dramatic change in representation in the Texas House from 1990 to 2002 is documented by the link between the racial/ethnic composition of a district and the corresponding race/ethnicity of the representative. In 1990 there were more Anglo Democrats than Anglo Republicans, but by 2002 over 80 percent of Anglo representatives were Republicans. In a little over a decade, the Texas Democratic Party became the party of racial minorities and the Texas Republican Party achieved majority status by relying overwhelmingly on the electoral support of Anglos. Table 1 illustrates that, despite a pro-Republican tide in the state, the Democrats were able to hold onto their majority through the 2000 elections. Democrats have a nearly rock-solid base of support in districts with high concentrations of minorities, winning all but three majority-minority seats from 1990 to 2002. The Democratic redistricting plan for the 1990s allowed them to keep enough Anglo seats to maintain their majority throughout the decade, despite strong Republican gains concentrated in these seats. Not surprisingly, Republican seat gains in 2002 came almost entirely at the expense of Anglo Democrats who represented majority-Anglo districts. Thanks to their slim majority, Democrats had the edge in the first stage of redistricting in 2001. In this legislative stage, Democratic Speaker Pete Laney dominated the process in the House. A plan proposed by Laney's handpicked Redistricting Committee passed the House on May 8, 2001 by a 75-68 margin on a largely party-line vote. 7 However, disputes in the Texas Senate over their redistricting plan, as well as strong opposition to the Laney plan by the Republican Party and conservative interest groups (Copelin 2001a; Halter 2005, 122), held up all redistricting business, and the Legislature adjourned without passing along a redistricting plan to Republican Governor Rick Perry. Texas law requires that when the legislature and the governor cannot agree on a redistricting plan during the regular legislative session following the decennial census, the responsibility for redistricting passes to the Legislative Redistricting Board (LRB). The LRB is composed of the leaders of the two houses of the Texas Legislature and three other statewide elected officials. 8 The LRB meets for the sole purpose of drawing new districts, and a simple majority is needed to implement a new plan. It is important to emphasize that the plans advocated by Speaker Laney faced long odds of winning approval. Because Republicans constituted a 4 to 1 LRB majority, Republican legislators could simply let redistricting legislation die, and thus allow the LRB to draw maps much more favorable to their party.
9The need for Republican support in the Texas Senate and from the Republican Governor made it extremely unlikely that even a map that protected Texas House incumbents of both parties would pass the Legislature.This context helps demonstrate how the rules matter in shaping both the strategies of political actors and the partisan outcome of the redistricting process. In 2001, led by then Attorney General John Cornyn, the board's Republican majority passed a redistricting plan for the Texas House on July 24, 2001.While amendments to the plan made during the July 24 session of the LRB prevented immediate analysis of its partisan impact, Cornyn had previously estimated that his proposed plan would yield 82 to 88 Republican seats (Copelin 2001b). The redistricting process then moved to its third stage, where the U.S. District Court took up the case.During deliberations, the Department of Justice denied preclearance 10 to the plan passed by the LRB.However, the objections were relatively minor, and related to retrogression of Hispanic voting in three districts in South and Southwest Texas, and in Bexar County (San Antonio).Taking these objections into account, the Court approved the LRB plan with minor modifications in 28 districts (out of 150 total) to account for the Justice Department's objections.The 2002 Texas House elections then proceeded under the lines drawn by the Court. 11 Plans for Analysis Our analysis examines three plans proposed for the Texas House of Representatives in 2001.The first plan, Laney House, passed the Texas House in 2001 with the strong support of Speaker Laney.The goal of Speaker Laney and his allies was to preserve a Democratic majority in the Texas House and allow Laney to retain the Speakership.In addition, we consider the LRB plan, sponsored by Attorney General John Cornyn and passed by the Republican-dominated Legislative Redistricting Board. Again, we assume a partisan goal for both of these plans: increasing the number of seats for their party in the Texas House.The Democratic plan hoped to accomplish its partisan goal by preserving the status quo-helping Democratic incumbents hold onto their current seats by packing Republicans into a smaller number of districts.The Republicans hoped to take control of the Texas House by targeting vulnerable Democratic incumbents.Cracking the districts of Anglo Democrats would force them either to retire or to face a difficult reelection by placing these incumbents in districts with large percentages of new voters.Also, both parties' plans sought to defeat the opposition's incumbents by having them face off against other incumbents (of the same and opposite political affiliations).Finally, the Republican sponsored plans shifted more districts in the state's urban areas to the growing and Republican leaning suburbs (Halter 2005, 123;Ma 2001;Copelin 2001c). While several other redistricting plans were offered, we chose to examine these plans (Laney House and the LRB plan) for two reasons.First, the partisan intent of the two plans is abundantly clear.Second, both plans had to be politically adroit enough to garner a majority of votes in the Texas House and the LRB, respectively.Both plans' sponsors had to balance the different demands of various supporters.We argue that these plans thus represent not just the partisan intentions of their sponsors, but also the most politically viable plans proposed by Democrats and Republicans during the 2001 redistricting process. 
The final plan we consider is the Court Plan, which, as noted above, was the actual plan used in the 2002 elections.Again, the Federal Court made only minor changes to the plan passed by the LRB.In addition, we will analyze the 1990s plan 12 to provide a baseline for evaluating the new plans.As discussed previously, Democrats crafted this plan.Table 2 provides a list of the plans, their sponsors, and their partisan affiliations. In analyzing each of these plans, we focus primarily on the partisan impact of the plan.Specifically, we assess how many seats would Republicans have won had these plans been implemented.And second, we assess whether the sponsors of the two partisan plans would meet their partisan goals of winning as many seats as possible for their party, ensuring majority control of the Texas House. Data and Methods The data we employ in our analyses were provided by the Texas Legislative Council (TLC).The TLC serves as a research institution for the Texas Legislature and it utilizes state of the art mapping technology for the purpose of providing information on redistricting to lawmakers and the public. We have constructed four explanatory variables to assess the partisan effects of three plans proposed for the 2002 Texas House elections.First, we include a variable to account for incumbency.In Texas, state house candidates must live in the district they seek to represent.Based on reports from the TLC, we know which district each incumbent resided in before the election.All of the plans in our study contain some districts that pit incumbents against each other based on district residence. We have followed a consistent method to determine incumbency given the complication of multiple members living in the same district.We award the new district to the incumbent who retains the largest portion of their old district population.For example, if a Democrat and Republican are paired in a district, the Democrat is classified as the incumbent if more of his/her old district population remains compared to the share of the district previously represented by the Republican.We then assume an incumbent runs in every district where applicable, and code our variable 1 for Republican, 0 for open seat, and -1 for Democrat. 13 We also include variables for the percent Black voting-age population (BVAP) and the percent Hispanic voting-age population (HVAP).Data from Texas show a strong and positive correlation between the minority percentage of Texas legislative districts and their propensity to vote for Democrats (Halter 2005, Figures 7.7 and 7.8). 14As discussed earlier, partisan gerrymandering relies heavily on manipulating the racial composition of districts since race is the most accessible and reliable indicator of vote choice apart from party identification. The last independent variable is an index of district partisanship compiled from all six statewide open seat elections taking place in 1998 and 2000.To provide an accurate gauge of two-party competition, we selected only open seat, lower profile contests. 15The index is the Republican share of the two-party vote for each Texas House district in each of the plans.Finally, the dependent variable is the Republican share of the two-party Texas House vote in the 2000 elections. JudgeIt Model To conduct our statistical analyses we use the JudgeIt statistical program created by Andrew Gelman and Gary King (1994). 
16The strength of JudgeIt is its predictive power and versatility.Gelman and King explain that, "the purpose [of JudgeIt] is not estimating causal effects . . .[but] to choose variables that would help in forecasting future votes" (1994,523).The program can be used to generate numerous statistics commonly used to assess redistricting plans (e.g., seats-votes curves, electoral responsiveness, partisan bias, seat predictions, etc.).Furthermore, JudgeIt allows one to evaluate: (1) elections that have already taken place, (2) elections under a new districting plan that have yet to occur, and (3) counterfactual conditions such as no incumbents running for reelection (i.e., all open seats). We employ JudgeIt to estimate partisan bias, electoral responsiveness, and the predicted probability that the Republican Party wins each district in each of the plans we evaluate.Partisan bias is measured as the degree to which a party receives a greater or lesser proportion of seats than what would be fair in terms of the proportion of seats won by the opposite party.In other words, based on its average district vote, one party is able to capture more or less seats in comparison to the other party.For example, if we consider the Republican share of the two-party vote, a partisan bias of -0.03 would mean that the GOP receives 3 percent less seats than they would if there were perfect symmetry in terms of the translation of votes to seats between the parties. Electoral responsiveness measures the percent increase in seats a party is expected to gain based on a 1 percent increase in the average district vote across all districts.So, for instance, a responsiveness of 3.8 percent means that a 1 percent increase in the average district vote for a party across all districts should increase their share of seats in the Texas House by 3.8 percent. Finally, with JudgeIt, under each plan we can generate the probability that each district is won by a Republican candidate.We report the number of seats the Republicans should win under each plan based on the number of seats in which Republicans have a probability of victory that is greater than .5.We also estimate the security of each party's seats, estimating the number of safe seats for each party (> .8probability of winning). 17 We estimate the bias, responsiveness, and the number of seats Republicans should win assuming that the 2002 elections are contested under the lines drawn for each plan.We report the results for the plans assuming that all incumbents run for reelection based on the coding rules listed above.Then, we report the same results assuming that none of the incumbents stand for reelection.By excluding incumbents, we can examine the long-term impact of each of these plans. 
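To make the seat-prediction step concrete, the sketch below (Python) shows how per-district Republican win probabilities of the kind JudgeIt produces can be turned into the summary counts we report: seats where Republicans are favored (probability > .5) and safe seats for each party (probability > .8). The probabilities in the example are invented for illustration and are not taken from any of the plans analyzed here.

```python
# Minimal sketch: turning per-district win probabilities into the seat
# summaries reported in the tables. The probabilities below are invented;
# in the actual analysis they come from JudgeIt.

def seat_summary(rep_win_probs, favored_cutoff=0.5, safe_cutoff=0.8):
    """Count seats where the GOP is favored and safe seats for each party."""
    expected_rep_seats = sum(rep_win_probs)                # sum of probabilities
    rep_favored = sum(p > favored_cutoff for p in rep_win_probs)
    rep_safe = sum(p > safe_cutoff for p in rep_win_probs)
    dem_safe = sum((1 - p) > safe_cutoff for p in rep_win_probs)
    return {
        "districts": len(rep_win_probs),
        "expected_rep_seats": round(expected_rep_seats, 1),
        "rep_favored": rep_favored,
        "rep_safe": rep_safe,
        "dem_safe": dem_safe,
    }

if __name__ == "__main__":
    # Hypothetical probabilities for a 10-district toy example.
    probs = [0.95, 0.88, 0.72, 0.55, 0.51, 0.49, 0.35, 0.20, 0.10, 0.05]
    print(seat_summary(probs))
```

Applied to the 150 districts of a plan, the same tallies would give the seat figures of the kind reported in the tables.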
Our model, like all models, is but a representation of reality.As such, it does not include a number of important factors in determining election results, including the decision of politically skilled or experienced candidates to run for office (Jacobson and Kernell 1983;Hogan 2004;Gierzynski and Breaux 1991).Research on Texas legislative elections has found that the amount of money a challenger spends (Hogan 2000), the quality of campaign communication-which Texas legislative campaigns conduct with direct mail and pamphletting (Hogan 1997), and even how a campaign schedules their candidate's time (Arbour 2006) can impact election outcomes.Certainly in the 2002 Texas House elections, the efforts of the Republican Party and its financial donors to target competitive races and to direct large donations to these contests played a vital role in many Republican victories (Olsson 2002). 18 Our model cannot know these things.Nor can we establish with any certainty what line drawers knew about potential candidates in the upcoming election, or what they estimated the effort of individual campaigns-both in terms or raising money and organizing their get-out-the-vote programswould be.What we can model is what redistricters knew at the time that they drew the lines-the partisan and racial make-up of proposed districts.And we also know that these two factors are essential to explaining the outcomes of state legislative elections (cf.Hogan 2004;Tucker and Weber 1987). Results The data show a clear partisan effect in the redistricting plans.As we expected, the party of line drawers tells us much about the partisan implications of the plan they drew. Short Term Impact of Redistricting Plans Our first analysis uses JudgeIt to examine each of the plans under the conditions of the 2000 elections.We include incumbency in this analysis, again assuming that all incumbents stay in the district of their residence and that all incumbents run for reelection in 2002.This analysis provides a shortterm perspective on the impact of redistricting-examining how each plan would affect the 2002 Texas House elections. As Table 3 shows, we find that Republican line drawers were successful in their attempts to bias the districts in favor of their party.The LRB plan has a Republican bias of 2.7 percent.Speaker Laney and his allies were just as successful in tweaking the lines to his party's favor, as the plan he endorsed is similarly weighted toward Democrats (a 2.6% Democratic bias).The Laney House plan increases the Democratic bias from the 1990s lines.Although it is significant for all plans, responsiveness in substantive terms has a minor effect, and little difference exists between the plans. The impact of the bias of the plans shows up in the expected number of victories per party.The model shows that under Speaker Laney's plan, Democrats are projected to win 79 seats, enough to retain a Democratic majority in the House.The Republicans crafted a plan that would have The Effectiveness of Partisan Redistricting | 395 provided their party with 83 seats, a strong majority in the 150 member Texas House. Long Term Impact of Redistricting Plans To examine the long term impact of the 2001 round of Texas House redistricting, we examined each plan without incumbents.By removing incumbency, we get a better idea of the impact of each plan over the course of the decade, as many of the incumbents currently in the legislature will retire. 
Shorn of incumbents, the plans show strong Republican majorities in the future of the Texas House.Nonetheless, the electoral impact of the individual plans is tempered by the partisan intent of the line drawer.The Laney House plan would have created districts more favorable to Democratic open seat candidates than the 1990s plan.Despite this, Laney was fighting an uphill battle, as the model projects both a Republican bias (though the 1.4% Republican bias is not statistically significant), and more importantly, a Republican majority of 83 seats.The Republicans, not surprisingly given the Republican trend in Texas, do a much better job of creating seats for their fellow partisans; the LRB plan projects 95 Republican seats.The result for responsiveness (3.8%) shows that in the absence of incumbents, relatively small partisan electoral swings will lead to large pickups in the Texas House. Results of the 2002 and 2004 Texas House Elections The districts in place for the 2002 Texas House elections were not drawn by partisan politicians, but instead by a Federal Court.However, as noted above, the judges made only minor changes to the plan passed by the Legislative Redistricting Board.The data show that the impact of the Court plan was similar, but slightly less favorable to Republicans, than the LRB plan.The Federal Court's plan reduces the Republican bias both in the short term and in the long term.Compared to the LRB plan, the seat predictions show that the Court plan, both with and without incumbents, gives the Republicans one less safe seat and one less seat in which they are favored to win. The JudgeIt program evaluates the 2002 plans assuming that electoral conditions are the same as in the 2000 elections.But the 2002 elections were not held under the same conditions as 2000.In 2002, Republicans improved their electoral showing statewide, as the Democratic "Dream Team" ticket of Tony Sanchez for Governor, Ron Kirk for U.S. Senate, and John Sharp for Lieutenant Governor proved disastrous.Not only did all three candidates lose, they also did much worse than expected. 19 The Republican trend continued in Texas House races, as Republicans won 88 seats.Of the 78 Democrats elected in 2000, 14 either retired from politics or ran for higher office, while five members lost to a Republican in November 2002, and another five were defeated in a primary election.The partisan sweep in November affected the next legislative session, as Tom Craddick won election to the Speaker's Chair-the first Republican Speaker in Texas since Reconstruction. In the 2004 Texas House elections, Democrats recorded a minor success, gaining one seat to reduce the Republican majority to 87-63.But the 2004 Texas House elections do not deviate much from the general patterns identified in this paper.Republicans actually gained vote share across the state, improving their total share of the two-party vote from 54.9 percent in 2002 to 57.3 percent, and their share of votes in contested elections from 60.1 percent to 62.1 percent. Democrats netted one seat in large part because they were more successful in closer elections, as eleven of their winners garnered under 55 percent of the vote, compared to only six Republicans who did so. 
20The Democrats who hold these competitive seats tend to be Anglos (nine of the eleven), come from districts with majority Anglo populations (67.7% Anglo on average), and vote Republican in other elections (President Bush not only won all eleven districts, but the 62.0% of the vote he averaged in these districts is higher than his statewide vote share of 61.1%). Thus, Republicans would seem the strong bet to win most of these eleven seats in the near future, either by defeating the Democratic incumbent outright, or by winning open seats when these Democrats retire.In short, while Democrats may be able to occasionally use advantages in incumbency, government performance, or candidate skills to win these seats periodically, their long term prospects of holding these seats appear bleak.Republicans have a more fertile field to pick off Democratic seats in upcoming elections.It would not surprise us to see a Texas House in the near future with more than 100 Republicans. Conclusion Redistricting has played a vital role in contemporary Texas politics.As we have shown, the great variation in the number of Texas House seats that each party would have won is a function of the redistricting plan.With the JudgeIt statistical program, we have been able to measure the partisan impact of Texas House plans conditioned by short-and long-term scenarios.According to our analyses, both parties proved quite adept at furthering their electoral interests, particularly in the short-term, when we account for incumbency.However, in the long-term, assuming all open seat contests, the Republican trend in Texas renders Democrats the minority party even if a Democratic plan were enacted. In the case of Texas, a state undergoing rapid party system change with the GOP in ascendancy, redistricting still registers a substantial independent effect on electoral outcomes.If the Democratic plan had been passed, Pete Laney would have remained Speaker, and hence Texas Republicans could not have pursued their successful effort to redraw the congressional map for the 2004 U.S. House elections.Of course, after the Laney plan passed the Texas House, Republicans held all the cards at each of the subsequent stages of the Texas redistricting process.By not passing the Laney House plan, the Republican-controlled Texas Senate ensured that Texas House redistricting would be left to the Legislative Redistricting Board.There, Republicans could use their majority to pass a plan favorable to them. That Republicans held a majority on the LRB is reflective of the pro-GOP trend in Texas politics.Republicans have been on a seemingly inexorable climb in Texas.The current trend has shown that while Democrats can hold onto marginal seats through the incumbency advantage, once Republicans win those seats, Democrats have little chance to win them back.The projections for open seat races provide little reason to believe that this trend will change within the decade.Democrats will continue to lose even more seats as their incumbents retire. 
In the Texas House of Representatives, the two parties have almost reached the point where they are divided entirely by the racial makeup of their districts.After the 2002 Texas House elections, Republicans represented 86 percent (83 out of 97) of the Anglo majority districts, whereas Democrats represented 96 percent (48 out of 50) of districts with majorityminority populations (Table 1).The most reliable indicator of vote choice in Texas elections is race and partisan line drawers primarily manipulate the racial compositions of districts to further their redistricting goals. Over the long term, the rapid growth of the Hispanic population provides Democrats the opportunity to retake some seats.But outside of a large shift of Anglo voters back to the Democratic Party, the Republican majority won in the 2002 Texas House elections appears to be the first in a long series of consecutive Republican majorities. APPENDIX JudgeIt Model The JudgeIt model takes the form of a random components regression: v = Xβ + γ + ε, where v is the Republican proportion of the two-party vote for each district; X is a vector of explanatory variables (incumbency; % BVAP; % HVAP; and statewide index); β is a vector for the k parameters that estimate the impact of the explanatory variables on v; and γ and ε constitute independent error terms.The variable ε is equivalent to the error term in ordinary least squares regression.What makes the JudgeIt regression different from OLS is the variable γ which is the error term that accounts for the random component in the model that arises from the fact that the explanatory variables cannot perfectly predict election outcomes because of measurement error in the variables included in the model and the omission of other relevant variables (Gelman and King 1994). Several steps are required to generate the estimates for our model.First, because of the random component in the model we have to estimate two hyperparameters, sigma (σ) and lambda (λ), for which we obtain constant values by running separate regressions on several election years (Hill 1995).We have run regressions for three elections to determine the average values for sigma (.0495) and lambda (.6917). 21These constants are then used in the regressions that estimate partisan bias, electoral responsiveness, and the probability of a GOP victory for each district in each plan. Next, using the redistricting plan in place for the 1992-2000 elections, we regress the Republican portion of the state house vote in 2000 on our explanatory variables: incumbency; % BVAP; % HVAP; and the statewide index. 22Then we use the parameters estimated from the 2000 plan to predict the outcomes for the three plans proposed for the 2002 Texas House elections.In other words, the regression estimates from the 2000 plan are used to predict the outcomes for each 2002 plan conditioned upon the values of the explanatory variables as they apply to each plan. 12 Texas House districts were changed three times during the 1990s.When we refer to the 1990s plan in the paper, we are referring specifically to the last valid plan prior to the 2002 elections-Plan H882-which was passed by the legislature in 1997, and used in the 1998 and 2000 elections. 
13It is of course true that candidates can move into different districts (and several did), but line drawers do not have a crystal ball.Therefore, our method best replicates the information available to redistricters as they drew up their maps.So, for the plan enacted for the 2002 Texas House elections, instead of assigning incumbents according to the districts in which they actually ran, for the purpose of consistency, we apply the same criteria to determine incumbency. 14To provide an estimate of the impact of race on partisan vote share, we, like Halter (2005), regressed Democratic vote share (measured by our partisan index of statewide open seat races in 1998 and 2000) on the minority percentage in each district using the 1990s plan.A 1% increase in the percentage of minority voters increased the Democratic vote by 0.51%.The equation is Democratic Vote Share = 0.2684 + 0.5162 * % Black + % Hispanic (s.e.0.0290; t = 17.80). 15 Such a method allows us to better account for the true state of partisanship in each district by eliminating the distortions than can be created by a popular incumbent (e.g., George W. Bush in the 1998 gubernatorial election).Further, in lower information elections, voters are more likely to use partisan rather than candidate-specific cues.We calculated the average Republican portion of the two-party vote at the district-level for the following contests : 1998 lieutenant governor, 1998 attorney general, 1998 comptroller, 1998 land commissioner, 1998 agriculture commissioner, and 2000 court of criminal appeals. 16Please see Gelman and King (1994;2001) for a more technical and complete explanation of the inner workings of the JudgeIt program. 17 Following Gronke and Wilson (1999), safe seats are those in which a party has greater than an 80% probability of winning. 18An effort brought to national attention by the September 28, 2005 indictment of U.S. House Majority Leader Tom DeLay for activities in the 2002 Texas House elections. 19The down-ballot statewide races illustrate the Democratic disaster and Republican triumph of 2002.No Democratic candidate in these low-information elections came within eight percentage points of defeating a Republican. 20If one uses a victory with less than 60% of the vote as the standard for a vulnerable incumbent, Democrats do not fare much better.Seventeen of their members won with less than 60% of the vote, compared to eleven Republicans in the same situation. 21To generate values for sigma and lambda we use the JudgeIt program to regress the Republican portion of the two-party state house vote v (t + 1) on v and several explanatory variables.For example, we regress the 1996 Republican portion of the state house vote (v t + 1 ) on the 1994 Republican vote (v); and the following explanatory variables: the 1992 Republican portion of the state house vote; 1994 incumbency; 1992 party control (1= sitting incumbent is Republican, -1 = sitting incumbent is Democrat); 1994 uncontested (0 = contested district, 1 = Republican running uncontested, -1 = Democrat running uncontested); 1994 percent BVAP; and 1994 percent HVAP.We repeat this regression for 1998 and 2000 with the same set of explanatory variables corresponding to the next set of election years.We then take the average of the values obtained for each sigma and lambda produced from each regression.We use multiple elections to get sigma and lambda in order to develop greater precision for these estimates (Gelman and King 1994). 
22Following Gelman and King (1994), for uncontested state house elections we impute values of .75 and .25 for Republicans and Democrats, respectively.This procedure is used to better fit the data since even in the most one-sided districts two-party competition would yield values closer to .75 and .25 as opposed to 1 and 0. Table 1 . Texas House of Representatives Districts: 1990, 1992, 2000, and 2002 Note: There are 150 seats in the Texas House of Representatives.Data compiled by the authors.All data were provided by the Texas Legislative Council (TLC).Racial/ethnic statistics are all computed according to a district's voting-age population.Percentages for Asian-Americans and those from "Other" races are not included.Majority-minority districts contain a combined majority of Hispanic plus black voting-age populations; neither population by itself constitutes a majority.No majority means that neither the combined minority population (Hispanic plus black), nor the Anglo population alone, constitutes a majority of a district's voting-age population.The results for 1990 and 1992 were based on the 1990 Census and the results for 2000 and 2002 were based on the 2000 Census.
2018-12-02T16:28:53.595Z
2006-01-01T00:00:00.000
{ "year": 2006, "sha1": "5f3bd061e95e5faabc4dacfc0b385b9b88d3f2e1", "oa_license": "CCBYNCSA", "oa_url": "https://journals.shareok.org/arp/article/download/362/339", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "5f3bd061e95e5faabc4dacfc0b385b9b88d3f2e1", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
218862765
pes2o/s2orc
v3-fos-license
L2R2: Leveraging Ranking for Abductive Reasoning The abductive natural language inference task ($\alpha$NLI) is proposed to evaluate the abductive reasoning ability of a learning system. In the $\alpha$NLI task, two observations are given and the most plausible hypothesis must be picked out from the candidates. Existing methods simply formulate it as a classification problem, thus a cross-entropy log-loss objective is used during training. However, discriminating true from false does not measure the plausibility of a hypothesis, for all the hypotheses have a chance to happen, only the probabilities are different. To fill this gap, we switch to a ranking perspective that sorts the hypotheses in order of their plausibilities. With this new perspective, a novel $L2R^2$ approach is proposed under the learning-to-rank framework. Firstly, training samples are reorganized into a ranking form, where the two observations and their hypotheses are treated as the query and a set of candidate documents respectively. Then, an ESIM model or a pre-trained language model, e.g. BERT or RoBERTa, is employed as the scoring function. Finally, the loss function for the ranking task can be either pair-wise or list-wise during training. The experimental results on the ART dataset reach the state of the art on the public leaderboard. INTRODUCTION Abduction is considered to be the only logical operation that can introduce new ideas [9]. It contrasts with other types of inference such as entailment, which corresponds to the well-known natural language inference (NLI) tasks and focuses on inferring only such information as is already provided in the premise. Therefore, abductive reasoning is an important inference type that deserves to be explored. A new reasoning task, namely the abductive natural language inference task (αNLI), has been proposed to test the abductive reasoning capability of an AI system [1]. Different from traditional NLI tasks, αNLI first provides two pieces of narrative text treated as a start observation and an end observation. The most plausible explanation then has to be picked out from the candidate hypotheses. Many models have been successfully developed for NLI tasks and directly adopted for the newly proposed αNLI task. These methods treat the entailment between two sentences from a classification perspective and accordingly treat the αNLI task as a binary-choice question answering problem, which selects the more plausible hypothesis of two. However, discriminating true from false does not measure the plausibility of a hypothesis in an abductive reasoning task, where all the hypotheses have a chance to happen with their own probabilities, even though some of these values are close to zero. As we can see in Figure 1, from a tidy room (observation O1) to a messy room (observation O2), we do not know what has happened. Thus, four hypotheses are proposed, of which 'a thief broke into the room' is the most likely to have happened, and 'a cat slipped into the room' is also a potential answer. Nevertheless, even the hypothesis 'an earthquake occurred in her city' is reasonable, just with a very small probability. It is hard to draw a line that separates the true hypothesis from the others. Based on these insights, we argue that αNLI is better treated as a ranking problem. From the ranking perspective, the binary-choice question answering setting in the current αNLI task is just an incomplete pair-wise ranking scenario that considers only a partial plausibility order over the given hypotheses.
In order to fully model the plausibility of the hypotheses, we switch to a complete ranking perspective and propose a novel learning to rank for reasoning (L2R2) approach for the αNLI task. L2R2 adopts the mature learning-to-rank framework, which first reorganizes the training instances into a ranking form. Specifically, the two observations O1 and O2 can be viewed as a query, and the candidate hypotheses can be viewed as a set of candidate documents. The relevance degree between the query and each document represents the plausibility of each hypothesis given the observations. Then, the two parts of the learning-to-rank framework, the scoring function and the loss function, are designed for the αNLI task. Two types of scoring functions are chosen in this paper, the matching model ESIM [5] and the pre-trained language models, e.g. BERT [6] and RoBERTa [8]. Besides, pair-wise and list-wise loss functions are applied to train the ranking task. The experimental results show that our L2R2 approach achieves a new state-of-the-art accuracy on the blind test set of ART. Further analyses illustrate that the benefit of the ranking perspective is to assign a proper plausibility to each hypothesis, instead of either 0 or 1. TASK FORMALIZATION The αNLI task contains two major concepts, observation and hypothesis. The observation describes the state of the scene, while the hypothesis is the imaginary cause that transforms one observation into another. Piaget's famous theory of cognitive development tells us that our world is a dynamic system of continuous change, which involves transformations and states. Therefore, predicting the transformation is the core of the αNLI task. In detail, two observations O1, O2 ∈ O are given, where O is the space of all possible observations. The goal of the αNLI task is to predict the most plausible hypothesis h* ∈ H, where H is the space of all hypotheses. Note that observation O1 happens earlier than O2. In the traditional NLI task, the hypothesis is regarded as directly entailed by the premise. However, the relation between the hypothesis and the two observations in the αNLI task works in a totally different way: the hypothesis h depends on the first observation O1, and the last observation O2 depends on both O1 and h. The best hypothesis h* is the one that maximizes the joint score of these two parts. This can be modeled by a scoring function that takes O1, O2 and h as input and outputs a real-valued score s, i.e. a scoring function f : O × H × O → R. For easy model adaptation, αNLI in the ART dataset is originally defined as a binary-choice question answering problem, whose goal is to choose the most plausible hypothesis from two candidates h1 and h2. From the classification perspective, it can be formalized as a discriminative task that distinguishes the category of the pair (h1, h2): the positive category indicates that h1 is more plausible than h2, while the negative category indicates the opposite. We argue that this is an incomplete pair-wise approach from a ranking point of view, which considers only a small portion of the order in a ranking list and yields poor performance. Therefore, we reformulate this task from the ranking perspective and adopt the learning-to-rank framework. In this framework, the observations O1 and O2 can be regarded as a query, and their candidate hypotheses H = {h_i}_{i=1}^n can be viewed as the corresponding candidate document set labeled with plausibility scores y = {y_i}_{i=1}^n, where n is the number of candidate hypotheses.
The loss function is a key part of the learning-to-rank framework, where point-wise, pair-wise and list-wise are the three commonly used loss function types. In this paper, we only consider pair-wise and list-wise loss functions, because a point-wise loss is just a classification loss that does not take the order of the hypotheses into consideration. Given the plausibility scores, we can form all possible hypothesis pairs whose plausibility scores differ in order to train with a pair-wise loss function. We also use list-wise loss functions by treating the candidate hypotheses as an ordered list, which measures the error over a whole ranking list. OUR APPROACH Under the ranking formalization, we propose our learning to rank for reasoning (L2R2) approach, which is an implementation of the learning-to-rank framework for the αNLI task. The learning-to-rank framework typically consists of two main components, i.e. a scoring function used to generate a real-valued score for a query-document pair and a loss function used to measure how accurately the ground-truth rankings are predicted. Scoring Function The scoring function can be implemented in different forms; for example, deep text matching models and pre-trained language models can both be employed as scoring functions. ESIM is a strong NLI model that uses Bi-LSTMs to encode the tokens within each sentence and performs cross-attention on these encoded token representations; its performance on entailment NLI is close to the state of the art. Thus, it is a good choice for a scoring function. ESIM takes two sentences, a premise and a hypothesis, as input. For the αNLI task, the concatenation of O1 and h is treated as the premise, and O2 is treated as the hypothesis. ESIM outputs a scalar score indicating the relevance between them. For scoring functions based on pre-trained language models such as BERT or RoBERTa, the observations O1, O2 and the hypothesis h are first concatenated into a narrative story with a delimiter token and a sentinel token. This sequence is then fed into the pre-trained language model to get a contextual embedding for each token. After that, mean pooling is applied to obtain the feature vector of the observations-hypothesis pair ((O1, O2), h). Finally, a dense layer is stacked on top to produce the plausibility score s = f(O1, h, O2). Loss Function and Inference Though the implementations differ, all scoring functions are optimized by minimizing the empirical risk L(f) = Σ_q ℓ(f(q), y_q), (1) where ℓ is the loss function utilized to evaluate the prediction scores f(q) for a single query q with ground-truth labels y_q. Since point-wise loss functions are only suited for absolute judgments, we only explore pair-wise and list-wise loss functions in this work. Pair-wise loss functions are defined on the basis of pairs of hypotheses whose labels are different, so that ranking is reduced to a classification over hypothesis pairs. Here, the pair-wise loss functions of Ranking SVM [7], RankNet [2] and LambdaRank [3] are used. The hinge loss used in Ranking SVM and the logistic (cross-entropy) loss used in RankNet both have the following form: ℓ = Σ_{y_i > y_j} φ(s_i − s_j), (2) where φ is the hinge function (φ(z) = max{0, 1 − z}) for Ranking SVM and the logistic function (φ(z) = log(1 + e^{−z})) for RankNet; y_i > y_j means that h_i ranks higher (is more plausible) than h_j with regard to the query (O1, O2).
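As a minimal illustration of the pair-wise losses in Eq. (2), the following sketch sums φ(s_i − s_j) over all hypothesis pairs whose plausibility labels differ; the scores and labels in the toy example are invented, and in the actual model the scores would come from the scoring function f(O1, h, O2).

```python
import numpy as np

def pairwise_ranking_loss(scores, labels, phi="logistic"):
    """Sum phi(s_i - s_j) over all hypothesis pairs (i, j) with y_i > y_j,
    i.e. the pair-wise form used by Ranking SVM (hinge) and RankNet (logistic).
    `scores` and `labels` are 1-D arrays over one query's candidate hypotheses."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    loss = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:            # h_i is more plausible than h_j
                margin = scores[i] - scores[j]
                if phi == "hinge":               # Ranking SVM
                    loss += max(0.0, 1.0 - margin)
                else:                            # RankNet logistic loss
                    loss += np.log1p(np.exp(-margin))
    return loss

# Toy query with three candidate hypotheses; values are for illustration only.
scores = [1.3, 0.4, -0.2]
labels = [1.0, 0.5, 0.0]
print(pairwise_ranking_loss(scores, labels, phi="hinge"))     # 0.5
print(pairwise_ranking_loss(scores, labels, phi="logistic"))  # ~0.98
```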
Building upon RankNet, LambdaRank uses the logistic loss and adapts it by reweighting each hypothesis pair: ℓ = Σ_{y_i > y_j} ΔNDCG(i, j) · log(1 + e^{−(s_i − s_j)}), (3) where ΔNDCG(i, j) is the absolute difference between the NDCG values when the ranking positions of h_i and h_j are swapped, G and D are the gain and discount functions respectively, and maxDCG is a normalization factor per query. Note that the binary-choice classification baselines for αNLI can be viewed as special cases of pair-wise ranking methods when n = 2. List-wise loss functions are defined on the basis of lists of hypotheses. In this paper, the loss functions of ListNet [4], ListMLE [11] and ApproxNDCG [10] are employed. In the ListNet approach, the K-L divergence between the permutation probability distribution induced by the scoring function, P(π | s), and that of the ground truth, P(π | y), is used as the loss function, where π ∈ Π denotes a permutation. Due to the huge size of Π, ListNet reduces the training complexity by using the marginal distribution of the first position, and the K-L divergence loss then becomes ℓ = Σ_i P(i | y) log ( P(i | y) / P(i | s) ), (4) where P(i | ·) denotes the top-one probability of hypothesis h_i. Different from ListNet, ListMLE uses the negative log likelihood of the ground-truth permutation as the loss function, ℓ = − log P(π_y | s), (5) where π_y is the ground-truth permutation. ApproxNDCG optimizes an approximate NDCG directly, and its loss function is defined as ℓ = 1 − (1 / maxDCG) Σ_i G(y_i) D(π̂(i)), (6) where π̂(i) is the approximation of π(i), the position of h_i in the ranking list π. In the inference stage, since the original αNLI task is to pick the more plausible one of two hypotheses, we choose the hypothesis with the higher score as the prediction result. EXPERIMENTS In this section, the experimental results on a public dataset are presented to evaluate our proposed approaches. Experimental Settings We conduct our experiments on the ART [1] dataset. ART is the first large-scale benchmark dataset for abductive reasoning in narrative texts. It consists of ∼20K pairs of observations with over 200K explanatory hypotheses, where the observations are drawn from a collection of manually curated stories, and the hypotheses are collected by crowd-sourcing. Besides, the candidate hypotheses for each narrative context in the test sets are selected through an adversarial filtering algorithm that uses BERT LARGE as the adversary. For our L2R2 approach, the data need to be reorganized into a ranking form. Concretely, we merge the original instances (O1, O2, h_i, y_i) sharing the same observation pair (O1, O2) into a new instance (O1, O2, H), where H = {h_i}_{i=1}^n is the set of candidate hypotheses for a given observation pair. In the ART training set, there are on average 13.41 hypotheses for each observation pair (O1, O2), of which 4.05 are plausible. We further employ a heuristic labeling strategy to construct ground-truth plausibility scores y = {y_i}_{i=1}^n for H. For the i-th hypothesis h_i of (O1, O2), the ground-truth plausibility score y_i is labeled with #(h_i occurs as plausible) / #(h_i occurs). To demonstrate the effectiveness of our approach, we develop 18 L2R2 models based on three scoring functions, i.e. ESIM, BERT and RoBERTa, with six ranking loss functions, including Logistic for the loss (Eq. 2) used in RankNet, Hinge for that (Eq. 2) used in Ranking SVM, LambdaRank for that (Eq. 3) used in LambdaRank, KLD for that (Eq. 4) used in ListNet, Likelihood for that (Eq. 5) used in ListMLE, and ApproxNDCG for that (Eq. 6) used in ApproxNDCG. Three binary-choice classification models are selected as our baselines. They have the same structures as the aforementioned scoring functions; the only difference is that they are trained on the original data with the cross-entropy loss function.
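To make the data reorganization and heuristic labeling described above concrete, here is a small sketch that groups binary-choice instances by their observation pair and assigns each hypothesis the label #(h occurs as plausible) / #(h occurs). The record format and the toy examples are assumptions made for illustration, not the actual ART data layout.

```python
from collections import defaultdict

def build_ranking_instances(records):
    """records: iterable of (o1, o2, hypothesis, is_plausible) tuples, as in the
    original binary-choice data. Returns one ranking instance per observation
    pair: {(o1, o2): {hypothesis: plausibility score}}."""
    occurs = defaultdict(lambda: defaultdict(int))
    plausible = defaultdict(lambda: defaultdict(int))
    for o1, o2, hyp, is_plausible in records:
        occurs[(o1, o2)][hyp] += 1
        if is_plausible:
            plausible[(o1, o2)][hyp] += 1
    return {
        obs: {h: plausible[obs][h] / cnt for h, cnt in hyps.items()}
        for obs, hyps in occurs.items()
    }

# Invented toy records sharing one observation pair.
records = [
    ("Room was tidy.", "Room was a mess.", "A thief broke in.", True),
    ("Room was tidy.", "Room was a mess.", "A thief broke in.", True),
    ("Room was tidy.", "Room was a mess.", "A cat slipped in.", True),
    ("Room was tidy.", "Room was a mess.", "A cat slipped in.", False),
    ("Room was tidy.", "Room was a mess.", "An earthquake occurred.", False),
]
print(build_ranking_instances(records))
# thief -> 1.0, cat -> 0.5, earthquake -> 0.0
```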
For implementation details, we employ Adam as the optimizer and use early stopping to select the best model. The models based on ESIM use an LSTM hidden size of 300 and are trained for at most 64 epochs with the batch size set to 32 and the learning rate set to 4e-4. The models based on pre-trained language models are fine-tuned for at most 10 epochs with the batch size set to 4 and the learning rate set to 5e-6. Accuracy, as defined in [1], is used as the evaluation metric. Table 1 shows the experimental results of the L2R2 models and baselines on the development set. Our best model was evaluated officially on the test set, where it achieved state-of-the-art accuracy (Table 2). Experimental Results We summarize our observations as follows. (1) All 16 versions of our L2R2 approach improve the performance on the abductive reasoning task, which means that the ranking perspective is better than classification. (2) Pair-wise models perform better than classification models, and most list-wise models perform better than pair-wise models. The former boost can be attributed to the full version of pair-wise training, whereas the latter boost from pair-wise to list-wise is due to global reasoning over the entire candidate set. (3) BERT-based ranking models have the largest gains, about an 8.2% improvement over the corresponding baseline. Because BERT was taken as the adversary for dataset construction, this substantial improvement illustrates that our L2R2 approach is more robust to adversarial inputs. (4) The loss functions that optimize the NDCG metric, i.e. LambdaRank and ApproxNDCG, perform worse than the others, mainly due to the gap between the NDCG metric used during training and the accuracy metric used during testing. Detailed Analyses To further illustrate the rationality of our L2R2 approach, Figure 2 shows two normalized score distributions over the more plausible hypotheses in the development set candidate pairs, where the scores are predicted respectively by two models using BERT as the scoring function, one trained with the classification loss and the other with the list-wise likelihood loss. The area under the curves in the right part (probability > 0.5) can be viewed as the accuracy value. As shown in the figure, the classification model distinguishes the pairs of candidate hypotheses with a great disparity, assigning probabilities close to either 0 or 1, whereas the L2R2 model has the ability to judge borderline instances whose two candidates are competitive with each other. Consider the sampled borderline instance at the bottom of Figure 2, where both hypotheses are likely to happen but h1 is slightly more plausible: the L2R2 model makes the right choice and outputs two competitive probabilities for h1 and h2, 0.5891 vs. 0.4109, whereas the classification model not only fails to distinguish which one is better but also outputs probabilities of 0.0024 and 0.9976 with a significantly large gap. That is to say, the ranking view in the L2R2 approach is a more reasonable way to model the abductive reasoning task. CONCLUSION In the αNLI task, all the hypotheses have their own chance to happen, so the task is naturally treated as a ranking problem. From the ranking perspective, L2R2 is proposed for the αNLI task under the learning-to-rank framework, which contains a scoring function and a loss function. The experiments on the ART dataset show that reformulating the αNLI task as ranking brings improvements and reaches state-of-the-art performance on the public leaderboard.
2020-05-25T01:00:53.876Z
2020-05-22T00:00:00.000
{ "year": 2020, "sha1": "735605138e0187e8a3c0b7befe94645787564bec", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2005.11223", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "735605138e0187e8a3c0b7befe94645787564bec", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
125983212
pes2o/s2orc
v3-fos-license
A hybrid case-based reasoning approach to detecting the optimal solution in nurse scheduling problem Demand for healthcare is increasing due to a growing and ageing population. Choosing an adequate schedule for medical staff can be a difficult dilemma for managers. The goal of nurse scheduling is to minimize the cost of the staff while maximizing their preferences and the overall benefits for the unit. This paper is focused on a new hybrid strategy based on detecting the optimal solution in nurse scheduling problem. The new proposed hybrid approach is obtained by combining case-based reasoning and general linear empirical model with arbitrary coefficients. The model is tested with original real-world data set obtained from the Oncology Institute of Vojvodina in Serbia. Introduction Demand for healthcare is increasing due to a growing and ageing population, which makes access to medical care more difficult. Medical staff performance represents a significant determinant of public healthcare quality. There is an excessive pressure for cost reduction, which negatively influences work-life balance for a small number of employed physicians and nurses and often results in a decrease of demanded quality of services. Moreover, due to the challenging economic conditions, not only has ever greater number of physicians and nurses from public hospitals moved to live and work abroad, but a lot of them are also employed in private healthcare organizations to earn higher salaries. This tendency has caused the critical issue in medical staff preferences; medical staff satisfaction is a fundamental part in providing the necessary care for patients. There are two aspects to this problem: (i) physician scheduling problem (PSP) and, on the other hand, (ii) nurse scheduling problem (NSP). This PSP is more complex than the NSP since residents still need an educational praxis to get licensed as physicians. Whereas many physicians generally have individual contracts with their hospital with specific and limited details, it is more challenging for the scheduling process to involve these intricate agreements and these physicians will not be scheduled with other hospital staff. Choosing an adequate schedule for nursing staff can be a difficult dilemma for nurse managers. They need to obtain balance between the staff's individual preferences and the overall benefits for the unit and, consequently, the patients. Therefore, traditional approaches to addressing the challenges of clinical staff organization and scheduling are not always effective in modern complex healthcare environment [23]. More state legislatures are mandating specific nurse-staffing levels, and many nurses are dissatisfied with their work schedules. Optimal solutions derived from techniques with high computing times are usually less valuable than the ones based on a flexible algorithm or userintuitive application [2]. This paper is focused on a new strategy based on hybrid approach to detecting the optimal solution in NSP. The new proposed hybrid approach is obtained by combining case-based reasoning (CBR) and general linear empirical model with arbitrary coefficients. The model is tested with original real-world data set obtained from the Oncology Institute of Vojvodina (OIoV) in Serbia. Also, this paper continues the authors' previous research in nurse decision-making, scheduling and rostering healthcare organizations which are presented in [12,15,[17][18][19]. 
The rest of the paper is organized in the following way: Section 2 provides an overview of the basic idea in NSP, related work with solution approaches based on general methods, classical heuristics methods and metaheuristics and subsection about CBR reasoning method. Section 3 presents the NSP proposed in this paper, based on hard/soft constraints, CBR representation of empirical data set and the proposed algorithm for NSP. Experimental results and its verification are presented in Section 4. Section 5 provides conclusions and some points for future work. NSP and related work The NSP is a well-known non-polynomial (NP)-hard scheduling problem that aims to allocate the required workload to the available staff nurses at healthcare organizations to meet the operational requirements and a range of preferences. The NSP is a 2D timetabling problem that deals with the assignment of nursing staff to shifts across a scheduling period subject to certain constraints. In general, there are two basic types of scheduling used for the NSP: cyclic and non-cyclic scheduling. In cyclic scheduling, each nurse works in a pattern which is repeated in consecutive scheduling periods; whereas, in non-cyclic scheduling, a new schedule is generated for each scheduling period: weekly, fortnightly or monthly. Cyclic scheduling was first used in the early 1970s due to its low computational requirements and the possibility for manual solution [5]. Related work in NSP Studies of NSPs date back to the early 1960s. Despite decades of research into automated methods for nurse scheduling and some academic success, it may be noticed that there is no consistency in the knowledge that has been built up over the years and that many healthcare institutions still resort to manual practices. One of the possible reasons for this gap between the nurse scheduling theory and practice is that oftentimes academic community focuses on the development of new techniques rather than developing systems for healthcare institutions [3]. In the past decades, many approaches have been proposed to solve NSP as they are manifested in different models. The three commonly used general methods are mathematical programming, heuristics and artificial intelligence approaches. Many heuristics approaches were straightforward automation of manual practices, which have been widely studied and documented [7,22]. For combinatorial problems, exact optimization usually requires large computational times to produce optimal solutions. In contrast, metaheuristic approaches can produce satisfactory results in reasonably short times. In recent years, metaheuristics including tabu search algorithm (TS), genetic algorithm (GA) and simulated annealing, have been proven as very efficient in obtaining near-optimal solutions for a variety of hard combinatorial problems including the NSP [4]. Some TS approaches have been proposed to solve the NSP. In TS, hard constraints remained fulfilled, while solutions move in the following way: calculate the best possible move which is not tabu, perform the move and add characteristics of the move to the tabu list. The TS with strategic oscillation used to tackle the NSP in a large hospital is presented in [10]. GA, which is stochastic metaheuristics method, has also been used to solve the NSP. In GA, the basic idea is to find a genetic representation of the problem so that 'characteristics' can be inherited. 
Starting with a population of randomly created solutions, better solutions are more likely to be selected for recombination into novel solutions. In addition, these novel solutions may be formed by mutating or randomly changing the old ones [8]. CBR CBR is a technique that has its origins in knowledge-based systems. CBR systems learn from previous situations. The main element of a CBR system is the CASE BASE. It is a structure that stores problems, elements (cases), and their solutions. So, a case base can be visualized as a database that stores a collection of problems with some sort of relationship to solutions to every new problem, which gives the system the ability to generalize to solve any new problem. The learning capabilities of CBR system rely on their own structures, which consist of four main phases: retrieval, reuse, revision and retain. Figure 1 shows a graphical representation of those four phases. The retrieval phase consists of finding the cases in the CASE BASE that most closely resemble the proposed problem. Once a series of cases have been extracted from the CASE BASE, they must be reused by the system. In the second phase, the selected cases are adapted to fit the current problem. After offering a solution to the problem, it is then revised, to check whether the proposed alternative is in fact a reliable solution to the problem. If the proposal is confirmed, it is retained by the system, modifying some knowledge containers and could eventually serve as a solution for problems in the future. CBR has been used to solve a variety of problems in health care sciences [24], financial predictions [13,14,16], an agent system for detecting Structured Query Language (SQL) injection attacks [11], solving the oil spill problem [9] and everywhere where images can play a key role [6]. Modelling the NSP This research is focused on cyclic scheduling on NSP in planning period in intensive care unit (ICU) at the OIoV. Cyclic scheduling is used here, where each nurse follows a pattern repeated in consecutive scheduling periods. Hard constraints Recently, duty rosters are generated manually by head nurse for ICU, which enables the nurses to express their requests and preferences for working/or not working certain shifts, holidays and days off. Nurses in the unit have different skills categories, meaning different qualifications, specialization training, experience and gender, presented in Table 1. Regular work days are 5 days per week, from Monday to Friday. Regular working hours are 7 hours and 12 minutes. Full-time nurses are defined by: multiple regular work days * regular working hours. When this number is rounded, it represents the total number of shifts allowed per month. Nurses can work in three On-duty shifts: Table 2. Hard requests define a constraint that must be respected in the roster and Soft requests define the preferred option expressed by a nurse which is desirable but can be violated in the roster if needed. 
Some typical values for a few of the constraints are given below:
• Min (max) nurses on shifts: in the OIoV, three nurses on Day shifts and three nurses on Night shifts;
• It is not desirable to work a Night shift followed by a Day shift;
• After 5 Morning shifts, 2 days off must be assigned;
• After a break of more than 7 days (annual leave, sick leave), a Day shift must be assigned;
• The maximum difference between Day shifts and Night shifts per nurse may be no greater than five;
• At least one of the members of a shift must be a shift leader, which is defined for every nurse in Table 1;
• Max (min) days: full-time nurses may not work more than a predetermined number of days.
The ideal and proposed work shift dynamic is Day-Night-Off-Off-Off (DNOOO): Day-Night, meaning 2 work shifts and 3 days off in 5 days, is the ideal shift dynamic. This is recommended by the OIoV management. The DONOO dynamic is also allowed, where there are again 2 work shifts and 3 days off in 5 days, and other combinations of two work shifts and 3 days off in 5 days are allowed as well. But, in the real world, when creating the nurse schedule it is impossible to keep the ideal work shift dynamic. For that reason, a more demanding shift dynamic is allowed, e.g. 3 working days and 2 days off (DDNOO, DNNOO); other dynamics of 3 working days and 2 days off are allowed as well. After five Morning shifts (XXXXX), 2 days off must be assigned to create the (XXXXXOO) dynamic. Empirical data set-CBR representation For this experiment, an original real-world data set from the ICU of the OIoV, covering the period between 1 January and 31 January 2014, is used. Part of the experimental data set is presented in Table 2. There is no Solution for the last case, and it will be calculated when the system computes the schedule for nurse N-01 for 01.02. In the CBR basic representation, Case No. = 26 represents the NEW CASE, and the hybrid system will try to find the Optimal Solution for it. All cases (the data set) stored in the CASE BASE can be described in the same manner as for the previous nurse. All the cases in the CASE BASE which have a Solution will be used in the reuse and revise CBR phases for detecting the best Solution in the NSP. The algorithm for NSP The proposed hybrid model is obtained by combining CBR and a general linear empirical model with arbitrary coefficients. The basic steps of the proposed hybrid algorithm for the NSP are summarized by the pseudo code shown in Algorithm 1, in which the most important CBR phases and the general linear empirical model with arbitrary coefficients are presented in blue bold colour. Our algorithm is inspired by the integration of the CBR method, which is discussed in Section 2.2; the CASE BASE representation is shown in Section 3.2. The general linear empirical model with arbitrary coefficients defined in (1)-(4) is presented and discussed in detail in [20]. Equation (1) gives the calculation of the weighted value Va(t,d) for the Day shift occurrence d for nurse t, where S_7(t,d) is the frequency with which the next letter equals D when it is calculated for nurse t for a pattern of length 7. The same logic applies to S_6(t,d), S_5(t,d), S_4(t,d), S_3(t,d). Similarly, Equation (2) gives the calculation of the weighted value Va(t,n) for the Night shift occurrence n for nurse t, where S_7(t,n) is the frequency with which the next letter equals N when it is calculated for nurse t. The same logic applies to S_6(t,n), S_5(t,n), S_4(t,n), S_3(t,n). Equation (3) gives the calculation of the weighted value Va(t,o) for the Day off occurrence o for nurse t, where S_7(t,o) is the frequency with which the next letter equals O when it is calculated for nurse t.
The same logic applies to S_6(t,o), S_5(t,o), S_4(t,o) and S_3(t,o). The values of Va(t,d), Va(t,n) and Va(t,o) must then be normalized, and as such they represent the probability of each shift occurring for a specific worker on the next day. Equation (2), for example, reads Va(t,n) = S_7(t,n) + 0.8 * S_6(t,n) + 0.6 * S_5(t,n) + 0.4 * S_4(t,n) + 0.2 * S_3(t,n), and (1) and (3) have the same form with d and o in place of n. Considering that the candidate letters for the next shift, occurring after pattern strings of different lengths, do not have the same significance, a weighting factor is introduced for each of them. Thus, the weighting factor for the frequency of pattern String-7 is 1, for String-6 it is 0.8, for String-5 it is 0.6, for String-4 it is 0.4 and for String-3 it is 0.2, as shown in (1)-(3). The partial target function, defined by the arguments of the maxima (arg max) of (1)-(3), is presented in (4), where arg max refers to the inputs, or arguments, at which the function outputs are as large as possible; the arg max are the points of the domain of a function at which the function values are maximized. Table 3 presents the calculation of the nurse candidates for 1 February based on equations (1)-(3). For every workday it is necessary to select three nurses for the Day shift and three nurses for the Night shift. The revision process performs additional calculations that control the hard constraints and corrects and modifies the calculated Day and Night shift candidates into the final solutions for the Day and Night shifts. It is therefore interesting to look at Table 3, where, for N-09, there is a great imbalance between Day shifts (9) and Night shifts (4). The system allows the greatest difference between Day shifts and Night shifts in the same month to be <= 2. Therefore, N-09 cannot work the Day shift, because the shift-type imbalance would become even greater; N-09 becomes a Night shift candidate. After revision, the Day shift candidate list consists of the following nurses in the following order: N-17, N-06, N-15, N-11 and N-12. Among the first three Day shift candidates the constraint that at least one member of a shift must be a shift leader is satisfied; in fact two of them, N-06 and N-15, are shift leaders. Finally, the Day shift for 1 February can be completed: N-17, N-15, and the shift leader is N-06. Following the rules and constraints in the CBR revise phase, N-09 is a Night shift candidate, but on the other side N-03 has a great imbalance between Night shifts (8) and Day shifts (4). Therefore, N-11 and N-12 are added from the Day shift candidates, also because both of them have a higher working percent (16.41) than N-13 with a working percent of 12.50. The Night shift candidates are now as follows: N-09, N-07, N-08, N-16, N-11, N-12 and N-13. Finally, the Night shift for 1 February can be completed: the shift leader is N-09, and the members are N-07 and N-08. Now the whole roster for 1 February is complete. In the CBR retain phase, it is time to update some cases stored in the CASE BASE which are empty and are shown in grey in Table 2; it is necessary to fill Field 7 (Solution) with the appropriate shift. The data stored in the CASE BASE after the retain phase are presented in Table 4. The rest of the schedule for the whole period continues as previously described, using the algorithm, hard constraints, soft constraints and established rules, and the 2D timetable is presented in Table 5.
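The weighted values in equations (1)-(3) and the arg max rule in (4) follow directly from the definitions above and can be sketched in a few lines. The code below is illustrative only; the frequency counts S_k(t,x) are assumed to have been extracted from the case base beforehand, and the numbers used here are made up.

# Sketch of equations (1)-(3) and the arg max selection in (4).
# s[x][k] stands for S_k(t, x): the frequency with which letter x ("D", "N" or "O")
# follows pattern String-k for a given nurse t. Values are hypothetical.
WEIGHTS = {7: 1.0, 6: 0.8, 5: 0.6, 4: 0.4, 3: 0.2}

s = {
    "D": {7: 1, 6: 2, 5: 2, 4: 3, 3: 4},
    "N": {7: 0, 6: 1, 5: 2, 4: 2, 3: 3},
    "O": {7: 2, 6: 3, 5: 4, 4: 5, 3: 6},
}

def weighted_value(freqs: dict) -> float:
    """Va(t, x) = S_7 + 0.8*S_6 + 0.6*S_5 + 0.4*S_4 + 0.2*S_3."""
    return sum(WEIGHTS[k] * freqs[k] for k in WEIGHTS)

va = {shift: weighted_value(freqs) for shift, freqs in s.items()}

# Normalization turns the weighted values into next-day probabilities.
total = sum(va.values())
probs = {shift: v / total for shift, v in va.items()}

# Equation (4): arg max picks the shift with the largest weighted value.
best_shift = max(va, key=va.get)
print(probs, "-> proposed next shift:", best_shift)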
Verification of experimental results To verify our research methods, the experimental results are compared with the real-world data set obtained from the OIoV. A cumulative workload is calculated for every nurse and summarized for every month, and after that the coefficient of variation (CV) is calculated and compared between the original data set and the experimental data set. The cumulative workload for February represents the workload from the beginning of the year until February; the same logic applies to the cumulative workload for March. The coefficient of variation is dimensionless and is defined as the ratio of the standard deviation (SD) to the mean. Hence, the coefficient of variation is a useful quantity for comparing the variability of data sets having different means [21]. Then the correlation between the means and the statistical significance of the difference (F-test) between the data sets are calculated, and the results are presented in Table 6. The comparison of the means of the real-world data set and the experimental data set shows high correlation, and the CV is lower in every month for the experimental data set, as presented in Table 6. The experimental model is also verified by Univariate Analysis of Variance, through the Tests of Between-Subjects Effects; the interaction effect between the real-world data set and the experimental data set for June was not statistically significant, F = 0.108, Sig = 0.744, as shown in Table 6. This result shows that differences between the real-world and experimental data sets do not exist, and it can be concluded that the experimental results fit quite well when compared in terms of workload. The nurses' satisfaction level with the original (previous) real-world shift planning and with the new nurse scheduling timetable were compared; on one hand, it can be concluded that nurses are, in general, much more satisfied, but on the other hand a small ward group of nurses no longer works together as in the previous timetable. Generally speaking, it is much better for ward work quality when employees are combined according to different criteria: years of service and experience, specialization training, shift leadership, sex and age. It is also important to mention that the nurse schedule is now generated by a 'machine' (computer), an impersonal agent, so the nurses cannot attribute it to the personal, subjective decisions of the head nurse. Conclusion and future work The aim of this paper is to propose a new hybrid strategy for detecting the optimal solution of the NSP. The newly proposed hybrid approach is obtained by combining CBR and a general linear empirical model with arbitrary coefficients. The model is tested with an original real-world data set obtained from the OIoV in Serbia. The data set is represented in the CASE BASE as a database structure where problems, elements (cases), and their solutions are stored. All the cases in the CASE BASE which have a Solution are used in the reuse and revise CBR phases for detecting the best Solution for the NSP, using hard/soft constraints, rules and the general linear empirical model with arbitrary coefficients. In the CBR retain phase, some cases stored in the CASE BASE are updated and new cases are added. The preliminary experimental results encourage further research, because the data set is stored in a database and is easily manipulated. Our future research will focus on creating a new hybrid model combined with an intuitive thinking style that solves problems logically, considering different options until the best solution is discovered, and that will efficiently solve the NSP.
The new model will be tested with the original real-world data set for longer periods, including the year 2017, obtained from the OIoV in Serbia.
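The coefficient of variation used in the verification above is simply the standard deviation divided by the mean. A minimal sketch, assuming two hypothetical monthly workload arrays rather than the actual OIoV data:

# Coefficient of variation (CV = SD / mean) for two hypothetical workload lists;
# the real comparison in the paper uses the OIoV rosters, not these numbers.
from statistics import mean, stdev

real_world = [152, 160, 148, 171, 158, 149]     # made-up cumulative workloads
experimental = [155, 158, 153, 162, 157, 154]

def cv(values):
    return stdev(values) / mean(values)

print(f"CV real-world:   {cv(real_world):.4f}")
print(f"CV experimental: {cv(experimental):.4f}")  # a lower CV means less variability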
2019-04-22T13:12:41.580Z
2018-10-01T00:00:00.000
{ "year": 2018, "sha1": "2a78496e0f81ccf761e9c3368206ee1f7650d462", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/jigpal/article-pdf/28/2/226/32932584/jzy047.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "b179fe4c2f35b992a9a7f673ce22db5854aa37cb", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
259849227
pes2o/s2orc
v3-fos-license
Trends and hotspots in familial hypercholesterolemia: A bibliometric systematic review from 2002 to 2022 Background: We visually assessed the research hotspots of familial hypercholesterolemia (FH) using bibliometrics and knowledge mapping in light of the research state and development trend of FH. Methods: We employed bibliometric tools, such as CiteSpace and the alluvial generator, to illustrate the scientific accomplishments on FH by extracting pertinent literature on FH from the Web of Science Core Collection database from January 1, 2002, to December 31, 2022. Results: A total of 4402 papers were selected for study; 29.2% of all articles globally were from the USA, followed by the Netherlands and England. The University of Amsterdam, the University of Oslo, and the University of Western Australia are the 3 institutions with the most publications in this area. Gerald F. Watts, Raul D. Santos, and John J. P. Kastelein wrote the majority of the pieces that were published. The New England Journal of Medicine, Circulation, and Atherosclerosis were the journals with the greatest number of papers in this field. Prevalence and genetic analysis of FH, proprotein convertase subtilisin/kexin 9 inhibitors, and inclisiran are current research hotspots for the condition. Future research in this area will be focused on gene therapy. Conclusions: FH research shows an ascending trend followed by a leveling off. The prevalence and diagnosis of FH, proprotein convertase subtilisin/kexin 9 inhibitors, inclisiran, and gene therapy are current research hotspots. This report may serve as a reference for current research trends. Introduction The most clinically significant and treatable monogenic defect is familial hypercholesterolemia (FH), a co-dominant and highly penetrant condition that affects the liver's ability to clear low-density lipoprotein (LDL) via the LDL receptor, resulting in a classic phenotype that includes early atherosclerotic cardiovascular disease. [1][2][3] The availability of novel biological therapeutics and advancements in diagnostic gene technologies have changed perceptions of FH and made it an example of how precision medicine may be used to prevent early cardiovascular disease in families in our communities. If untreated, this rather frequent co-dominant condition causes early coronary events, because it represents the cumulative impact of increased LDL cholesterol (LDL-C) levels from birth on the formation of atherosclerosis. [1,4,5] FH is still not well identified or treated, [6,7] but clinical researchers and scientists throughout the world are now actively addressing this care gap. [4][5][6][8,9] Research activity on FH has increased substantially over the last ten years. However, these studies have not been measured systematically. With the aid of bibliometric analysis software, a review of this literature was conducted. We hope that the findings of our study could be helpful for the advancement of FH research and that we could offer some advice and inspiration to the scientists working on FH studies and treatments. Materials and methods We used CiteSpace 6.1.R6 (64-bit) Advanced ((c) 2003-2023 Chaomei Chen. All rights reserved), Alluvial Generator (developed by Daniel Edler, Anton Holmgren and other researchers and developers at Umeå University), and Microsoft Excel (2021 MSO) (Microsoft Corporation) to analyze research trends, references, and keywords related to FH (Fig. 1).
No ethical approval was required for this systematic review. Data source and search strategy On January 7, 2023, the literature was downloaded within a single day from the Web of Science Core Collection (WoSCC) database. The search terms were as follows: TS = ("*amilial *ypercholesterol*mia"), and the dates of the search were January 1, 2002, to December 31, 2022, resulting in 9365 records. We chose the time period from 2002 to 2022 since the literature on FH has grown explosively in the last 20 years compared to the preceding 20 years, and to better highlight the advancement of FH research. Publications from the Social Science Citation Index and the Science Citation Index Expanded were chosen, as indicated in Figure 1. In this analysis, only original English-language articles were taken into account. A total of 4402 original articles were included, and there were no duplicates after they were examined by CiteSpace. CiteSpace and the Alluvial Generator were used to further analyze these papers. The data were selected and independently recorded by 2 authors, L.C. and H.P. All differences were discussed until an agreement was reached. Bibliometric analysis and software assistance In this study, using the CiteSpace v.5.8 R3 (64-bit) application, a knowledge map of FH's research status and hotspots was constructed. CiteSpace is a Java-based information visualization program created by Professor Chao-Mei Chen. In bibliographic databases, visual exploration and knowledge discovery are supported by CiteSpace. It provides a visual mapping tool for a wide range of users to examine areas of expertise and the establishment of research themes inside knowledge domains, as well as to find new patterns and trends in the body of scientific literature. [10,11] CiteSpace generates visual graphs with nodes and lines. Frequency is represented by a node's size. The strength of collaboration is represented by the thickness of the link between nodes; the thicker the connection, the greater the cooperation. Centrality is a network parameter that measures the importance of nodes: nodes with centrality greater than 0.1 are considered important, and the higher a node's centrality, the more frequently it communicates with other nodes and the more significant it is in the overall network. In the CiteSpace network, nodes with centrality >0.1 are referred to as key nodes, and purple circles are commonly used to signify nodes with betweenness centrality >0.1; the more centrality a node has, the more it serves as a bridge between other nodes. The log-likelihood ratio technique was selected to group related terms and references. There are numerous closely related terms in each cluster, and the smaller a cluster's label number, the more references or keywords it contains. The silhouette (S) value refers to the cluster's average contour value; in general, it is believed that a cluster is acceptable with S > 0.5 and convincing with S > 0.7. [12] Specific CiteSpace parameter settings were configured for this investigation. An alluvial flow map depicting the evolution of co-cited papers over the previous 5 years (2018-2022) was created using the Alluvial Generator. The alluvial flow diagram from the web-based program MapEquation illustrates how the network changes over time across research fields.
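CiteSpace computes these measures internally, but the two thresholds mentioned above (betweenness centrality > 0.1 and silhouette > 0.5 or 0.7) can be illustrated with standard libraries. The sketch below is not part of the original study; the toy co-citation graph, embeddings and cluster labels are invented purely to show how the metrics are obtained.

# Illustration of the two CiteSpace-style thresholds on a toy co-citation network.
import networkx as nx
import numpy as np
from sklearn.metrics import silhouette_score

# Toy co-citation graph: nodes are references, edges mean "co-cited".
G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("B", "D"), ("E", "F")])

# Betweenness centrality; nodes above 0.1 would be drawn with purple rings in CiteSpace.
centrality = nx.betweenness_centrality(G)
key_nodes = [n for n, c in centrality.items() if c > 0.1]
print("key nodes:", key_nodes)

# Silhouette of a clustering (here invented 2D embeddings and labels);
# S > 0.5 is usually read as acceptable and S > 0.7 as convincing.
embeddings = np.array([[0.0, 0.1], [0.1, 0.0], [0.2, 0.1], [2.0, 2.1], [2.1, 2.0], [2.2, 2.2]])
labels = [0, 0, 0, 1, 1, 1]
print("silhouette:", silhouette_score(embeddings, labels))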
The intellectual structure of a given topic can be built numerically and graphically, for assessing literature performance, identifying fundamental issues, and resolving disciplinary quandaries, by combining bibliometrics with data visualization. The modules of publications that have been cited throughout these 5 consecutive years are colored, signifying that they attracted a lot of interest during that period. Microsoft Excel was used to create tables and rose charts and to show annual national trends in publications. Annual publications and trends There were 4402 publications on FH in the WoSCC database's Science Citation Index Expanded and Social Science Citation Index from 2002 to 2022. According to Figure 2, the number of publications on FH increased gradually between 2002 and 2018 until it reached 341 articles. In the following 4 years, there was a slight decline in the number of publications, although they remained at a high level. Analysis of countries, institutions, and authors The analysis of the worldwide collaboration network is displayed in Figure 3A to help comprehend the contribution made by each nation to the FH research area. The USA (1284, 29.2%) had the most publications, followed by the Netherlands (480, 10.9%), England (440, 10.0%), Canada (376, 8.5%), and China (365, 8.3%) (Fig. 3B). Regarding the centrality of countries, Belgium (0.60) ranked first, followed by Argentina (0.58), Hungary (0.53), Singapore (0.43), and Israel (0.41) (Table 1). The USA was a significant FH research nation, and it collaborated closely with France in this area. By understanding the global distribution of research institutions studying FH through the analysis of research institutions (Table 1), scholars can find a basis for seeking partner institutions when doing research. Figure 3E shows the collaboration network of authors, which provides a basis for finding research partners and identifying leaders in the field. The author with the most publications was John J.P. Kastelein (125), followed by Gerald F. Watts (122), Raul D. Santos (77), G. Kees Hovingh (74), and Carol Jing Pang (57) (Fig. 3F). The Analysis of references As shown in Figure 5A and Table 3, references such as "Robinson JG (2015)" were frequently cited. In the last 21 years, 15 major study subjects have been concentrated in the area of FH, as illustrated in Figure 5B, where 15 clusters of varying colors and sizes were developed. In Table 4, we provide details for each cluster. The 15 clusters' silhouettes, which were all above 0.9, showed that their homogeneity was considerably high. (Table 1: Top 10 publication counts and centralities of countries, institutions, and authors.) Clusters #2 ("carotid arteries"), #3 ("LDLR gene mutations"), #5 ("risk factors") and #8 ("lathosterol") had the earliest average publication years among their members (1999, 2000, 2002, and 2001, respectively), indicating that they were early research topics in this area. The largest cluster of co-cited references, cluster #0, was labeled "Familial hypercholesterolemia." The timeline view of co-cited references is also shown in Figure 5C, which reveals that the most recent areas are clusters #0 ("Familial hypercholesterolemia"), #1 ("inclisiran"), and #15 ("genetic analysis"), and the second most recent clusters were #7 ("evolocumab") and #9 ("alirocumab").
The top 5 cited and citing references in these 5 clusters are displayed in Supplementary Table S1. The goal of alluvial flow diagrams is to uncover time patterns in evolutionary networks. We created an alluvial map to easily observe the citation changes of the top 5 cited references from 2018 to 2022. The data were first obtained from CiteSpace, then networks of co-cited references were created. The longest-present nodes in the imported network are highlighted by coloring the flows they form. The flow increases as the flow lines become thicker, reflecting the significance of the co-cited reference. The largest flows of co-cited references in the alluvial flow map are shown in Table 5. General information The increasing annual number of articles on FH before 2018, and the sustained high level in recent years, indicates that this research field remains active. This was the first bibliometric examination of worldwide FH articles. We can observe through a graphic study of the distribution of nations and institutions that FH research is primarily conducted in the US. The research on FH piqued the interest of nations from all over the world, including countries from Europe, Asia, Australia, and Africa. In the previous 21 years, the USA published approximately 2.7 times as many papers as the Netherlands. The most frequently cited document from the USA (citations = 1962) was published by Kasey C. Vickers in 2011. It primarily identified a new intercellular communication channel that involves HDL-mediated microRNA transport and cellular distribution. [13] Belgium has the highest centrality, signifying its close interaction with other nations. The University of Amsterdam in the Netherlands, with 241 papers addressing the risk factors, prevalence, genetics, therapy, and other facets of FH, was the organization with the most publications. [14][15][16][17][18][19][20] The most popular of them was a randomized trial conducted by Jennifer G. Robinson assessing 2341 patients from 27 nations in North and South America, Europe, and Africa. Alirocumab or placebo was administered as a 1-ml subcutaneous injection every 2 weeks for a total of 78 weeks. The findings of this research demonstrated that the addition of alirocumab to statin treatment at the highest tolerated dose significantly decreased LDL-C levels and the frequency of cardiovascular events. [20] In terms of publications, 3 of the top ten universities were from the US; Harvard University was ranked fourth. (Table 2: Top 10 publication counts and centralities of co-cited authors and co-cited journals.) One of the most cited works was a randomized controlled trial assessing the long-term effects of evolocumab, a monoclonal antibody that inhibits proprotein convertase subtilisin/kexin 9 (PCSK9). In a prespecified but exploratory analysis, the use of evolocumab in addition to conventional medication considerably decreased LDL-C levels and decreased the incidence of cardiovascular events over the course of about a year of treatment. [21] According to the findings, John J.P. Kastelein (University of Amsterdam, Amsterdam, Netherlands) ranked first in terms of both publications and centrality.
His main focus was on the treatment of FH through randomized controlled trials, demonstrating the ineffectiveness of ezetimibe in reducing carotid artery intima-media thickness (citations = 531), [22] the efficacy of alirocumab in lowering LDL-C levels in patients with heterozygous FH (HeFH), [23] and the failure of torcetrapib to slow the progression of atherosclerosis in HeFH. [24] The journal that received the most co-citations was Atherosclerosis. Only a tiny percentage of patients with HeFH achieve the LDL-C treatment target of 2.5 mmol/l, according to one of the most widely quoted articles in Atherosclerosis, which was also written by the team of Kastelein, JJP. [25] One of the journal's most recent studies examined the external validity of the SAFEHEART risk prediction model in patients with FH in an English routine care cohort. [26] Prevalence and genetic analysis of FH. The term "Familial Hypercholesterolemia" was assigned to the largest cluster of co-cited references. Amit V. Khera's paper from cluster #0, which was the most often cited, discussed the likelihood of an FH mutation in those with severe hypercholesterolemia and the risk of developing coronary artery disease. They discovered that 1.7% of patients with LDL-C > 190 mg/dL had an FH mutation detected by genome sequencing. FH mutation carriers, however, exhibited a markedly elevated risk for coronary artery disease at any given LDL-C level. [27] The purpose of the second-ranked paper of the cluster, published by Marianne Benn, was to determine the prevalence and predictors of FH-causing mutations in general populations. It was discovered that these mutations are estimated to occur in the general population in 1:217 (0.46%) of cases. [28] Additionally, the third, released by Samuel S. Gidding et al on behalf of the American Heart Association, was a scientific statement that gave an agenda for further advancement, building on the foundation set by current recommendations and assessments of advancements in diagnosis and treatment. [4] Samantha Karr was the cluster's most important citer. Karr has covered the epidemiology and treatment of FH and other forms of hyperlipidemia, and offers guidelines for best practices in clinical care to help medical practitioners effectively manage patients with these disorders. [29] (Table 3: Top 10 co-cited references; LDL = low-density lipoprotein, LLR = log-likelihood ratio, Narc-1 = neural apoptosis-regulated convertase. Figure 6: Alluvial flow map of co-cited references in the last 5 years; each line represents a study, and colored, continuous lines refer to articles that were cited continuously or with the largest alluvial flow. Table 5: Top 10 largest flows of co-cited references; the flow increases as the flow lines become thicker, reflecting the significance of the co-cited reference.) The word "prevalence" appeared 52, 133, and 138 times in 2014 to 2016, 2017 to 2019, and 2020 to 2022, respectively, in CiteSpace when we clicked the node "prevalence" in Figure 7A to view its history of appearance. This indicates that the prevalence of FH has truly attracted widespread attention, as can be seen in Figure 7B. In 2022, there were about 15 articles focusing on the frequency of FH in various areas. [30][31][32][33][34] The predicted prevalence of probable/definite FH in the US as a whole was 0.40%, or 1:250.
[35] In the meta-analyses conducted in 2020, the prevalence of HeFH in the general population was estimated as 1:313 and 1:311. [36,37] According to recent research by Haobo Teng et al using 2 of the CEFH criteria, the prevalence of FH in the Chinese population aged 35 to 75 years was 0.13% (or around 1 in 769), and the patients were severely undertreated and undercontrolled. [34] However, each time we cite a prevalence figure now, we must determine which diagnostic criteria were applied and whether genetic analysis was performed. This is one of the main causes of the current boom in interest in genetic analysis as a field of study. Prior research approximated the frequency of FH without taking genetic testing into account, [34][35][36][38] but in the Netherlands, Norway, UK, Spain, Denmark, Belgium, Canada, Australia, New Zealand, and South Africa, among others, FH genetic testing has been carried out more widely and/or at the population level. [5,39,40] According to research by Mark Trinder et al, both monogenic and polygenic hypercholesterolemia were substantially linked to an elevated risk of CVD events among people with similar levels of LDL-C, as opposed to people with hypercholesterolemia without a known genetic basis. [41] The prevalence of FH was determined to be 1:270 using the DLCN clinical criteria alone, 1:263 with genetic testing alone, and 1:152 when clinical criteria and genetic testing were combined, according to research by Brandon K. Bellows. Additionally, in young individuals aged 20 to 39 years, the prevalence estimated using clinical criteria alone was 1:769; however, when clinical criteria and genetic testing were combined, this estimate increased to 1:238. [42] "Clinical Genetic Testing for Familial Hypercholesterolemia," a 2018 JACC Scientific Expert Panel publication, had the highest citations among the cluster #15 (genetic analysis) articles and is the No. 22 reference with the greatest citation bursts in Figure 5D. According to their recommendations, the standard of care for individuals with definite or probable FH and their at-risk relatives should include FH genetic testing. The LDL receptor, apolipoprotein B, and PCSK9 genes should be tested; depending on the patient's condition, more genes may need to be examined. More diagnoses, more efficient cascade testing, faster therapy initiation, and more precise risk classification are the anticipated consequences. [3] The articles citing the most references in the cluster were FH guidelines for Australia [43] and Brazil, [44] respectively. They all concur that both genetic testing and phenotypic criteria should be used to make the diagnosis of FH, but that phenotypic criteria should be used in the absence of genetic evidence. PCSK9 inhibitor. The PCSK9 inhibitor is one of the research hotspots, as shown by the burst keyword analysis. The article with the highest alluvial flow in Figure 6 and the No. 25 article in Figure 5D both discussed the clinical efficacy of the PCSK9 inhibitors evolocumab and alirocumab. The proprotein convertases, enzymes that change proproteins into their functional protein forms, include PCSK9. By binding to the LDL receptors on hepatic cells, its active form signals their degradation. As a result, there is a general decline in LDL receptor availability, which raises LDL-C. Because LDL-C requires receptor binding in order to enter cells, LDL-C levels in the blood start to rise when receptors are lost.
PCSK9 inhibitors block PCSK9 proteins, so more LDL receptors are expressed, which decreases the LDL-C level. [45,46] Alirocumab and evolocumab, IgG1 and IgG2 monoclonal antibodies respectively, block the PCSK9 enzyme and stop the loss of LDL receptors. [47,48] The resulting increase in LDL receptors on the surface of hepatocytes leads to lower levels of LDL-C. [48] Phase III clinical studies of alirocumab revealed a substantial reduction in LDL-C. [49] Additionally, its mid-term and long-term efficacy has already been demonstrated. [20,50,51] Evolocumab treatment significantly decreased LDL-C in both adult and pediatric patients with HeFH. [52,53] In individuals with homozygous FH (HoFH), alirocumab and evolocumab were both safe and well tolerated. [54,55] Recently, several researchers have been interested in how evolocumab affects cognitive function. They discovered that evolocumab had no detrimental effects on cognition in either adults or children. [56][57][58] Inclisiran - new approaches to reduce LDL-C. The cluster analysis of co-cited references and keywords revealed that inclisiran, a different strategy for targeting PCSK9 that uses RNA interference, was one of the research hotspots. In contemporary biomedical engineering and clinical disease treatment, gene therapy has emerged as a significant field of study. New technologies, including clustered regularly interspaced short palindromic repeats, antisense oligonucleotides and small interfering ribonucleic acid (siRNA), and new transport techniques, such as nanomaterials and lipid carriers, have recently emerged, greatly promoting the use of gene therapy in clinical settings. [59] Inclisiran, a long-acting synthetic double-stranded siRNA, is conjugated to a triantennary N-acetylgalactosamine carbohydrate ligand that binds the asialoglycoprotein receptor, allowing it to target PCSK9 mRNA in hepatocytes. [60] Inclisiran, a small interfering RNA (siRNA) drug that inhibits PCSK9 synthesis, lowered LDL-C by up to 50% in phase I and phase II studies; the reduction was dose-dependent, and PCSK9 and LDL-C levels were kept lower for around 6 months. [61,62] No particular severe adverse effects were noted. About 15,000 people with a history of myocardial infarction or stroke are now participating in the HPS4/TIMI65/ORION4 study, which has a mean duration of 5 years and compares inclisiran with placebo. Kausik K. Ray et al merged individual patient data from 3 Phase III lipid-lowering trials to assess the impact of inclisiran treatment versus placebo on the risk of cardiovascular events and to provide preliminary insights into the potential of this therapeutic approach. According to their findings, inclisiran significantly decreased composite major adverse cardiovascular events, but not fatal and nonfatal myocardial infarction or fatal and nonfatal stroke. [63] The appropriate treatment and management of this patient group has proven to be a formidable issue for the worldwide medical community due to the very significant risk of cardiovascular events in FH. Patients with HoFH now have hope for a cure because of the advancement of gene therapy technologies. [64] The development from classical gene substitution to gene editing has opened up countless treatment options for FH. As the technology advances, gene therapy may eventually enable FH patients to receive lasting benefits from a single therapy in a safe and ethical manner, significantly lowering the financial burden that FH places on society at large.
Future studies may focus on the efficacy and safety of gene therapy. With regard to the current status of research and the trajectory of specific research areas, there are many studies that use bibliometrics and knowledge mapping. [65][66][67][68] CiteSpace and the alluvial generator, which were used in this study, are 2 of the most widely used bibliometric tools. Our analysis strategy is similar to those employed in previous research. However, there are several limitations to this study. To begin with, we only included scientific papers from the WoSCC and omitted material from other databases like Google Scholar and PubMed. This may have led to bias. Second, some crucial information or viewpoints could have been left out by the software, because the material we downloaded was not the full text. Our research, however, is based entirely on information that was acquired without any supervisory bias. Third, because synonyms were used in the analysis, it is possible that subjectivity might have influenced the results. Co-reference analysis may nevertheless result in unavoidable loss of accuracy. Finally, the authors may still have overlooked certain literature and therefore missed some important study areas. Conclusions This study systematically assessed the research publications on FH by bibliometric analysis. The number of publications, which rose yearly until 2018 and has remained at a high level in the years after, is sufficient to demonstrate that this study area has consistently attracted the attention of many academics. Evaluations of publications from various nations, organizations, authors, and journals demonstrated their contributions to FH research and may also be used to direct future study. We have identified the FH research hotspots and trends for the future through the examination of references and keywords. The incidence and genetic diagnosis of FH have garnered a lot of attention, and the patients remain significantly undertreated and poorly managed. The effects of PCSK9 inhibitors on adults and children have caught the interest of researchers. The most promising treatment for patients with HoFH may be gene therapy, which includes the PCSK9 siRNA medication. The efficacy and safety of gene therapy could be the subject of future research. Researchers interested in this topic may find in this study a reference on research trends and may save time when looking for research frontiers and hotspots.
2023-07-14T15:04:21.089Z
2023-07-14T00:00:00.000
{ "year": 2023, "sha1": "982092b4ca6de37af7f32951d6a6b648d68d26d1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "WoltersKluwer", "pdf_hash": "260037a2815029d969b681341f6fbb9fac797302", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
225269031
pes2o/s2orc
v3-fos-license
Hyaluronic Acid Mediated Zinc Nanoparticles against Oral Pathogens and Its Cytotoxic Potential Aim: To determine the activity of hyaluronic acid mediated zinc nanoparticles against oral pathogens and their cytotoxic potential. Introduction: Hyaluronic acid is a non-sulfated glycosaminoglycan. Bacterial invasion can also be repressed by an inhibitor interfering with the receptor interactions required for bacterial invasion; hyaluronic acid is an example of such an inhibitor. Nanoparticle research is considered one of the most promising fields of science and technology, and maintaining the shape, size and distribution of nanoparticles helps to control their function and interaction with other molecules. Materials and Methods: 0.1 g of hyaluronic acid was added to a flask containing 100 ml of distilled water and heated for an hour. After observing the solubility, 0.574 g of Zn was added to the mixture, which was then kept on a magnetic stirrer for 1 hour at 100 degrees Celsius. Antimicrobial activity: the agar well diffusion and disc diffusion methods were used; plates were incubated at 37 degrees Celsius for 48 hours and the zone of inhibition was recorded. Cytotoxic potential: different concentrations of hyaluronic acid mediated zinc nanoparticles were incorporated into the wells, and after 24 hours the results were analysed. Results and Discussion: Hyaluronic acid mediated zinc nanoparticles proved to be effective against a wide range of foodborne and clinically relevant Gram-positive and Gram-negative bacteria using several assays such as disk diffusion and agar or broth dilution. Hyaluronic acid mediated zinc nanoparticles have high cytotoxic potential, as proved with the help of brine shrimps. Conclusion: From the observed results, it has been concluded that hyaluronic acid has many medicinal values, antimicrobial activity and good cytotoxic potential. INTRODUCTION Hyaluronic acid is a non-sulfated glycosaminoglycan. Bacterial invasion can also be inhibited by an inhibitor interfering with the receptor interaction required for bacterial invasion; a main example of such an inhibitor is hyaluronic acid [1]. Nanoparticle research is considered one of the most promising studies in science and technology [2]. By maintaining the shape, size and distribution of nanoparticles, their properties, as well as the nature and intensity of their interaction with subsequent molecules, can be successfully controlled. A main method to edit or modify the final result of a nanomaterial is to use a polymeric support [3,4]. The antimicrobial activity of metal nanoparticles has been investigated intensively in recent years, as an alternative treatment for infected wounds, mainly due to antimicrobial-resistant bacteria, and hence nanoparticles of different metals have been studied [5,6]. It has been demonstrated that gold mediated nanoparticles are inert or have a nontoxic effect on human cells [7]. The results predicted that gold nanoparticles are not cytotoxic or immunogenic but are biocompatible, corroborating their potential in the area of nanomedicine [8]. On the other hand, good antimicrobial activity has been reported against various pathogenic bacteria [9,10]. In contrast to the gold nanoparticles, silver mediated nanoparticles show high toxicity associated with their oxidative and inflammatory nature [11].
It has been suggested that silver nanoparticles can inhibit the main mechanism of antioxidant defence through a decrease in glutathione and the promotion of lipid peroxidation [12]. Mitochondria are the cellular compartment with the highest sensitivity to silver nanoparticle toxicity [13]. Hyaluronic acid is a basic component of the extracellular matrix of the skin, mucosal tissue, joints, eyes, and many other organs and tissues. It takes part in tissue repair processes and is a required component in the resurfacing of the skin and the prevention of scar formation. Its osmotic capability restores tissue hydration during the inflammatory process, and its viscosity helps to prevent the passage of bacteria and viruses into the pericellular area. It is a known stimulator of the inflammatory process because it acts as a barrier to tissue degradation and has antioxidant properties, including the ability to eliminate free radicals [14,15]. The study was aimed at determining the effect of hyaluronic acid mediated zinc nanoparticles on oral pathogens and their cytotoxic potential. Biosynthesis of Nanoparticles In a flask, 0.1 g of hyaluronic acid was added to the distilled water and heated for an hour, while the solubility was checked. After checking the solubility, 0.574 g of ZnS was added to the 0.1 g/100 ml hyaluronic acid solution. The mixed solution was kept on a magnetic stirrer for 1 hour at 90 degrees Celsius (Fig. 1). Anti Microbial Activity The agar well diffusion method was used to determine the antibacterial activity of hyaluronic acid mediated zinc nanoparticles. Different concentrations of the compounds were tested against Streptococcus mutans, Lactobacillus and Candida albicans. The fresh bacterial suspension was dispersed on the surface of Mueller-Hinton agar plates, and Rose Bengal agar was used for the antifungal activity. Different concentrations of nanoparticles (50, 100 and 150 μL) were incorporated into the wells and the plates were incubated at 37°C for 24 h. Antibiotics were used as the positive control. The zone of inhibition was recorded on each plate (Figs. 2 and 3). Cytotoxicity Activity Brine shrimp eggs were added to saline water in a hatching chamber. After 24 hours, exactly 10 hatched larvae (nauplii) were suspended in each of 6 wells containing 10 ml of saline water. Different concentrations, namely 5 µL, 10 µL, 15 µL, 20 µL and 25 µL, of the synthesised nanoparticles were dispersed in each well, with the last well as a control (without any nanoparticles). After 24 hours, the number of surviving nauplii was counted manually under a dissection microscope and recorded (Fig. 4). RESULTS AND DISCUSSION The antimicrobial activity was assessed using the agar well diffusion method. Three agar plates, for identifying the inhibitory effect on Lactobacillus, S. mutans and C. albicans respectively, were used (Figs. 2 & 3). Each plate had four wells with different nanoparticle concentrations of 50 µL, 100 µL and 150 µL, while the fourth held the standard. Against Lactobacillus, the diameter of the zone of inhibition was observed to be 8 mm, 11 mm and 15 mm respectively; with S. mutans, the diameter of the zone of inhibition was obtained as 9 mm, 14 mm and 16 mm respectively; and against C.
albicans, the diameter of the zone of inhibition was observed as 11 mm, 15 mm and 16 mm respectively. Thus, maximum activity against all three organisms was observed at 150 µL against the standard. Cytotoxic Activity The cytotoxic properties were assessed using brine shrimps. Ten nauplii were placed in each of six wells, one holding the standard and the remaining wells holding nanoparticle concentrations of 5 µL, 10 µL, 15 µL, 20 µL and 25 µL. The LD50 concentration was obtained as 25 µL, with half the population of nauplii in the respective well surviving post incubation. (Fig. 1: process of preparation of hyaluronic acid mediated zinc nanoparticles. Figs. 2 and 3: antimicrobial activity of hyaluronic acid mediated zinc nanoparticles, with minimal inhibitory concentration. Fig. 4: cytotoxic activity of hyaluronic acid mediated zinc nanoparticles.) The biologically synthesized zinc nanoparticles using hyaluronic acid were found to be highly toxic against different pathogenic bacteria and fungi of the selected species. The zinc nanoparticles synthesized are more toxic towards the fungal species when compared to the bacterial species. Hyaluronic acid mediated zinc nanoparticles proved to be effective against a wide range of foodborne and clinically relevant Gram-positive and Gram-negative bacteria using several assays such as disk diffusion and agar or broth dilution. Hyaluronic acid mediated zinc nanoparticles have high cytotoxic potential, as has been proved with the help of brine shrimps. CONCLUSION From the observed results, it has been concluded that hyaluronic acid has many medicinal values, antimicrobial activity and good cytotoxic potential. CONSENT It is not applicable. ETHICAL APPROVAL It is not applicable.
2020-09-03T09:03:59.969Z
2020-08-26T00:00:00.000
{ "year": 2020, "sha1": "16809b95cd5ebd42a00ac8478d7f39c3a462610f", "oa_license": "CCBY", "oa_url": "https://www.journaljpri.com/index.php/JPRI/article/download/30716/57626", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0c896163bc94c92ef8cb2bc1cdeeb61e7ab8c776", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
216862805
pes2o/s2orc
v3-fos-license
The Establishment of Azad School Utmanzai and Anjuman-i-Islahul Afaghina: A Successful Methodology of Organizational Excellence (1921-1946) This historical paper explores the role of one of the indigenous educational movements of the British Indian North West Frontier Province, now Khyber Pakhtunkhwa in Pakistan. The movement, 'Anjuman-i-Islahul Afaghina', was established in 1921 by Abdul Ghaffar Khan, also known as Bacha Khan (1890-1988), and his companions to educate the underprivileged Pakhtuns in the early decades of the 20th century. The Anjuman established 104 Azad schools in the settled as well as the tribal territory to educate the nation in a formal way; besides this, education and training were imparted through non-formal modes as well. The strength of the Anjuman was its organizational excellence in the shape of its components, such as propagation, fund raising, management committees, a faculty of intellectuals, co-curricular activities, annual jamborees, conflict resolution committees, literary activities and social reformation. No doubt, the movement contributed to educating the underprivileged Pakhtun nation while ensuring both the quantity and the quality of education. The paper also discusses the promotion of Pashtu language and literature in the Azad schools, the medium of instruction, the awakening of Pakhtuns for social reformation, annual meetings, Bacha Khan's addresses to students, the curriculum, administration and the examination system. The Formation of First Azad School Utmanzai and Anjuman-i-Islahul Afaghina The failure of the Hijrat movement to Afghanistan not only awakened the Pakhtun intellectuals but forced them to struggle for the social uplift of the community. Before the formation of the Anjuman-i-Islahul Afaghina, Bacha Khan successfully founded the Azad School in Utmanzai on 1 April 1921. According to Akbar (1977), Bacha Khan sent a delegation of nobles to Lakaro, a tribal area in Mohmand Agency, where he was staying with Haji Sahib of Turangzai; he had just returned from the Hijrat movement and was planning to do something in education in the tribal belt. The delegation told him that Bacha Khan had opened a Madrassa in Utmanzai, and that he was invited to work with him to serve the nation instead of going abroad. It was March 1921. However, Shah (2007) is of the opinion that the Anjuman-i-Islahul Afaghina was established before the formation of the first Azad School in Utmanzai. He narrated that: "To pursue some of these goals and objectives, on 1st April 1921, the Anjuman-i-Islahul Afaghina (the Society for the Reformation of Afghans) was formed with Ghaffar Khan as its President and Mian Ahmad Shah as Secretary. On 10 April 1921, nine days after the formation of the Anjuman, the first branch of Azad Islamia Madrassa was opened at Utmanzai, followed by many more branches in different areas of the Peshawar Valley." Abdul is of the opinion that Azad School Utmanzai was established before the Anjuman-i-Islahul Afaghina. He explained that after the establishment of the first Azad School, it was necessary to form an association to take care of the school; he elaborates that the founders felt the intense need for an association which could undertake the responsibility for its supervision. Bacha Khan, in his own autobiography 'My life and struggle', did not give any date for the formation of the school and the Anjuman; however, the title of one of the chapters is arranged as 'Intellectual revolution, educational and reformative struggle, Azad high school of Utmanzai and Anjuman-i-Islahul Afaghina, the presidency of Khilafat Committee'.
Likewise, in the subtitle of that heading, 'Azad school of Utmanzai and Anjuman', the Azad School is mentioned before the Anjuman, suggesting that the Azad School came first. Having gone through the archive records, some primary sources, and the authenticity and genuineness of the secondary sources, it is established that the Azad School Utmanzai was founded on 1 April 1921, while at the same time the struggle for the creation of an Anjuman was initiated. The development of a constitution for the Anjuman took some time after the school was founded; the Anjuman was formed in the second week of April 1921. The school was set up in the house of Akram Khan, with boarding in his guest house (Hujra). It was situated near 'Dhab Bridge', called 'Dhab Pul' in the local accent, just in front of the shrine of Syed Shikkh Jalal Bukhari at the entrance of the village Utmanzai coming from Charsadda. There were five classrooms initially, besides the three boarding rooms and an office for the Anjuman in the building, according to Abdul Majid and Dost Mohammad Khan. The inaugural ceremony was attended by more than 200 people, including persons from all walks of life. The honor of being the host was given to Akram Khan, who wholeheartedly sponsored it in cash and kind. The lunch was offered by Abbas Khan, Khan Akbar, Haji Abdul Ghaffar (Utmanzai) and Bacha Khan. The discussion addressed the social, political and educational situation of the region. All the mentors expressed their views. It was decided that the Azad School Utmanzai would be considered the central institution, and that other such schools would be established in other villages. It was also suggested that an association for social uplift be formed to stop un-Islamic traditions in the society. Furthermore, unity amongst Pakhtuns, eradicating social evils, discouraging disharmony, and the creation of real love for Islam and the brethren were selected as target areas. A three-member committee comprising Abdul Akbar Khan, Khadim Mohammad Akbar, and Barrister Mian Ahmed Shah was asked to prepare the constitution for the purpose. The proposed constitution was prepared in twelve days, with the following clauses. 1. The agreed name was selected as Anjuman-i-Islahul Afaghina, the association for the reformation of Afghans. 2. All decisions shall be made by consensus. 3. The association will comprise regular office bearers. 4. Accounts shall be kept and presented annually at its meeting. 5. The office of the Anjuman is to be decided by the members. 6. The advisory council will comprise fifty members. 7. The advisory council will necessarily meet twice a month. 8. The advisory council will take the responsibility of bearing the expenses of the Anjuman as well as the school. 9. No subscriptions are allowed from common people whatsoever, unlike other organizations. 10. All the members of the advisory council will donate a minimum sum of Rs. 50 annually. 11. Sub-Anjumans are to be established in other villages; abiding by the constitution is mandatory for all such associations. The members are asked to refrain from all kinds of enmity with the people. The members will also avoid any kind of unethical disputes in the courts of law. The members will not work for the Britishers. All the members will love their native language and will work for its promotion. 12. The Anjuman will organize a Pashtu poetic competition annually in Utmanzai.
The poets will also be given an invitation to attend the annual meeting of the school. 13. The Anjuman will encourage the patriotic poets and will help the poor ones. 14. The Anjuman will publish a newspaper, if permission is given to it. 15. The association's office bearers were decided as: one president, two vice presidents, one general secretary, one assistant secretary and one treasurer. 16. An inspector is to be appointed for the examination of the schools. The proposed constitution of the Anjuman-i-Islahul Afaghina was presented to its members at a meeting held in front of the school. After thorough discussion, a clause was included at the suggestion of Maulana Mohammad Israel: to open a new section in the school for the instruction of religious learning, that is, Quran, Hadith, Fiqh and the Arabic language. The suggestion was approved. The office bearers were also finalized at the meeting. They were: Mohammad Abbas Khan as president, Mian Abdullah Shah as secretary, and Khadim Mohammad Akbar as vice president. Mian Abdul Maroof Shah was selected as the in-charge of examinations and vocational education in the schools. The following members of the advisory council levied the subscriptions. The Anjuman was then divided into three separate bodies: one committee was deputed to increase membership, another to carry out propagation and the third to generate funds. However, Bacha Khan, while assigning the duties to the right persons, selected educationists to take care of school matters. In one of the annual meetings, while emphasizing the division of work amongst the right persons, he stated his famous saying, which became a proverb in the Pashtu language in later days. He used to say kar la khalak ogoray, khalko la kar ma goray: find the right persons for the job demanded, and do not spoil the job by incorrect selection. The propagation task was an enormous challenge. This assignment was beautifully and skillfully undertaken by Maulana Mohammad Israel, Mian Ahmed Shah, Abdul Akbar Khan, Master Karim and Ahmed, and to a great extent the success was delivered by the students of Azad School Utmanzai. Ahmed, while describing the situation, recalled that he too was inspired by a student's speech on the occasion of Eid prayers at the Eidgah, Charsadda. The oratory of the students and other members of the Anjuman no doubt played an important role in mobilizing people. Groups of students and Anjuman members visited the villages and delivered speeches in the mosques and community centers (Hujras). The patriotic songs sung by the students multiplied the inspirational feelings. There was a DB school in Utmanzai, which had been opened earlier as a missionary school. However, after its complete failure as a missionary school, it was discontinued, and in 1907 the same school was given to the District Board authorities. The land for both the schools was donated by Bahram Khan, the father of Bacha Khan, and Mohammad Abbas Khan, a landlord of the village. After being taken over by the District Board in 1907, the school could not attract the common masses, except for a few landowners and families of the upper class. In later years, the DB school attracted Hindu and Sikh families, as shown by the record of admissions. Comparing the DB school and the Azad School Utmanzai, it is concluded that the former had 849 students from its birth to 1947, while the latter was an example of quantity and quality for the uplifting of the society towards sublime norms, traditions and contributions. The DB school created servants, while the Azad School produced the mentors and pioneers of the society.
The land for the DB school was donated by Behram Khan, the father of Bacha Khan. The initial plot was 3 kanals. Mohammad Akram Khan donated his home and guest house for the Azad School Utmanzai. There was a plot of 16 jirib (80 kanals) in front of the Azad School Utmanzai, which was used for the annual meetings, and all the dramas were staged in the same area. The land was used for Khilafat and other meetings, too. The land was donated to the DB school after the Azad School was merged into it. The land had been given away by Mohammad Abbas Khan for the Azad School in 1921, when he was elected as the first president of the AIA. Both the plots are now part of the Government Higher Secondary School in Utmanzai. After the removal of Dr. Khan Sahib from the ministry in 1947, Qayum Khan became the new chief minister. He had been a follower of the Khudai Khidmatgar movement, but later detached himself from the movement and formed the ministry in strong opposition to Bacha Khan and his movement. Qayum Khan ordered the demolition of the centre of the Khudai Khidmatgars at Sardaryab. He also issued orders to build a government school on the combined land of Mohammad Abbas Khan and Bacha Khan. Sher Afzal Khan, the son-in-law of Mr. Abbas Khan, was told by the Tehsildar that the CM had issued an order to establish a school on the land, so as to erase the heritage value of the place used by the AIA for the freedom movement and school meetings. All the residents of mohalla Parich Khel gathered and hurriedly established homes on the land which belonged to Mohammad Abbas Khan. However, after checking the revenue record it was found that the remaining land had been purchased by Bacha Khan; it was about 3 jaribs. The government school was established on that very land by Qayum Khan in 1949. However, the present land of the school is more than 3 jaribs, showing that the land from Mohammad Abbas Khan was also taken away for the purpose (84, 43, 85, 1, 14, 14 A and interviews). None of these teachers was paid an attractive salary; rather, they served the nation on very meager pay. The salary of the headmaster was only Rs. 30 per month, while teachers designated as second master and third master were given Rs. 25 and Rs. 20 respectively. All the teachers had been earning more in their previous jobs. Fazli Memood Makhfi had been working as a publicity officer in the government on a higher payroll of Rs. 55. Similarly, Khadim Mohammad Akbar, a patwari by profession, had been drawing a good salary, which he sacrificed in order to serve the nation without taking a single penny from the AIA. Teachers of the Azad School Utmanzai Hussnuddin, one of the teachers of the Azad School, was the father of the famous poet Shamsuddin Muflis Durrani and of Saddudin, alias Jan Khan. He was born in 1890 and died in 1944. He was a naib tehsildar in Malakand. At the urging of Bacha Khan, he resigned from the service and joined the Azad School as a teacher. He was also imprisoned by the British Raj during his teaching. Maulana Mohammad Israel was, in this regard, another exceptional case. He was in charge of religious education in the AIA. He worked in a school as a teacher and used to visit villages with the Anjuman members to incline the people towards trade and commerce and to remove false and un-Islamic beliefs. Most of the teachers were qualified from the Deoband Madrassa, Aligarh College, Islamia College Peshawar and the missionary school Peshawar. However, the Deoband school of thought dominated the Azad school system.
Some of the students of the Azad School later joined the institution as teachers after completing their studies. Master Karim was among them: he received a scholarship for higher studies from the Anjuman and, after graduating from both the University of the Punjab and Aligarh, joined the Azad School Utmanzai as a teacher. He was later promoted to headmaster on account of his deep involvement and services to the school. Management of the School: The most significant feature of all the Azad schools was their strong management committees. The section of the Anjuman deputed to look after the schools consisted of men of influence among the masses, which inspired people to admit their children to the schools. The composition and configuration of the management committee of the Azad School Utmanzai proved highly inspirational, although the traditional feudal system at times discouraged the educational development of the area. The learned religious scholars played a pivotal role in promoting mobilization and in strengthening trust. To secure this collective trust, Bacha Khan appointed Haji Sahib of Turangzai as patron-in-chief of the Azad schools. Curriculum: The school curriculum was initially taken from the Islamia Collegiate School Peshawar. English, Mathematics, History, Geography, Urdu, Islamiyat and vocational subjects were compulsory. In the following years it was refined and further subjects were included in the course. In its annual meeting of 1922, the Anjuman decided to introduce technical and vocational education into the course contents. Teachers and skilled craftsmen were appointed to demonstrate the technical skills of tailoring, hosiery, carpentry, cap making, weaving and so on. The section for religious education took up the plan of enhancing conceptual learning of the holy Quran and the Hadith. In this way the traditional religious figures of the mosques, who had been totally dependent on the villagers, were made independent: they were taught to make their own goods and to sell them in the village. Normally these goods were set out on display in the mosques and sold to the villagers. Medium of Instruction: The medium of instruction was Pashtu and, being the mother tongue, it received enormous recognition from the masses. In those days English was considered the language of civilization and Urdu the sophisticated language of communication. Introducing Pashtu as the medium of instruction was the first such example in any kind of institution in the history of the province. Education in the mother tongue not only enhances conceptual learning but also helps students to create, develop and invent new things and theories. The medium of instruction in the Azad School fostered creative thinking, the logical expression of ideas and the formation of new ideas. This artistic inculcation gave birth to many new things in the history of Pashtu language and literature. The first drama in the history of Pashtu literature was written, staged and directed in the Azad School Utmanzai in 1924. It was sublime work in every department: the themes were superb, and the selection of students as actors, the stage preparation, the arrangements, the display and the direction made it a rich historical enterprise for all times. After the dramatic performance, students of the DB school rushed to the Azad School for admission. The DB schools were meant to produce servants; the Azad School nurtured artists, thinkers, patriots and a conscientious nation.
Co-Curricular Activities in the School: The school curriculum was not merely intellectual; it also laid great stress on social behaviour, and co-curricular activities were accepted as part of the curriculum. The weekly Bazm-i-Adab, or literary session, was arranged in a meaningful way and led the youth towards self-realization and an aesthetic sense. Besides this, sports and games were a regular part of the school's scheme of work. Abdullah Bakhtani recorded Bacha Khan's arrest under the FCR in 1921. Similarly, on 13 March 1937, Lady Cunningham, on a visit to the school, gave away a golden ring, a trophy and a cash prize to Abdul Wali Khan for the participation and victories of the Azad School teams in different games under his captaincy; the prize was received by Abdul Ghafoor, a student of the school, in the absence of Abdul Wali Khan. When Bacha Khan was imprisoned in Haripur jail in 1943, he requested the jail superintendent to provide badminton equipment; he himself prepared the court in the jail premises and regularly played badminton with his colleagues. The school sports society was responsible for arranging sports activities within the premises and outside the campus. Abdul Wali Khan was appointed its first secretary, followed by Amir Nawaz Khan Jalya, Abdul Ghafoor and Fazli Rahim. There was another society, the literary society, established to promote literature, art and poetry; Abdul Ghani Khan was its founding secretary from among the students. The annual Pashtu poetry competition and the annual meeting of the school, however, were beyond the students' capacity and were taken up by the Anjuman. The annual meetings and the Pashtu poetic competitions are among the rich creative contributions of the school and the Anjuman. The first annual meeting was held on 27 April 1922. The first drama was staged by the students and was viewed by an audience of 900. The meeting was arranged by a committee comprising Mian Akbar Shah, Akram Khan, Mohammad Akbar Khadim, Abdul Akbar Khan, Abdullah Shah Deobandi, Aziz Khan of Munaf Kili, Abbas Khan, Maulvi Shakirullah and Dad Mohammad. The school teachers Mohammad Jan of Utmanzai and Mohammad Anwar of Prang worked as stage secretaries. To motivate the students in their studies, prizes were distributed among them; the prizes included notebooks, books, pencils and other learning material. Abbas Khan's mother and the widow of Shahbaz Khan donated Rs. 500 to the school funds. A monthly handwritten magazine, 'Nargis', was also published from the Azad High School Utmanzai from 1933. It contained articles written by the students of the school. The magazine not only cultivated the students' taste for journalism but also their creativity, and it inculcated the art of running a newspaper. Conclusion: The Anjuman-i-Islahul Afaghina was a comprehensive movement which served and struggled not only for the educational uplift of the society but also for the social, moral, intellectual, physical, anthropological, literary, cultural and political awareness of the Pakhtun nation. Pakhtun society was full of social evils, which were keenly observed, and a prescription was suggested through the formation of the Anjuman-i-Islahul Afaghina. The Azad High School Utmanzai, as a centralized institution, worked as a sublime educational lighthouse for the whole province: the school worked as the diverging and the Anjuman as the converging element in this sphere of enlightenment.
The history of the Pashtuns shows that it is by integrating this diversified nation that its public figures have earned a lasting place in history. The Anjuman strove for the integration, reformation, refinement and civilization of the society through its systematic educational movement. Collective effort was declared the sole way to reach the desired destination, and it was for this that the Anjuman was launched. Its initial success was winning over the landlords of the Hashtnagar area, whose feudal psyche, on reflection, is a very intricate one. Winning the sympathies and the contribution of the Islamic scholars of the time was another step forward in gaining trust, particularly at the grass-roots level. Organizational excellence, administrative transparency and close supervision were further characteristics of the movement of the Anjuman-i-Islahul Afaghina. Above all, it was the charismatic leadership which, contrary to other paradigms, focused on training as an innovative style of leadership. Reformation of the society through educational growth in both quality and quantity in the formal mode, and direct work in the society through informal and non-formal means, continued side by side. Education as the only trajectory would not bear fruit if the society were not reformed, or so the Anjuman analyzed. The introduction of stage dramas, poetic competitions and huge annual meetings were steps towards the direct involvement of the society in its own education in a non-formal way. This produced a massive awakening, which later played a formidable role in the freedom movement. In later days, after 1930, it came to be assumed that freedom of the nation from foreign rule was more important than education and reformation. The Anjuman-i-Islahul Afaghina and the Khudai Khidmatgar movement, two sides of the same coin, the one contemplative and the other pragmatic, culturally introduced the Pakhtun nation to the rest of the globe as a civilized nation.
2019-08-18T14:30:53.006Z
2018-09-30T00:00:00.000
{ "year": 2018, "sha1": "05aa3cde3f8db862945fadd2868798f0cc981536", "oa_license": "CCBYNC", "oa_url": "https://gssrjournal.com/jadmin/Auther/31rvIolA2LALJouq9hkR/5w1Orzs6HV.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "af94627b94bbaded3356c8686ac47e4c8790d575", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Political Science" ] }
118948358
pes2o/s2orc
v3-fos-license
Thermodynamic stability of ice models in the vicinity of a critical point

The properties of the two-dimensional exactly solvable Lieb and Baxter models in the critical region are considered on the basis of the thermodynamic method of investigation of the critical state of a one-component system. From the point of view of thermodynamic stability, the behaviour of the adiabatic and isodynamic parameters of these models is analyzed and the types of their critical behaviour are determined. The reasons for the violation of the scaling law hypothesis and the universality hypothesis in these models are clarified.

Introduction
The description of the behaviour of thermodynamic parameters near critical points is one of the basic problems of the theory of the critical state. Direct statistical calculations connected with the evaluation of the partition function of real systems are at present unfeasible, because the interactions, and especially the fluctuations, which are large near the critical point, cannot be accounted for exactly. So, when solving the problem by the methods of statistical physics, one considers either the simplest models, for which the partition function can be evaluated exactly, or an approximate solution of the problem. In the first approach, the exactly solvable two-dimensional models (the Ising, Lieb and Baxter models, among others [1], forming the most valuable possession of statistical mechanics) are of great importance. The second approach is connected mainly with the examination of the asymptotic behaviour of thermodynamic parameters near critical points, as well as with the development of the scaling law hypothesis, the universality hypothesis and the renormalization group approximation in its various variants, and it has been appreciably successful. Indeed, a large class of real systems and models satisfies the scaling law and universality hypotheses. Remarkably, however, there also exist real systems and exactly solvable two-dimensional models for which these hypotheses are violated; the six-vertex ferroelectric Lieb model and the eight-vertex Baxter model [1] are such examples. Our aim is the examination of the critical properties of these models based on the thermodynamic method of investigation of the critical state [2]-[4], which is developed from first principles without any hypotheses. The method is based on a constructive definition of the critical state and on the critical state stability conditions, and it describes a variety of manifestations of the nature of the critical state. The violation of the scaling law and universality hypotheses in the Lieb and Baxter models is explained precisely by this variety. The Lieb and Baxter models give a reasonable fit to real ferroelectrics (antiferroelectrics) and ferromagnets (antiferromagnets), so the application of the thermodynamic method [2]-[4] to them should be of interest for the development of the theory of the critical state.

The thermodynamic method of investigation of the critical state
Let us consider the basic theses of the thermodynamic method and its terminology. The critical state definition, which takes into account the properties of both homogeneous and heterogeneous systems, can be written as the set of homogeneous linear equations (1) of Refs. [2]-[4]. Here X is the generalized thermodynamic force, x is the conjugated thermodynamic variable (the external parameter of the system), and K_c is the critical slope of a phase equilibrium curve. Eq. (1) has non-trivial solutions if condition (2), the vanishing of the corresponding determinant, is fulfilled all over the spinodal.
It coincides with the well-known critical state condition D = 0, where D is the stability determinant of the system [5,6]. According to the terminology of Refs. [5,6], the derivatives (dT/dS)_x and (dX/dx)_S are called the adiabatic stability coefficients (ASC's), whereas (dT/dS)_X and (dX/dx)_T are called the isodynamic stability coefficients (ISC's). The stability coefficients are related to the fluctuations of the external parameters of the system (the first and second Gibbs lemmas), which grow without bound near the critical point. The definition (1) describes the critical state by means of the AP's. The solution of the homogeneous linear equations (1) is the critical slope K_c, which distinguishes the critical point on the spinodal. It is the fundamental characteristic of the critical state and can be expressed via the ASC's (Eq. (3)). This definition, combined with the critical state stability conditions, leads to the existence of four alternative types of critical behaviour of thermodynamic systems [2]-[4]. The behaviour type is defined by the value of one ASC and by K_c, and for each type the behaviour of the whole set of stability characteristics of the system (the AP's and IP's) is determined. The fourth type of critical behaviour is the most interesting and the most "fluctuating" one; in this case it is necessary to consider differential equations of higher orders, and the solution is realized in several ways [2]-[4]. The case of two or even three phase equilibrium curves converging at the critical point is of special interest. Such a point has not yet been found experimentally, but in this paper we demonstrate that the critical point of the ferroelectric Lieb model has just this feature.

The ferroelectric 6-vertex Lieb model
There are many crystals with hydrogen bonds in nature [1]. The ions in such crystals (with coordination number four) must obey the ice rule. The bonds between atoms via hydrogen ions form electric dipoles, so it is convenient to represent them as arrows on the bond curves; an arrow points towards the end of the bond occupied by the ion. Only six such arrow configurations are possible, which is why the ice models are sometimes called six-vertex models. The partition function of such a system is defined by the expression

Z = Sum over configurations of exp[ -(n_1 eps_1 + ... + n_6 eps_6)/(kT) ],   (4)

where the summation is carried out over all configurations of the hydrogen ions allowed by the ice rule, eps_i is the energy of the i-type vertex configuration and n_i is the number of i-type vertices in the lattice. There are three sorts of ice models that have been solved by E. H. Lieb [7,8]. One of them, considered in this paper, can describe KH2PO4 (KDP), a crystal with hydrogen bonds characterized by coordination number four, which orders ferroelectrically at low temperatures under the appropriate choice of eps_1, eps_2, ..., eps_6. For the square lattice this choice is eps_1 = eps_2 < eps_3 = eps_4 = eps_5 = eps_6 (Eq. (5)). In the ground state all the arrows are directed either up and to the right or down and to the left; both of these states are typical of an ordered ferroelectric. The expression (6) for the free energy per lattice point in the presence of a nonzero external field involves the electric polarization P [1] and the constant A = -0.2122064 kT_c, where k is the Boltzmann constant. The critical equation of state (7) corresponds to the phase diagram in Fig. 1. It should be emphasized that the ice model allows investigation on the basis of the thermodynamic method.
In this case the temperature T and the electric intensity E stand for the generalized thermodynamic forces; the conjugated generalized thermodynamic variables are the entropy S and the electric polarization P. Thus, the adiabatic parameters for the given model follow from the free energy (6); as T approaches T_c from below, the free energy reduces simply to eps_1 - EP. Consequently, the heat capacity is finite in the subcritical region and the critical exponent is alpha' = 0. Both phases are completely ordered and differ from each other only in the direction of the electric polarization vector (P = +-1). This corresponds to the second type of critical behaviour according to the thermodynamic classification of critical behaviour types of one-component systems [3]: the critical slope of the equilibrium curve of phases I and II equals zero, K_c = 0. As we can see from Eq. (6), in the supercritical region (T -> T_c^+) the heat capacity diverges as (T - T_c)^(-1/2), i.e. alpha = 1/2. Let us approach the critical point from the supercritical region along the curves of the first-kind phase transitions I-III and II-III. It is known that at least one of the jumps Delta P, Delta S must be nonzero along these curves; at the critical point Delta P = 0. The entropy jump can then be determined from the known behaviour of the heat capacity. For phase I we have alpha' = 0, i.e. C_P = const, and consequently the entropy of phase I is S_I = C_1 ln T + const. For phase III we have alpha = 1/2, i.e. S_III = (C_2/T_c) sqrt(T - T_c) + const, and the jump Delta S along the transition curve follows from the difference of these expressions. Let us analyze the behaviour of the whole set of stability characteristics of the system (the AP's and the IP's). Relations (10) connect the adiabatic and isodynamic parameters; using Eqs. (7) and (10), we obtain the expressions (11) for the AP's and the IP's. As follows from Eq. (11), at T -> T_c^+ all the thermodynamic stability characteristics tend to zero. According to the classification of critical behaviour [3], at K_c = {0, infinity} and ASC's -> 0 we have the fourth type of critical behaviour, with two phase equilibrium lines of different critical slopes. As is known, the stability characteristics are inversely proportional to the fluctuations of the external parameters of the system. At continuous transitions [6], D and the SC's pass through finite minima, which corresponds to a growth of fluctuations. The locus of these minima is the curve of supercritical transitions (the lowered-stability curve, or quasispinodal). The limiting case of these continuous transitions, when the fluctuations in the system are at their highest and D and the SC's pass through zero minima, is the critical state. The critical point is also the limit point of some first-kind transition (the limit point of a phase equilibrium curve). If the phase equilibrium curve and the curve of supercritical transitions pass into each other continuously, i.e. the slopes of these curves are the same, then a tricritical point is observed, at which three phases become identical: two subcritical phases and the supercritical one. On the quasispinodal, condition (12) of Ref. [9] is fulfilled. Using the results (11) to find the stability determinant of the Lieb model and investigating where condition (12) is fulfilled, we obtain E = 0; this is the equation of the quasispinodal for the ferroelectric Lieb model. The resulting phase diagram is shown in Fig. 2. So, the maximal growth of fluctuations is observed at zero electric field, and the critical slope of the subcritical phase equilibrium curve is K_c = 0.
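As a quick consistency check on the exponents just quoted, the following short Python sketch (an illustration written for this discussion, not code from Refs. [1]-[9]; the amplitudes C1 and C2 and all units are arbitrary placeholders) integrates the stated heat-capacity asymptotics and shows how the entropy of the supercritical phase III grows as sqrt(T - T_c), so that the difference between the two entropy branches is dominated by the square-root term close to the critical point.

import numpy as np

T_c, C1, C2 = 1.0, 1.0, 1.0        # arbitrary units and assumed amplitudes

def C_P(T):
    # alpha' = 0 below T_c (finite heat capacity), alpha = 1/2 above (square-root divergence)
    return C1 if T < T_c else C2 / np.sqrt(T - T_c)

def dS_above(T, n=200_000):
    # S_III(T) - S_III(T_c): integrate C_P/T' from T_c to T; the singularity is integrable
    Ts = np.linspace(T_c, T, n + 1)[1:]
    return np.trapz(C2 / (np.sqrt(Ts - T_c) * Ts), Ts)

for T in (1.0001, 1.001, 1.01, 1.1):
    dS_I = C1 * np.log(T / T_c)            # S_I(T) - S_I(T_c) from C_P = const
    print(f"T - T_c = {T - T_c:7.4f}:  C_P = {C_P(T):9.2f},  "
          f"(S_III - S_I) ~ {dS_above(T) - dS_I: .5f}")

The printout merely illustrates that the square-root entropy term overtakes the logarithmic one near T_c, consistent with the jumps Delta P and Delta S shrinking towards zero as the critical point is approached along the transition curve.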
Since the critical slope of the subcritical equilibrium curve coincides with that of the quasispinodal (both are zero), the case of a continuous passage of the equilibrium curve into the lowered-stability curve is realized for this model. Thus, the violation of the scaling law hypothesis in the Lieb model can be explained by the fact that the model corresponds to two different types of critical behaviour: at T -> T_c^- the second type and at T -> T_c^+ the fourth type is fulfilled. Besides, the critical point of the Lieb model is a critical point of a special kind, with three phase equilibrium lines converging in it; moreover, the equilibrium curve passes continuously into the lowered-stability curve.

The 8-vertex Baxter model
The eight-vertex Baxter model is a generalization of the six-vertex Lieb model. The ice models, as models of critical phenomena, have some unusual properties: the ferroelectric state in these models is frozen (i.e. there is complete ordering even at non-zero temperature), and the critical behaviour of the antiferroelectrics is characterized by a more complicated law instead of a simple power dependence on (T - T_c). The first of these unusual properties is connected with the ice structure: in an unbounded lattice with ferroelectric ordering an infinite energy is needed for a deformation, so deformations give an infinitesimal contribution to the partition function [1]. The following generalization of the ice-type models was proposed [10]-[12]: there is only one arrow on each edge of the square lattice; only configurations with an even number of arrows entering (and leaving) each vertex are allowed; and eight possible configurations of the arrows at a vertex exist. The formation of a j-type vertex requires the energy eps_j (where j = 1, ..., 8). For such a model the partition function is again given by (4), with the summation performed over the eight vertex configurations. Thus, besides the first six vertices, which coincide with those of the Lieb model, there are two new vertices for which all the arrows either enter the vertex or leave it. Now a finite energy suffices for a local deformation of the lattice state in which all arrows are directed up or to the right (e.g. for reversing all the arrows lying on the sides of one square), so the ferroelectric state is not completely ordered. As mentioned above, the Baxter model is suited to describing the critical phenomena in ferroelectrics (antiferroelectrics). The eight-vertex model can also be considered as two Ising models with nearest-neighbour interaction, each on its own sublattice, the sublattices being coupled by a four-spin interaction; in this case the model corresponds to ferromagnets. The Baxter model has an exact solution only in the absence of an external field. The critical exponents of this model are given in Ref. [1]; there, the index e denotes the electric exponents, the exponents beta, gamma and delta refer to the ferromagnet, and the exponent alpha is the same both for the ferromagnet and for the ferroelectric. The interaction parameter mu takes values in (0, pi). Thus, as we can see, the critical exponents depend continuously on the interaction parameter. This fact contradicts the universality hypothesis and distinguishes the Baxter model among the other two-dimensional exactly solvable models. Taking this into consideration, one should expect that the type of critical behaviour according to the thermodynamic classification, and the value of the critical slope, change depending on the interaction parameter.
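To make the continuous mu-dependence concrete before turning to the two cases below, the following sketch evaluates the magnetic exponents of the eight-vertex model using the standard literature expressions (alpha = 2 - pi/mu, beta = pi/16mu, gamma = 7pi/8mu, delta = 15, nu = pi/2mu). These formulas are quoted from the textbook treatment rather than taken from the present paper, so the snippet is only an illustration of how the exponents drift with mu while the weak-universality ratios stay fixed.

import numpy as np

def baxter_magnetic_exponents(mu):
    """Magnetic exponents of the eight-vertex model vs. the interaction
    parameter mu in (0, pi) (standard literature values, for illustration)."""
    nu = np.pi / (2 * mu)
    alpha = 2 - 2 * nu
    beta = nu / 8
    gamma = 7 * nu / 4
    delta = 15.0                       # mu-independent (weak universality)
    return alpha, beta, gamma, delta, nu

for mu in (np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    a, b, g, d, n = baxter_magnetic_exponents(mu)
    print(f"mu = {mu:5.3f}: alpha = {a:+.3f}, beta = {b:.3f}, gamma = {g:.3f}, "
          f"beta/nu = {b/n:.3f}, gamma/nu = {g/n:.3f}")

In this picture alpha changes sign exactly at mu = pi/2 (the decoupled-Ising point), in line with the exponent ranges quoted in the following paragraphs, while beta/nu = 1/8 and gamma/nu = 7/4 remain constant for all mu.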
Let us now show this.

The ferromagnetic Baxter model
In the case of the ferromagnet, the adiabatic stability coefficients acquire power-law asymptotic forms governed by the exponents alpha and gamma. It should be noted that in the absence of an external field the behaviour of the isodynamic parameters coincides with that of the adiabatic parameters. When 0 < mu <= pi/2, the exponent alpha is negative and the exponent gamma takes a positive value.

The ferroelectric Baxter model
In the case of the ferroelectric Baxter model the stability coefficients can be written in an analogous power-law form. At 0 < mu <= pi/2, as in the previous case, alpha is negative and gamma is positive, so that (dT/dS)_P = 0 and (dE/dP)_S = 0, whence (dT/dP)_S = 0, K_c = 0, and the second type of critical behaviour is fulfilled. At pi/2 < mu < pi the exponent alpha takes positive values 0 < alpha < 1, but alpha is less than gamma (3/2 > gamma > 1), and the fourth type of critical behaviour with K_c = 0 is realized. It is interesting to emphasize that for real ferromagnets and ferroelectrics the critical behaviour types are also the second and the fourth ones.

Conclusion
Thus, in this paper the thermodynamic stability of the Lieb and Baxter models has been considered by the method of Refs. [2]-[4]. The asymptotic expressions for the whole set of stability characteristics are determined, and the reasons for the violation of the scaling law and universality hypotheses in these models are clarified. We find that the second and the fourth types of critical behaviour are fulfilled in the subcritical and in the supercritical region of the Lieb model, respectively. The violation of the scaling law hypothesis in the ferroelectric Lieb model can be explained precisely by this difference of behaviour types. It has also been ascertained that three phase equilibrium lines with different critical slopes converge at the critical point of the model; the possibility of the existence of such a type of critical point was predicted in papers [2]-[4]. The equation of the quasispinodal is obtained, and it is shown that the equilibrium curve passes continuously into the lowered-stability curve in this model. In the Baxter model the second and the fourth types of critical behaviour also occur; moreover, the fourth type is represented by three possibilities, with three different critical slopes of the phase equilibrium line. The reason for the violation of the universality hypothesis is that each of the mentioned types (the second type, the fourth type with K_c = 0, the fourth type with K_c = {0, infinity} and the fourth type with K_c = infinity) is connected either with a certain value or with a continuous range of the interaction parameter mu. It is interesting to emphasize that in each model, while one hypothesis is violated, the other nevertheless holds. In addition, the eight-vertex Baxter model, in which the universality hypothesis is violated, contains as special cases the Lieb model (mu = 0), where the universality hypothesis is satisfied but the scaling law hypothesis is violated, and the Ising model (mu = pi/2), where both hypotheses are fulfilled. Thus, the capabilities of the thermodynamic method of investigation of the critical state of a one-component system have been illustrated by the example of the above models, and the underlying reasons for the violation of the scaling law and universality hypotheses, connected with the variety of manifestations of the nature of the critical state, have been revealed.
2010-12-02T23:59:43.000Z
2010-12-02T00:00:00.000
{ "year": 2010, "sha1": "9fc3c66f58259787819af2b9170a73966284f364", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9fc3c66f58259787819af2b9170a73966284f364", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257687758
pes2o/s2orc
v3-fos-license
Sub-nominal resolution Fourier transform spectrometry with chip-based combs Chip-based optical frequency combs address the demand for compact, bright, coherent light sources of equidistant phase-locked lines. Traditionally, the Fourier Transform Spectroscopy (FTS) technique has been considered a suboptimal choice for resolving comb lines in chip-based sensing applications due to the requirement of long optical delays, and spectral distortion from the instrumental line shape. Here, we develop a sub-nominal resolution FTS technique that precisely extracts the comb's offset frequency in any spectral region directly from the measured interferogram without resorting to nonlinear $f$-to-$2f$ interferometry. This in turn enables MHz-resolution spectrometry with millimeter optical retardations. Low-pressure MHz-wide absorption lines probed by widely-tunable chip-scale mid-infrared OFCs with electrical pumping are fully resolved over a span of tens of nanometers. This versatile technique paves the way for compact, electrostatically-actuated, or even all-on-chip high-fidelity FTS, and can be readily applied to boost the resolution of existing commercial instruments several hundred times. The compact footprint, low power consumption and native operation in spectral regions relevant for optical sensing makes chip-scale optical frequency combs (OFCs) 1,2,3,4 ideal candidates for broadband and highresolution spectrometers 5 .Arguably, the most popular technique to resolve comb lines relies on dual-comb beating between a pair of mismatched combs 6 on a fast microwave-bandwidth photodetector. Although dualcomb spectroscopy (DCS) enables real-time monitoring of broad optical bandwidths, it poses a significant challenge for precise, extended-timescale measurements. The difficulty lies in ensuring high mutual coherence between the sources via analog 7,8,9 or digital synchronization schemes 10,11,12 , and the need for high-speed digital signal processing. Additionally, strict requirements on the comb optical linewidth make many devices incompatible with the DCS technique.Another challenge faced by chip-scale OFCs is gap-less tunability 8,13,14 for performing measurements beyond the coarse mode spacing on the order of 5-20 GHz, which requires the presence of spectrally-matched low-phase noise regimes. 
Here, we show that a solution for precise high-resolution measurements using chip-scale OFCs lies in the Fourier Transform Spectroscopy (FTS) technique 15, which fundamentally requires the same integration time as the DCS technique to reach the same signal-to-noise ratio (SNR) 16, yet needs only a single comb. The high brightness of OFCs 17,18, previously explored in conventional FTS systems 19,20, is now merged with MHz resolutions obtainable at millimeters of optical retardation, exceeding the nominal resolution more than 100-fold. All of this can be implemented in an arbitrary spectral region, even for free-running OFCs. The only requirement is knowledge of the comb's repetition rate, which can be obtained directly from the device's electrical bias or from a photodetector. Whereas the apparent modal sparsity of chip-scale OFCs can be seen as a disadvantage, it becomes a key enabler for compact high-resolution FTS. The FTS technique employs a Michelson or Mach-Zehnder interferometer to measure a time-averaged field autocorrelation known as the interferogram (IGM, S_0^(int)), related to the power spectral density via the Fourier transform. The IGM results from optical interference between the optical source's waveform and its time-delayed copy on a slow photodetector. A typical IGM of an OFC is expected to possess intensive bursts separated by lower-intensity regions (Fig. 1a), with an envelope periodicity related to the inverse of the comb repetition rate f_r. For an arbitrary (even incoherent) source, the nominal spectral resolution Delta nu_min in FTS is limited by the inverse of the maximum optical path difference (OPD) Delta_max between the interferometer's moving and stationary arms. However, this delay-range resolution limit can be greatly surpassed if the light source has OFC properties 21,22. A special IGM sampling procedure can virtually eliminate the convolution of the measured spectrum with the instrumental line shape (ILS) function induced by the truncation of the scanned optical path. The truncation acts as a box-car window, which limits the resolution and distorts the measured spectrum. This can be mitigated via ILS-free FTS: one has to measure exactly one period of the IGM, which implies an OPD of Delta_max = c/f_r. This is why a 10 GHz semiconductor laser OFC requires only ~3 cm of OPD for megahertz-resolution FTS spectra, which translates into 15 mm of moving-arm displacement in conventional, and 7.5 mm in double-sided mirror 23 or double-pass 24 configurations, or even sub-mm in double-pendulum arrangements 25. This is in stark contrast to nominal MHz-resolution FTS, which requires an OPD in the range of meters.
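The numbers quoted above follow from two one-line relations: the nominal resolution is the inverse of the maximum OPD, while the ILS-free criterion asks only for one full interferogram period, Delta_max = c/f_r. A small back-of-the-envelope Python sketch (illustrative only, not code from the paper) makes the contrast explicit:

c = 299_792_458.0                      # speed of light, m/s

def nominal_resolution_hz(opd_m):
    # classical FTS: resolution ~ 1/OPD, expressed here in optical frequency
    return c / opd_m

def ils_free_opd_m(f_rep_hz):
    # sub-nominal criterion: exactly one interferogram period
    return c / f_rep_hz

for f_rep in (9.6e9, 10e9, 100e6):     # the ICL comb used here, a generic 10 GHz comb, a fiber comb
    opd = ils_free_opd_m(f_rep)
    print(f"f_r = {f_rep/1e9:7.2f} GHz -> one-period OPD = {100*opd:8.2f} cm "
          f"(that OPD alone gives a nominal resolution of {nominal_resolution_hz(opd)/1e9:5.2f} GHz)")

For the 9.6 GHz device used below this is about 3.1 cm of OPD, matching the ~35 mm scan and 8.5 GHz nominal resolution quoted in the Experiment section, while a 100 MHz fiber comb would already need roughly 3 m.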
Unfortunately, the indispensable requirement of prior ILS-free FTS approaches is the knowledge (and stabilization) of the comb's carrier-envelope offset (CEO), or simply offset frequency f_0, which traditionally has been obtained via f-to-2f interferometry 26 of an octave-spanning source. Since in many spectral regions the f-to-2f scheme is either impractical or virtually impossible to implement, this limitation has excluded spectrally narrower sources with low pulse energies or with frequency-modulated (FM) emission spectra, which constitute the majority of chip-scale OFCs, from surpassing the nominal FTS resolution. From a system-complexity standpoint, bulky stabilized near-infrared fiber OFCs with MHz repetition rates require meters of OPD to meet the subnominal resolution criterion, which has restricted such instruments to a laboratory environment. Additionally, access to the spectroscopically relevant mid-infrared spectral range has relied on nonlinear frequency conversion techniques 27, which further increase the system complexity and footprint. By adapting the subnominal FTS procedure to electrically-tunable chip-scale OFCs without an easily measurable offset frequency, we perform high-resolution (MHz) and broadband (THz-wide) spectroscopy with millimeter-long mechanical displacements using millimeter-scale free-running sources operating natively in the mid-infrared. The technique may find application in existing Fourier Transform Infrared Spectrometers (FTIRs), which have conventionally been seen as incompatible with high-resolution applications. Our technique (inspired by the prior work of Maslowski et al. in Ref. 21, where all comb parameters needed to be known and stabilized) enables turning them into instruments with equivalent OPDs on the order of meters without any modifications. This in turn gives access to Doppler-limited molecular transitions at the lower pressures and temperatures occurring, for example, in space. Also, recent developments in on-chip FTS with millimeter-scale displacements 28 can be leveraged to perform high-resolution on-chip spectrometry with meter-scale equivalent OPDs.

Experiment
We will now demonstrate the application of the subnominal FTS technique to interleaved measurements of low-pressure gaseous analytes performed using a compact Michelson interferometer (Fig. 1a). Here, we use two different mid-infrared chip-scale OFCs: a widely-tunable interband cascade laser (ICL) comb and a quantum well diode laser (QWDL) comb (see Supplementary Information for details). A full mathematical description of the technique is given in Methods, but the general idea is laid out here. The essence of the proposed ILS-free FTS procedure (Fig. 1b) is to modify the measured single-period IGM so that the comb lines in the frequency domain are located at the zero-crossings of the truncation-induced ILS 21,22. This is shown schematically in Fig. 1c, which illustrates what happens when this condition is fulfilled (ILS-free) and when it is violated (with ILS). Conventionally, a prerequisite for implementing this technique is knowledge of the comb's f_r and f_0. The ILS-cancelling routine digitally nulls f_0, followed by IGM resampling and truncation to include exactly one signal period defined by f_r. Unfortunately, whereas f_r is easily measurable directly from a laser cavity or photodetector (with 10^-6 or higher precision), for most electrically-pumped OFCs it is extremely challenging to measure f_0. Fortunately, the combination of f_r knowledge with digital retrieval of the IGM phase increment allows us to bypass this limitation. The key is to acquire an asymmetric single-sided IGM, which carries information about f_0, instead of the double-sided IGM used conventionally. This technique unlocks the high-resolution FTS potential of many comb platforms for which f_0 retrieval via f-to-2f interferometry 26 would be difficult or virtually impossible to implement. By definition, the IGM centerburst has zero phase at zero delay (constructive interference of all wavelengths). However, the peak of the first IGM satellite (at a temporal delay of 1/f_r) accumulates a phase related to f_0/f_r. When corrected for the discrete number of points in the IGM (which are sampled at multiples of the FTS reference wavelength lambda_ref), it can be used as a MHz-accurate estimate of f_0.

Spectroscopy
To prove the exactitude of the technique, we have compared the ILS-free spectra of 95 Torr methane (CH4) with those obtained by conventional treatment of IGMs acquired by an interferometer with an OPD of ~35 mm (8.5 GHz resolution), which captures the full centerburst and one roundtrip burst. As an OFC source, we used an ICL comb 4 with a 9.6 GHz repetition rate tuned by the injection current in a gap-less fashion (over a full f_r). Peak intensities and positions were tracked for line intensity data in conventional FTS for comparison. Fig.
2 shows the measured absorbance in the case of unapodized (nominal boxcar), apodized (triangular window), and proposed subnominal FTS technique.While the peak tracking approach enables us to bypass the conventional incoherent source resolution limit (OPD sinc limit), the lines remain nonetheless severely distorted.They possess ringing artifacts, and display negative absorbances, while some features remain unresolved with drastically reduced intensities.In contrast, the ILS-free data that required only the knowledge of the comb's f r possess clear, wellresolved, and undistorted lines.A comparison of the fullspan methane absorbance measurement including > 10 4 spectral elements spaced by 96 MHz with the HITRAN 2020 database 29 is shown in Fig. 3a, and reveals excellent agreement between the fitted, and measured spectrum covering > 40 cm −1 (1.2 THz / 42 nm) with a 96 MHz point spacing.Manifold features of lines R(4), and R(5) in the ν 3 band of CH 4 as narrow as 600 MHz at these conditions, are faithfully reproduced (Fig. 3b, and Fig. 3c, respectively).The major limitation results from the limited dynamic range of 20 dB (absorbance of 2), when the measurement noise floor is reached.Consequently, the SNR of comb lines probing such strong absorptions is drastically lower.It can be greatly improved with longer averaging (here it took 1.5 seconds per step), or a comb source with a greater power per mode. To demonstrate the versatility of the technique and its independence on the comb source, we employed a recently demonstrated mid-IR quantum well diode laser (QWDL) comb 30 at 3 µm to probe two pure molecular standards: hydrogen cyanide (H 12 C 14 N), and acetylene ( 12 C 2 H 2 ), at pressures of 2 Torr, and 10 Torr, respectively.Both analytes possess ∼340 MHz wide Doppler-limited absorption lines (FWHM), which practically require an instrumental resolution greater than 100 MHz to accurately reproduce the lineshape.Figure 4a plots the subnominal resolution FTS measurement of the fundamental ν 3 mid-infrared band of pure H 12 C 14 N, which displays pronounced amounts of noise (0.1 in absorbance units) due to a greater sensitivity to optical feedback of QWDL devices (see Methods).Also, the QWDL comb has a narrower span (∼20 nm).Nevertheless, the FTS measurement is of high fidelity, as shown in Fig. 4b for line P (12), and Fig. 4c for P (7), except for weak absorbance clipping to a value of 2 due to the SNR limit.Even more complex features due to overlapping spectroscopic bands are well resolved for the second molecular standard of 12 C 2 H 2 at 2 Torr (Fig. 4d) with two representative lines in the low-absorbance (Fig. 4d), and high-absorbance regime (Fig. 
4e), both in excellent agreement with the HITRAN model.Note that in all measurements, the spectroscopic axis was retrieved solely from the measured repetition rate and a known interferometer reference wavelength (here HeNe laser).The uncertainty of the frequency scale retrieved from consecutive measurements is estimated to be ±0.005cm −1 (150 MHz, 1.5% relative to f r ).Note that the feasibly of exactly matching the spectrometer sampling points to the comb peak locations is in stark contrast to comb spectroscopy based on grating optical spectrum analyzers (OSA) 5 , which additionally lacks the Jacquinot-(throughput), Fellgett-(multiplex), and Connes-(wavelength accuracy) advantages of FTS.To date, high-resolution OSA instruments have been used to probe only multi-GHz absorption lines, while distortion effects arising due to this mismatch and entrance slit effects may become dominant in the Doppler-limited regime, as probed here. Although in this proof-of-concept, minute-scale demonstration we utilize a commercial FTIR instrument, the combination of millimeter-long optical displacements with the electrically-pumped chip-scale source paves the way for extremely compact, precise, battery-operated spectroscopic instruments with chip-size footprints 28 .Even existing on-chip spectrometers may be boosted in resolution almost a thousand times by employing widely-tunable shorter-cavity OFCs with repetitions on the order of tens of GHz like microresonators 2 or quantum cascade lasers (QCLs) 1,3 .Obviously, the sub-nominal technique does not change their coarse, GHz spectral sampling grid.It only ensures that sparse, interleaved spectra with MHz resolutions around the comb teeth are undistorted.For electrically-pumped OFC, the need for comb line positions tuning for spectral interleaving can be easily fulfilled by simply changing the injection current or temperature.This motivates the future development of broadband OFC sources with reproducible gap-less tuning capabilities yet with less focus on absolute frequency stability.This is because the proposed technique unlocks the highresolution spectroscopy potential of emerging OFC chips with DCS-incompatible optical linewidths.For instance, in some regimes, the ICL comb used here exhibits a comb linewidth of ∼50 MHz, which would render an unresolved and noisy microwave spectrum with the DCS technique, but performs well in FTS.Analogous challenges are faced by QCLs operating in the terahertz range 31 .We also envision enhancing the digitally-enabled ILS-free technique by laser modulation schemes to sense low concentrations of analytes without relying on direct absorption measurements.Beyond analytical spectroscopy, the technique may also find application in OFC coherence characterization using linear microwave interferometry techniques such as SWIFTS 32 to completely eliminate the need for lineshape deconvolution.It will also enable one to analyze the offset frequency tuning characteristics and dynamics in emerging OFC platforms without resorting to optical heterodyne techniques, which are difficult to implement in more exotic spectral regions. 
Methods
Offset frequency retrieval: The electric field emitted by an optical frequency comb can be expressed as a superposition of its equidistant lines,

E(t) = Sum_n E_n exp[i(omega_n t + phi_n)],   (1)

where E_n is the n-th comb line intensity, omega_n = 2 pi f_n is the angular optical frequency, and phi_n is the phase. omega_n follows the frequency comb model with a common offset frequency omega_0 and a repetition frequency omega_r, such that omega_n = omega_0 + n omega_r. The summation in Eq. (1) runs over both positive and negative n because the field is real, which implies that E_n = E*_-n and omega_n = -omega_-n. In FTS, the measured interferometric quantity is the autocorrelation of the time-averaged electric field E(t), which disregards the optical phase:

S_0^(int)(tau) = <E(t) E(t + tau)>_t ~ Sum_n |E_n|^2 exp(i omega_n tau).   (2)

The autocorrelation S_0^(int)(tau), referred to as the IGM, is a function of the relative optical delay tau = Delta/c. For an infinitely long acquisition using a perfect Michelson interferometer, the Fourier transform of Eq. (2) yields an array of Dirac deltas located at omega_n with intensities |E_n|^2. Real interferometers, however, have limited optical displacements (|Delta| < infinity), which introduce a truncation of the acquisition, corresponding in the frequency domain to a convolution of the true optical spectrum with an ILS function. For short optical displacements this not only limits the spectral resolution but also introduces ringing artifacts around absorption lines. Fortunately, the influence of the ILS can be suppressed if one uses an optical frequency comb source 21. Note that, in contrast to earlier approaches that correct the IGM based on known and stabilized f_r = omega_r/2pi and f_0 = omega_0/2pi to obtain an ILS-free spectrum, here we solve a quasi-inverse problem: given a measurement of the free-running f_r, we estimate f_0 directly from the IGM and then remove it digitally. Both frequencies are subsequently used for frequency-axis calibration.

At the heart of the ILS-free technique proposed here is the idea of converting a single-sided interferogram measured over tau in [0, T_r], where T_r = 1/f_r, into a harmonic signal, so that it can be circularly shifted just as if it had been recorded in double-sided mode (tau in [-T_r/2, T_r/2]). In fact, the measured interferogram S_0^(int)(tau) can be seen as a carrier wave omega_c modulated by a periodic envelope function. While the phase of the centerburst (around tau = 0) is zero (by the definition of the autocorrelation, which must have a global maximum at tau = 0), the satellite bursts possess a relative phase shift Delta phi between the envelope peak and the carrier, which is directly related to the offset frequency. In the analytic-signal representation, the measured first satellite (roundtrip) IGM burst can be expressed as

S~_0^(int)(tau + T_r) = exp(i omega_0 T_r) S~_0^(int)(tau).   (3)

Periodicity of the interferogram every T_r therefore occurs only for harmonic (offset-free) combs, i.e. those with omega_0 = 0, because exp(i omega_r T_r) = 1. Therefore, omega_0 needs to be estimated and digitally removed from the IGM via frequency shifting.

To extract the angular offset frequency omega_0 computationally, much more precisely than by interpolating peaks in the frequency spectrum of a long interferometer scan covering multiple interferogram bursts, it is sufficient to retrieve the phase increment Delta phi between the complex interferogram centerburst and the first satellite occurring after T_r, namely

Delta phi = arg[ S~_0^(int)(tau + T_r) / S~_0^(int)(tau) ],   (4)

where the real IGM S(tau) is extended to its analytic representation S~(tau) = S(tau) + i H{S(tau)}, with H denoting the Hilbert transform. It is easy to realize that Delta phi_0 = Delta phi mod 2pi if one recalls the definition of the offset frequency, f_0 = [(Delta phi_0 mod 2pi)/2pi] f_r, which for the angular frequency is simply omega_0 = Delta phi_0 f_r. Therefore, omega_0 T_r = Delta phi_0.
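As an illustration of Eqs. (1)-(4), the following Python fragment (my own sketch under the stated assumptions of a uniformly sampled IGM with known sample spacing dt and repetition rate f_r; it is not the authors' code) builds the analytic signal with a Hilbert transform, compares samples one repetition period apart, and converts the averaged phase slip into an offset-frequency estimate. The sign ambiguity discussed later is left unresolved.

import numpy as np
from scipy.signal import hilbert

def estimate_f0(igm, dt, f_rep):
    """Estimate |f0| from the carrier phase slip between the centerburst and
    the first satellite burst of an IGM covering slightly more than one period."""
    analytic = hilbert(igm)                      # S + i*H{S}
    shift = int(round(1.0 / (f_rep * dt)))       # samples per repetition period T_r
    a, b = analytic[:-shift], analytic[shift:]   # S~(tau) and S~(tau + T_r)
    mask = np.abs(a) > 0.05 * np.abs(a).max()    # keep only samples with non-negligible magnitude
    dphi = np.angle(np.mean(b[mask] / a[mask]))  # average complex vectors first, then take the argument
    return (dphi % (2 * np.pi)) / (2 * np.pi) * f_rep

Averaging the complex ratios before taking the argument is the unbiased estimator advocated in the next paragraph; averaging the individual phase angles instead would suffer from wrapping at plus or minus pi.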
The power of this f_0 retrieval technique stems from the fact that, even for a free-running laser, T_r is typically known with relatively high precision (10^-6 or better), which allows one to incorporate this knowledge to obtain f_0 directly from acquired single-period IGMs. The exactitude of the retrieval will predominantly depend on the accuracy of the reference laser frequency used for measuring the optical displacement, and on that of f_r. It should also be noted that this technique can probe only slow offset temporal dynamics, as only time-averaged signals are measured; this is nevertheless sufficient for retrieving the correction parameters on the near-second acquisition timescales of FTS. What follows from Eq. (4) is that in principle only one point is sufficient to retrieve Delta phi; however, for a statistically more accurate estimate it is practical to calculate the mean over many samples. Rather than calculating the mean of the complex arguments, which would be statistically biased due to the nonlinear arctangent operation and phase-unwrapping issues, the correct way is to average the complex vectors S~_0^(int)(tau + T_r)/S~_0^(int)(tau) first, and only then calculate the complex argument of the average.

Dealing with a discrete number of points: Because the actual interferometer displacement is constrained to be a multiple of the reference wavelength lambda_ref, the offset phase increment Delta phi_0 will almost never be measured at exactly T_r. This also means that the natural frequency spacing in FTS, f_FTS = c/(N lambda_ref), will be close to, but not perfectly matched with, the comb spacing f_r, where N is the number of acquired IGM points. Fortunately, this can be corrected to arbitrary precision by means of Fourier interpolation, i.e. the IGM can be brought to an arbitrary length N' with an equivalent change of the reference wavelength to lambda_ref N/N', a procedure also known as resampling. Simple q-fold IGM interpolation also helps to suppress errors introduced by the discrete sampling intervals, such as the phase error resulting from missing the peak of the centerburst occurring at Delta = 0. Another advantage is that, when the repetition rate varies throughout the scan, the IGMs can be made of the same length (number of samples) through interpolation, which greatly simplifies the analysis of interleaved spectra. To ensure that after phase-shifting the IGM is symmetric (since the measured spectrum is real), one has to acquire N = 2 N_ss + 1 samples, N_ss unique on each side. This is equivalent to an IGM with one sample at ZPD (tau = 0) surrounded by N_ss samples on each side, obtained through N_ss = [c/(2 f_r lambda_ref)], where [...] stands for rounding to the nearest integer. A discrete Fourier transform (or, more specifically, the Fast Fourier Transform, FFT) of the N = 2 N_ss + 1 samples-long signal will yield a frequency spectrum with N_ss + 1 points between DC and the Nyquist frequency, equivalently spaced by f_FTS. Still, to avoid ILS distortion, the linear phase ramp in the IGM due to f_0 must be removed prior to the FFT calculation. To obtain an estimate of the offset frequency f_0 given the discrete constraints, one has to retrieve the carrier phase increment between the (k + N - 1)-th IGM point and the k-th point, i.e. points spaced N - 1 samples apart (assuming sample indexing starts at 1). The estimation exactitude can be greatly improved by averaging <...> in the complex domain using IGM samples that have non-zero magnitudes,

Delta phi = arg < S~_0^(int)[k + N - 1] / S~_0^(int)[k] >_k.   (8)

Although this value is close to the true offset phase increment Delta phi_0, the discrete-domain constraints may introduce a significant estimation bias. The (generally non-integer) number of IGM points required for a bias-free estimate under a perfect OPD-comb match is N_Z = c/(f_r lambda_ref) + 1, whereas N discrete IGM points are actually measured. Therefore, one has to correct the Delta phi estimate by a phase shift due to the N_Z - N mismatch, which depends on the IGM carrier frequency. The carrier can either be coarsely assumed to lie around the center of mass of the comb emission spectrum or be retrieved from the IGM; here we rely on the latter. Using the Kay frequency estimator, which avoids phase unwrapping 33, the carrier nu_c in units of cycles per sample is retrieved analogously to Eq. (8), but from the ratio of neighbouring samples S~_0^(int)[k + 1]/S~_0^(int)[k]. Oscillation at nu_c over the N_Z - N missing samples accumulates a phase delta phi = 2 pi nu_c (N_Z - N), and this correction term is included in the offset frequency estimate, f_0 = f_r unwrap(Delta phi + delta phi)/(2 pi). Of course, the numerator must be phase-unwrapped to avoid large frequency jumps (f_0 typically evolves slowly, much more slowly than f_r per current step). Note that this frequency is used only for retrieving the frequency axis of the FTS measurement or for studying the offset frequency evolution versus injection current. In the offset-phase removal routine, however, it is better to multiply by a complex exponential with a linear phase ramp from 0 to Delta phi in the IGM domain, as discussed in the next section.

Preparation of the interferogram: The offset frequency correction that ensures signal harmonicity relies on complex multiplication: the analytic signal is phase-shifted according to a phase ramp that increases linearly from 0 up to the (k/N)-scaled phase argument Delta phi retrieved from the IGM as in Eq. (8), S_corr[k] = Re{ S~_0^(int)[k] exp(-i k Delta phi / N) }, where Re{...} stands for the real part of a complex argument. Consequently, f_0 is used only for retrieving the frequency axis and is not employed in the correction procedure itself. A visual depiction of the IGM at each processing step is provided in Supplementary Figure S1.

Ambiguity of the offset frequency sign: Because the IGM is a real signal, one cannot unambiguously determine the sign of f_0 solely from a single-period IGM. While this is not relevant for offset frequency cancellation, knowing whether one deals with +f_0 or -f_0 is of critical importance for optical frequency axis calibration and for diagnostic purposes, such as studying the evolution of f_0 as a function of injection current or temperature. The problem is analogous to that of f-to-2f interferometry, where two microwave beat notes lie between DC and f_r, placed symmetrically about f_r/2 (the Nyquist frequency). Initially one cannot distinguish between +f_0 and -f_0 until one actuates the laser and checks in which directions they move; this piece of information makes it possible to determine whether f_0 is positive or negative. Our algorithm deals with the plus/minus f_0 ambiguity by incorporating prior knowledge about the tuning linearity of multimode (or comb) sources. Supplementary Figure S3 shows that an incorrect sign of the retrieved f_0 magnifies the oscillatory frequency change due to f_r (which is measured electrically from the device), whereas the correct choice suppresses the frequency fluctuations and yields a smooth tuning curve. This behavior is typical for chip-scale emitters and has been validated (with coarse instrumental resolution), as shown in Supplementary Figures S4 and S5.
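Continuing the sketch from the previous subsection (same assumptions, still not the authors' code), the fragment below shows the two remaining ingredients of the interferogram preparation: a Kay-style carrier estimate that avoids phase unwrapping, and the removal of the linear phase ramp due to the offset frequency before the FFT.

import numpy as np

def carrier_cycles_per_sample(analytic):
    """Kay-style estimate of the IGM carrier: average the sample-to-sample
    phase increments in the complex domain (no unwrapping needed)."""
    r = analytic[1:] * np.conj(analytic[:-1])
    return np.angle(np.sum(r)) / (2.0 * np.pi)

def remove_offset_ramp(analytic_single_period, dphi):
    """Multiply the single-period analytic IGM by a linear phase ramp running
    from 0 to -dphi and return the real, offset-free interferogram S_corr."""
    n = np.arange(analytic_single_period.size)
    ramp = np.exp(-1j * dphi * n / (analytic_single_period.size - 1))
    return np.real(analytic_single_period * ramp)

The carrier estimate is what allows the phase increment to be corrected for the mismatch between the N measured samples and the generally non-integer N_Z that would correspond to exactly one repetition period.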
Calculation of the frequency spectrum: We calculate an N-point discrete Fourier transform (DFT) of S_corr using the FFT algorithm. This yields N/2 + 1 unique points starting from zero frequency (DC), spaced by f_FTS, which corresponds to f_r in the optical domain. The offset frequency is added globally to each comb line position, so that the n-th point of the ILS-free spectrum lies at an optical frequency nu_n = f_0 + n f_r. Supplementary Figures S2 and S6 show the retrieved optical frequency of one individual comb line, along with f_0 and f_rep, as a function of injection current; both comb platforms exhibit nearly linear tuning.

The FFT is performed on an IGM that starts at Delta = 0 (ZPD) and lasts exactly one period. Because in some cases the centerburst is not sampled at exactly Delta = 0, there is a small phase error due to the IGM's asymmetry, which calls for a phase correction; such errors may also arise from residual offset phase persisting in the roundtrip burst. Here, a variation of the Mertz method 34 is used to address this issue. Assuming the Fourier transform F{S_corr} yields a complex frequency spectrum B(nu) = |B(nu)| exp[i phi(nu)], we can ensure that the spectrum is real (or, equivalently, that the IGM is symmetric) by compensating the spectral phase term phi(nu) using data from regions with non-zero intensity, i.e. where |B(nu)| >> 0. The spectral phase for the correction is defined as phi(nu) = arctan[ Im{B(nu)} / Re{B(nu)} ], where Im{...} stands for the imaginary part of a complex argument. Finally, we find the corrected ILS-free frequency spectrum from B_corr(nu) = Re{ B(nu) exp[-i phi(nu)] }. Such-treated spectra were used for the absorbance calculations.

Experimental setup details: Light from the chip-scale comb devices was first collimated with a black-diamond high-numerical-aperture anti-reflection (AR) coated lens and then guided to the interferometer through a free-space Faraday optical isolator to prevent dynamic optical feedback effects from the moving mirror. To record the FTS IGMs we used a Bruker Vertex 80 FTIR spectrometer with an external thermoelectrically-cooled photodetector (PVI-4TE-3.4, VIGO), whose near-DC output was conditioned by a low-noise current preamplifier (SR570, Stanford Research Systems) set to filter signals below 100 kHz. The internal apertures of the FTS system were set to 1 mm. At a mirror modulation frequency of 40 kHz, 25.3 mm of OPD were scanned in 1 s which, accounting for the return of the mirror to the start position, communication with the instruments, and data exchange overhead, yielded 10 IGMs with 3-3.15 cm OPD per minute. Injection current stepping was provided by a precision source meter (Keithley 2420) connected to the external modulation input port of a low-noise laser driver (D2-105-500, Vescent Photonics), which was also responsible for the device's temperature stabilization. The ICL comb device was a single-section Fabry-Pérot device with a 3 µm wide ridge waveguide (Thorlabs), run without any frequency stabilization. The QWDL device, described elsewhere 30, was housed and biased analogously, except that a different laser driver was used owing to the insufficient modulation range of the D2-105-500; in this case we used an LDX-3620 (ILX Lightwave) low-noise laser driver capable of being modulated by tens of mA. Simultaneously with the optical IGMs, we measured and recorded with a microwave spectrum analyzer the intermode beat note spectrum extracted electrically from the device through a bias-T.
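Before turning to the data processing, the spectrum-calculation step described in the "Calculation of the frequency spectrum" paragraph above can be condensed into a few lines. This is again an illustrative sketch: the linear model for the smooth spectral phase and the 10% intensity threshold are my assumptions, not parameters taken from the paper.

import numpy as np

def ils_free_spectrum(igm_corr, f_rep, f0):
    """FFT of the offset-free single-period IGM, Mertz-style phase correction,
    and an optical frequency axis nu_n = f0 + n * f_rep."""
    B = np.fft.rfft(igm_corr)                      # complex spectrum, N//2 + 1 points
    idx = np.arange(B.size)
    strong = np.abs(B) > 0.1 * np.abs(B).max()     # regions with non-zero intensity
    phase = np.unwrap(np.angle(B))
    coeff = np.polyfit(idx[strong], phase[strong], 1)   # assumed smooth (linear) phase model
    B_corr = np.real(B * np.exp(-1j * np.polyval(coeff, idx)))
    nu = f0 + idx * f_rep                          # optical frequency of each spectral point
    return nu, B_corr

In practice the residual phase is nearly linear when the centerburst misses the ZPD sample by a fraction of a step, which is precisely the error this correction targets.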
Data processing: A Lorentzian fit was applied to all microwave intermode beat note spectra for fr retrieval, while f0 estimation followed the previously described protocols. To obtain the absorbance spectra (base-10 logarithm), we calculated the difference between two ILS-free spectra: with and without the analyte (see Supplementary Figures S7 and S8 for raw sub-nominal resolution optical spectra). The absorbance spectra were next de-fringed using a sum-of-sines model to remove the parasitic etalons produced by the absorption cell's windows. For the methane and acetylene measurements, simple point-wise division effectively cancelled the residual intensity-versus-current modulation produced by external cavity effects, which is also visible in the repetition rate and tuning characteristics of both comb platforms (see Supplementary Information, Sections 2 and 5). For the HCN, due to the pronounced feedback sensitivity of the QWDL, noise cancellation by means of point-wise division was less effective.

Tuning of optical spectra characterized using an optical spectrum analyzer
Experimental evidence of comb line tuning linearity despite the oscillatory trajectories of fr is shown in Fig. S4 and Fig. S5. Two comb platforms are characterized here. Figure S4 shows the tuning map for diode laser frequency combs, while Fig. S5 shows one for an interband cascade laser frequency comb. It is clear that the diode comb exhibits richer fr tuning dynamics.

Raw subnominal-resolution optical spectra
For the reader's convenience, and to justify the ∼20 dB dynamic range, Fig. S7 and Fig. S8 plot raw interleaved spectra calculated using the subnominal resolution routine for acetylene (C2H2) and methane (CH4), respectively. Panels (a) show full-span interleaved spectra with frequency axis calibration based on the estimated f0, known λref (temperature-stabilized HeNe laser), and measured fr. Panels (b) are scatter-type plots where dots sharing the same color correspond to individual comb lines coexisting at a given injection current. They are separated by 0.32–0.33 cm−1, which is the comb repetition rate expressed in wavenumbers. Injection current tuning responsible for spectral interleaving is analogous to having ∼100 single-mode lasers simultaneously tuned in frequency. Each tuning curve is uniquely defined by the comb line number n, f0, and fr. Note that the saw-tooth-like shape of the spectral edges is caused by the appearance of new comb lines that do not exist at lower injection currents. In other words, with increased pumping, the spectrum gradually broadens and lines at the spectral edges reach higher intensities. In contrast, the central part of the spectrum is dominated by frequency tuning with minor intensity changes.
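As an illustration of the absorbance and de-fringing steps, a possible sketch is given below. The difference of base-10 log spectra is equivalent to the negative base-10 log of their ratio; the sum-of-sines baseline fit (number of terms, starting guesses) is an assumed implementation, not the one actually used for these measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def absorbance(sample, reference):
    """Base-10 absorbance as the difference of log spectra measured without
    and with the analyte (equivalently, -log10 of their ratio)."""
    return np.log10(reference) - np.log10(sample)

def defringe(x, a, n_sines=2):
    """Subtract a fitted sum-of-sines baseline approximating parasitic etalon
    fringes; the number of terms and starting guesses are illustrative."""
    def model(xx, *p):
        out = np.zeros_like(xx, dtype=float)
        for i in range(n_sines):
            amp, freq, phase = p[3 * i:3 * i + 3]
            out = out + amp * np.sin(2.0 * np.pi * freq * xx + phase)
        return out
    p0 = [0.01, 1.0, 0.0] * n_sines              # assumed starting guesses
    popt, _ = curve_fit(model, x, a, p0=p0, maxfev=20000)
    return a - model(x, *popt)
```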
Figure 1 - Principle of chip-based self-enabled subnominal FTS. a, Experimental setup constituting a Michelson interferometer with millimeter-long displacements of the moving mirror. An analogous beam path is used for reference wavelength interferometry to ensure uniform sampling of the IGM. The spectrum analyzer / frequency counter can be removed once the repetition rate is characterized. b, Flowchart of the processing applied to the interferometric signal to meet the sub-nominal resolution criterion. c, Illustration of the sampling effect on the measured spectrum (single-burst interferogram, inspired by Ref. 21) when the spectrometer resolution is higher than fr and f0 is not removed (with ILS), and when the points sampled by the FTS are aligned with the comb teeth positions (ILS-free / subnominal). Vertical lines represent comb lines, while filled dots correspond to sampled points. Although only the center comb line is absorbed (Lorentz profile), the ringing artefact affects multiple lines; it disappears only upon matching the FTS sampling and comb frequency scales.

Figure 2 - Comparison of conventional FTS measurements with the subnominal technique. The measured analyte is methane (CH4) in natural isotopic ratio at 95 Torr (12.66 kPa) and room temperature, which displays complex manifold transitions with MHz linewidths at such conditions. In the nominal resolution case, capturing 2 bursts yields a mode-resolved comb spectrum with peak intensities used for spectroscopy. Although in the unapodized (box-car-windowed) and triangularly-apodized cases the resolution is higher than for an incoherent source (OPD sinc limit), the lines are severely distorted: ringing artifacts, negative absorbance, and peak rounding appear. In contrast, the sub-nominal FTS technique provides an undistorted, high-fidelity spectrum comparable with an instrument with a 100× higher resolution.

Figure S2 - Tuning of the offset frequency f0, repetition rate frep, and one of the comb lines for a quantum well diode laser frequency comb retrieved using the algorithm. Note the anti-correlation between f0 and frep. Mutual cancellation of the oscillations yields nearly linear comb line position tuning.

Figure S6 - Frequency axis retrieval for ICL combs. Tuning of the offset frequency f0, repetition rate frep, and one of the comb lines for an interband cascade laser frequency comb retrieved using the algorithm. Note the anti-correlation between f0 and frep. Mutual cancellation of the oscillations yields nearly linear comb line position tuning.

Figure S7 - Sub-nominal resolution FTS spectra showing the tuning range of individual comb lines. The analyte was low pressure acetylene (C2H2 at 2 Torr). The sawtooth-like spectral shape on the edges of the spectrum results from its progressive broadening and the appearance of comb lines that do not exist at lower injection currents. (b) Span of 1.8 cm−1 (54 GHz), (c) span of 0.3 cm−1 (9 GHz), (d) span of 0.033 cm−1 (1000 MHz).

Figure S8 - Sub-nominal resolution FTS spectra showing the tuning range of individual comb lines. The analyte was methane. The sawtooth-like spectral shape on the edges of the spectrum results from its progressive broadening and the appearance of comb lines that do not exist at lower injection currents. Panels (b-d) show individual modal intensities during the current scan. Dots sharing the same color are spaced by fr. (b) Span of 1.8 cm−1 (54 GHz), (c) span of 0.3 cm−1 (9 GHz), (d) span of 0.034 cm−1 (1020 MHz).
2023-03-24T01:27:25.115Z
2023-03-23T00:00:00.000
{ "year": 2023, "sha1": "a852cd136af17f5f9ff2be99da7c75ff9c50e79c", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/lpor.202300724", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "a852cd136af17f5f9ff2be99da7c75ff9c50e79c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
210044571
pes2o/s2orc
v3-fos-license
Multi-region exome sequencing of ovarian immature teratomas reveals 2N near-diploid genomes, paucity of somatic mutations, and extensive allelic imbalances shared across mature, immature, and disseminated components Immature teratoma is a subtype of malignant germ cell tumor of the ovary that occurs most commonly in the first three decades of life, frequently with bilateral ovarian disease. Despite being the second most common malignant germ cell tumor of the ovary, little is known about its genetic underpinnings. Here we performed multi-region whole exome sequencing to interrogate the genetic zygosity, clonal relationship, DNA copy number, and mutational status of 52 pathologically distinct tumor components from 10 females with ovarian immature teratomas, with bilateral tumors present in 5 cases and peritoneal dissemination in 7 cases. We found that ovarian immature teratomas are genetically characterized by 2N near-diploid genomes with extensive loss of heterozygosity and an absence of genes harboring recurrent somatic mutations or known oncogenic variants. All components within a single ovarian tumor (immature teratoma, mature teratoma with different histologic patterns of differentiation, and yolk sac tumor) were found to harbor an identical pattern of loss of heterozygosity across the genome, indicating a shared clonal origin. In contrast, the 4 analyzed bilateral teratomas showed distinct patterns of zygosity changes in the right versus left sided tumors, indicating independent clonal origins. All disseminated teratoma components within the peritoneum (including gliomatosis peritonei) shared a clonal pattern of loss of heterozygosity with either the right or left primary ovarian tumor. The observed genomic loss of heterozygosity patterns indicate that diverse meiotic errors contribute to the formation of ovarian immature teratomas, with 11 out of the 15 genetically distinct clones determined to result from nondisjunction errors during meiosis I or II. Overall, these findings suggest that copy-neutral loss of heterozygosity resulting from meiotic abnormalities may be sufficient to generate ovarian immature teratomas from germ cells. Introduction Germ cell tumors (GCTs) are a diverse group of neoplasms that display remarkable heterogeneity in their anatomical site, histopathology, prognosis, and molecular characteristics [1]. GCTs can occur in the ovaries, testes, and extragonadal sites, with the most common extragonadal locations being the anterior mediastinum, retroperitoneum, and intracranially in the pineal region [2]. GCTs are classified by the World Health Organization into seven histological subtypes: mature teratoma, immature teratoma, seminoma/ dysgerminoma/germinoma (depending on site of origin in the testis, ovary, or extragonadal), yolk sac tumor, embryonal carcinoma, choriocarcinoma, and mixed germ cell tumor [3]. GCTs are the most common non-epithelial tumors of the ovary, but only account for approximately 3% of all ovarian cancers [4]. Greater than 90% of ovarian GCTs are composed entirely of mature teratoma (commonly termed "dermoid cyst"), which is the only benign subtype of ovarian GCT [1]. Among the malignant subtypes, dysgerminoma is the most common and immature teratoma is the second most common. 
Ovarian teratomas contain tissue elements from at least 2 of the 3 germ cell layers and frequently display a disorganized mixture of mature tissues including skin and hair (ectoderm), neural tissue (ectoderm), fat (mesoderm), muscle (mesoderm), cartilage (mesoderm), bone (mesoderm), respiratory epithelium (endoderm), and gastrointestinal epithelium (endoderm). Teratomas can occur in the mature form, composed exclusively of mature tissues, or the immature form, which contains variable amounts of immature elements (usually primitive neuroectodermal tissue consisting of primitive neural tubules) in a background of mature teratoma [5]. Not infrequently, malignant GCTs of the ovary contain a mixture of different histologic subtypes (e.g. both dysgerminoma and yolk sac tumor), for which the designation mixed germ cell tumor is used, often with the approximate fraction of each histologic subtype specified by the diagnostic pathologist. Extensive tissue sampling and microscopic review of ovarian GCTs are required to appropriately evaluate for the presence of admixed malignant subtypes, which is critical for appropriately guiding prognosis and patient management. are present at time of diagnosis (stage IV), 5-year survival of ovarian GCT is relatively high at 69% [4]. This long-term survival in females even with disseminated or metastatic ovarian GCTs reflects the sensitivity of these tumors to the standard cytotoxic chemotherapy regimen of bleomycin, etoposide, and cisplatin [1]. Beyond dysgerminomas, few studies have performed genome-level analysis of ovarian GCTs, and the genetic basis of ovarian teratomas (both mature and immature forms) remains unknown. Polysomy 12 and KIT mutations have been found in ovarian mixed germ cell tumors containing a dysgerminoma component, but have not been identified in pure teratomas [20]. Early studies of ovarian mature teratomas reported that tumor karyotypes were nearly always normal (i.e. 46,XX), but chromosomal zygosity markers were often homozygous in the tumor [21][22][23][24]. This loss of heterozygosity may be explained by the hypothesis that teratomas and other germ cell tumors arise from primordial germ cells due to one of five different plausible meiotic abnormalities, each producing distinct chromosomal patterns of homozygosity [23][24][25][26]. Parthenogenesis (from the Greek parthenos: 'virgin', and genesis: 'creation') is used to describe the development of germ cell tumors from unfertilized germ cells via these different mechanisms of origin, which potentially include nondisjunction errors during meiosis I, nondisjunction errors during meiosis II, whole genome duplication of a mature ovum, and fusion of two ova. However, no studies to date have used genome-level sequencing analysis to identify the specific parthenogenetic mechanism giving rise to individual ovarian GCTs. Here we present the results of multi-region whole exome sequencing of 52 pathologically distinct tumor components from 10 females with ovarian immature teratomas, with bilateral tumors present in 5 cases and peritoneal dissemination in 7 cases. Our analyses define ovarian immature teratoma as a genetically distinct entity amongst the broad spectrum of human cancer types studied to date, which is characterized by a 2N near-diploid genome, paucity of somatic mutations, and extensive allelic imbalances. Our results further shed light on the parthenogenetic origin of ovarian teratomas and reveal that diverse meiotic errors are likely to drive development of this germ cell tumor. 
Study population and tumor specimens This study was approved by the Institutional Review Board of the University of California, San Francisco. Ten patients who underwent resection of ovarian immature teratomas at the University of California, San Francisco Medical Center between the years 2002-2015 were included in this study. All tumor specimens were fixed in 10% neutral-buffered formalin and embedded in paraffin. Pathologic review of all tumor specimens was performed to confirm the diagnosis by a group of expert gynecologic pathologists (K.G., J.T.R., C.Z., and D.A.S.). Whole exome sequencing Tumor tissue from each of the indicated ovarian and disseminated germ cell tumor components was selectively punched from formalin-fixed, paraffin-embedded blocks using 2.0 mm disposable biopsy punches (Integra Miltex Instruments, cat# 33-31-P/25). These punches were made into areas histologically visualized to be composed entirely of the indicated germ cell component (e.g. immature teratoma, mature teratoma, yolk sac tumor, gliomatosis peritonei). Uninvolved normal fallopian tube was also selectively punched from formalin-fixed, paraffin-embedded blocks as a source of constitutional DNA for each of the ten patients. Genomic DNA was extracted from these tumor and matched normal tissue samples using the QIAamp DNA FFPE Tissue Kit (Qiagen) according to the manufacturer's protocol. 500 ng of genomic DNA was used as input for capture employing the xGen Exome Research Panel v1.0 (Integrated DNA Technologies). Hybrid-capture libraries were sequenced on an Illumina HiSeq 4000 instrument. Mutation calling and loss of heterozygosity analysis Sequence reads were aligned to the hg19 reference genome using Burrows-Wheeler Alignment tool [27]. Duplicate reads were removed and base quality scores recalibrated with GATK prior to downstream analysis [28]. Candidate somatic mutations were identified with MuTect v1.1.5 with the minimum mapping quality parameter set to 20. dbSNP build 150 was used to identify and remove SNPs. The following additional filters were applied to candidate mutations from MuTect output: minimum tumor depth 30, minimum normal depth 15, minimum variant allele frequency 15%, maximum variant allele presence in normal 2%. Finally, all candidate mutations were manually reviewed in the Integrative Genome Viewer to remove spurious variant calls likely arising from sequencing artifact [29,30]. FACETS was used to determine allele-specific copy number and loss of heterozygosity regions across the genome [31]. To determine genetic mechanism of origin, tumors were classified into one of five plausible categories based on the zygosity status at centromeric and distal regions, as described by Surti et al [25]. For visualization of zygosity changes across the genome in the tumor specimens, the absolute difference between theoretical heterozygosity (allele frequency = 0.5) of tumor versus normal was plotted. Patient cohort Clinical data from the patient cohort is summarized in Table 1. The 10 females ranged in age at time of initial surgery from 8-29 years (median 17 years). None were known to have Turner syndrome or other gonadal dysgenesis disorder, nor any known familial tumor predisposition syndrome. All patients underwent resection of a primary ovarian mass, along with debulking of disseminated disease observed in the peritoneum at time of initial oophorectomy for 5 patients. 
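The post-MuTect filtering thresholds listed above translate directly into a simple predicate; the sketch below uses hypothetical field names for the variant record and is not the original pipeline code.

```python
def passes_somatic_filters(variant):
    """Filters described in the Methods: minimum tumor depth 30, minimum
    normal depth 15, tumor variant allele frequency >= 15%, variant allele
    presence in the matched normal <= 2%, and exclusion of dbSNP sites.
    The dictionary keys are hypothetical field names used for illustration."""
    tumor_vaf = variant["tumor_alt_reads"] / variant["tumor_depth"]
    normal_vaf = variant["normal_alt_reads"] / variant["normal_depth"]
    return (variant["tumor_depth"] >= 30
            and variant["normal_depth"] >= 15
            and tumor_vaf >= 0.15
            and normal_vaf <= 0.02
            and not variant["in_dbsnp"])
```

Candidates passing these filters would still require manual review (e.g. in the Integrative Genome Viewer) to remove sequencing artifacts, as described above.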
Bilateral ovarian tumors were present in 5 of the 10 patients, 2 with synchronous disease at time of initial diagnosis (a and b) and 3 with metachronous disease that was identified and resected during the period of clinical follow-up (g, h, and j). Primary ovarian tumor size ranged from 4-30 cm (median 15 cm). Four of the patients were treated with adjuvant chemotherapy using bleomycin, etoposide, and cisplatin after initial surgery based on the presence of disseminated immature teratoma in the peritoneum (a, b, e, and k). A fifth patient was treated with adjuvant chemotherapy using bleomycin, etoposide, and cisplatin following resection of a synchronous ovarian immature teratoma at 4.8 years after resection of a contralateral ovarian mature teratoma (g). One exceptional 14-year-old patient (d) initially underwent resection of a unilateral 18 cm ovarian immature teratoma and debulking of disseminated peritoneal disease. Subsequent PET/CT showed widespread bulky lymphadenopathy. She underwent resection of a supraclavicular lymph node at 0.6 years after initial oophorectomy which contained metastatic primitive neuroectodermal tumor (PNET) and atypical gliomatosis histologically resembling an anaplastic astrocytoma of the central nervous system. She was treated with intensive multiagent chemotherapy including vincristine, doxorubicin, cyclophosphamide, ifosfamide, and etoposide. Over the next three years, she underwent additional resections of recurrent/progressive disease in the peritoneum, cyberknife radiotherapy to left axilla, and multiple courses of chemotherapy, first with temozolomide and then with cyclophosphamide and topotecan. She remains alive with stable disease at last clinical follow-up (6.6 years after initial surgery). All other patients in this cohort also remain alive with stable disease or without evidence of disease recurrence at last clinical follow-up (range 2.4-15.3 years, median 6.6 years, excluding patient i with no clinical follow-up data after initial resection). Histologic features of the ovarian immature teratomas Pathologic diagnosis for the ovarian germ cell tumors is summarized in Table 1, and representative photomicrographs are shown in Figure 1. All 10 patients had primary ovarian immature teratomas composed of primitive neural tubules in a background of mature teratoma. In 2 patients, there were additionally admixed small foci of yolk sac tumor and embryonal carcinoma, thereby warranting designation as mixed germ cell tumor, although mature and immature teratoma were the predominant elements in both cases. Five patients also had teratomas involving the contralateral ovary, 2 of which were synchronous and 3 of which were metachronous. The contralateral ovarian tumors were also immature teratomas in 2 patients (a and h), whereas the contralateral ovarian tumors were composed entirely of mature teratoma in 3 patients (b, g, and j). Disseminated disease was found in the peritoneum of 7 patients, which consisted of a combination of immature and mature elements in 5 patients and mature elements only in 2 patients. The disseminated immature elements in one of these patients (d) was histologically diagnosed as primitive neuroectodermal tumor (PNET), as it was composed of sheets of primitive small round blue cells with diffuse immunoreactivity for synaptophysin and without organization into neural tubules or evidence of neuroglial differentiation. Six patients had peritoneal implants composed of mature glial tissue that has been termed gliomatosis peritonei. 
This gliomatosis peritonei was of low cellularity and composed of cytologically bland glial cells in 5 patients, whereas the gliomatosis peritonei was hypercellular and composed of cytologically atypical glial cells resembling anaplastic astrocytoma of the central nervous system in 1 patient (d). Multi-region whole exome sequencing of ovarian immature teratomas Genomic DNA was extracted from 52 tumor regions consisting of ovarian immature teratoma, mature teratoma, yolk sac tumor, and disseminated teratomatous elements, along with uninvolved normal fallopian tube tissue from the 10 female patients ( Table 2). Hybrid exome capture and massively parallel sequencing by synthesis on an Illumina platform was performed to an average depth of 203x per sample, as described in the Methods. Sequencing metrics are displayed in Supplementary Table 1. The number of tumor regions sequenced per patient ranged from 2 to 9, with a median of 4. Paucity of somatic single nucleotide variants in ovarian immature teratomas Based on this whole exome sequencing of 52 tumor samples, we identified a total of only 31 unique high-confidence somatic nonsynonymous mutations (Supplementary Table 2). Despite high sequencing depth, we detected somatic nonsynonymous mutations in only 21 of the 52 samples, and the average number of somatic nonsynonymous mutations in the mutated samples was 0.8 per exome. The mean somatic mutation burden (commonly also referred to as total mutation burden or TMB) per tumor sample was 0.02 nonsynonymous mutations per Mb, which is among the lowest of any human cancer type that has been analyzed to date. Only 1 of the somatic nonsynonymous mutations (PKFP p.A158V in patient a, RefSeq transcript NM_002627) was present in all tumor regions sequenced from a single patient, thereby indicating its clonality and acquisition early during tumorigenesis. However, the other 30 somatic nonsynonymous mutations were present only in a single tumor region or a subset of the tumor regions sequenced, thereby indicating their subclonality and acquisition later during tumorigenesis. For example, the PKFP p.A158V mutation was present in all tumor regions sequenced from patient a, including the immature and both mature teratoma components from the left ovary, as well as the disseminated immature teratoma, mature teratoma, and yolk sac tumor components in the peritoneum. In contrast, the CCS p.R112C (RefSeq transcript NM_005125) mutation was exclusively present in the ovarian mature teratoma component with neuroglial differentiation, and the CIITA p.R2C (RefSeq transcript NM_000246) mutation was only present in the disseminated immature teratoma and yolk sac tumor components. Thus, none of these 30 somatic nonsynonymous mutations could have plausibly been the initiating genetic driver in this cohort of ovarian immature teratomas. No genes were identified to harbor recurrent somatic nonsynonymous mutations across the 10 patients (i.e. no gene was mutated in more than a single patient). Furthermore, no welldescribed oncogenic variants (e.g. BRAF p.V600E) were identified in any of the 52 tumor samples. Among the 723 genes currently annotated in the Cancer Gene Census of the Catalog of Somatic Mutations in Cancer (COSMIC) database version 90 release, only 4 were identified to harbor somatic nonsynonymous mutations in this ovarian immature teratoma cohort. 
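The clonal-versus-subclonal distinction drawn above (a mutation present in every sequenced region of a patient versus only a subset) can be expressed as a small bookkeeping routine; the data layout below is hypothetical and chosen only for illustration.

```python
from collections import defaultdict

def classify_clonality(calls):
    """Label each mutation as clonal (present in all sequenced regions of a
    patient) or subclonal (present in only a subset). `calls` maps
    (patient, region) -> set of mutation identifiers."""
    regions_per_patient = defaultdict(set)
    regions_with_mutation = defaultdict(lambda: defaultdict(set))
    for (patient, region), muts in calls.items():
        regions_per_patient[patient].add(region)
        for m in muts:
            regions_with_mutation[patient][m].add(region)
    labels = {}
    for patient, mut_map in regions_with_mutation.items():
        n_regions = len(regions_per_patient[patient])
        for m, regions in mut_map.items():
            labels[(patient, m)] = "clonal" if len(regions) == n_regions else "subclonal"
    return labels
```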
However, the variants in these 4 genes (TP53, NF1, CTNNB1, and NOTCH2) were each found in a single tumor sample in this cohort, were all non-truncating missense variants, and are not known recurrent somatic mutations in the current version of the COSMIC database. Thus, the functional significance of the identified mutations in these 4 genes is uncertain, and they may likely represent bystander alterations rather than driver mutations. Although KIT, KRAS, NRAS, and RRAS2 are recurrently mutated oncogenes that drive ovarian dysgerminomas and testicular germ cell tumors [14,19,20], we found no mutations in these genes in this cohort of ovarian immature teratomas. Ovarian immature teratomas have 2N diploid or near-diploid genomes with extensive loss of heterozygosity Using FACETS to infer copy number status and the genotype data of common polymorphisms from the exome sequencing, we next assessed the chromosomal copy number and zygosity status of the 52 tumor samples ( Table 2). All of the 52 tumor samples were found to harbor 2N diploid or near-diploid genomes. All tumor samples from 6 of the patients had normal 46,XX diploid genomes. All tumor samples from 3 of patients had neardiploid genomes with clonal gain of a single whole chromosome (+3 in patient d, +14 in patient i, and +10 in patient k). In patient b with bilateral ovarian teratomas, the mature teratoma from the right ovary harbored a normal 46,XX diploid genome, whereas all tumor samples from the left ovary and all disseminated peritoneal tumor samples harbored neardiploid genomes with clonal gains of whole chromosomes 3 and X. No focal amplification or deletion events were identified in any of the 52 tumor samples. None of the tumor samples harbored isochromosome 12p or polysomy 12p. We next plotted the absolute change in allele frequency (ΔAF) for the 52 tumor samples based on the genotype of common polymorphisms across each of the chromosomes, using an average of approximately 17,000 informative loci per genome. Whereas an allele frequency of 0.5 equals the normal heterozygous state for a diploid genome, an allele frequency of 0.0 or 1.0 equals a homozygous state, which could be due to either chromosomal copy loss or copy-neutral loss of heterozygosity. We observed extensive copyneutral loss of heterozygosity across the genomes of each of the 52 tumor samples from all 10 patients (Figure 2). Identical patterns of genomic loss of heterozygosity among mature, immature, and disseminated components in an ovarian teratoma confirm a single clonal origin We next compared the regions of the genome affected by copy-neutral loss of heterozygosity among the different tumor regions sequenced for each individual patient. In the 5 females with unilateral ovarian disease (patients c, d, e, i, and k), we observed the identical pattern of allelic imbalance across the genome in each of the different tumor components, including immature teratoma, mature teratoma with different histologic patterns of differentiation, and disseminated teratomatous elements in the peritoneum. These results confirm a single clonal origin for all teratomatous components, both in the primary ovarian tumor and disseminated in the peritoneum, for women with unilateral ovarian immature teratomas. Bilateral ovarian teratomas originate independently Four patients in this cohort (b, g, h, and j) had bilateral ovarian teratomas that were both independently sequenced and analyzed for patterns of copy-neutral loss of heterozygosity across the genome. 
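The ΔAF statistic plotted here follows the definition given in the Methods: the absolute deviation of the tumor allele fraction from the theoretical heterozygous value of 0.5 at loci that are heterozygous in the matched normal. A minimal sketch might look as follows.

```python
import numpy as np

def delta_af(tumor_alt_reads, tumor_depth):
    """Absolute deviation of the tumor allele fraction from 0.5 at informative
    (heterozygous-in-normal) loci. Values near 0.5 indicate loss of
    heterozygosity; values near 0 indicate retained heterozygosity."""
    af = np.asarray(tumor_alt_reads, dtype=float) / np.asarray(tumor_depth, dtype=float)
    return np.abs(af - 0.5)
```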
We found that tumors from the left and right ovaries had different patterns of allelic imbalance across the genome in each of the different tumor components studied, providing evidence that bilateral ovarian teratomas originate independently. Furthermore, all of the peritoneal disseminated components harbored a pattern of allelic imbalance that was identical to one of the two ovarian tumors, enabling deduction of the specific ovarian tumor from which the disseminated disease was clonally related. For example, patient h is an 8-year-old girl who initially underwent resection of a 17 cm immature teratoma from the left ovary, and then 9 years later underwent resection of a 16 cm immature teratoma from the right ovary as well as debulking of disseminated disease in the peritoneum (gliomatosis peritonei). The immature teratoma and two mature teratoma regions studied from the left ovary had an identical pattern of allelic imbalance, whereas the immature teratoma and two mature teratoma regions studied from the right ovary had an identical pattern of allelic imbalance that was distinct from the tumor elements in the contralateral ovary. Additionally, the gliomatosis peritonei had an identical pattern of allelic imbalance as the immature and mature teratoma components from the right ovary ( Figure 3). Patterns of genomic loss of heterozygosity in ovarian immature teratomas can be used to deduce meiotic error mechanism of origin Five distinct parthenogenetic mechanisms of origin have been proposed to describe the development of germ cell tumors from unfertilized germ cells, which include nondisjunction errors during meiosis I, nondisjunction errors during meiosis II, whole genome duplication of a mature ovum, and fusion of two ova. Distinct chromosomal zygosity patterns are predicted to result from each of these different mechanisms [25], which are illustrated in Figure 4. We used the chromosomal zygosity patterns from the whole exome sequencing data to deduce the meiotic mechanism of origin for the 15 distinct tumor clones identified in the 10 female patients. Five of the tumor clones were deduced to result from nondisjunction errors during meiosis I, 6 from nondisjunction errors during meiosis II, 3 from whole genome duplication of a mature ovum, and 1 from fusion of two ova ( Table 2). These findings indicate that meiotic abnormalities at multiple stages during germ cell development can contribute to the development of ovarian teratomas. Discussion We present the first multi-region exome sequencing analysis of ovarian immature teratomas including mature, immature, and disseminated components. We report a strikingly low abundance of somatic mutations and infrequent copy number aberrations, without pathogenic mutations identified in any well-described oncogenes or tumor suppressor genes, as well as an absence of any novel genes harboring recurrent mutations across the cohort. We generated high-resolution zygosity maps of ovarian teratomas that deepen understanding of the parthenogenetic mechanisms of origin of ovarian teratomas from primordial germ cells originally proposed nearly 50 years ago [21]. Ovarian teratoma is genetically unique among all human tumor types studied to date given its extremely low mutation rate and extensive genomic loss of heterozygosity. Our findings suggest that meiotic nondisjunction events producing a 2N near-diploid genome with extensive allelic imbalances are responsible for the development of ovarian immature teratomas. 
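A highly simplified sketch of the zygosity-based classification is given below. It reduces the five-mechanism scheme of Surti et al. to three rules (pericentromeric heterozygosity for meiosis I errors, pericentromeric homozygosity with distal heterozygosity for meiosis II errors, genome-wide homozygosity for duplication of a mature ovum); the cutoff value and the omission of the remaining mechanisms, such as fusion of two ova, are assumptions made for brevity.

```python
def infer_meiotic_mechanism(centromeric_het_fraction, distal_het_fraction,
                            het_cutoff=0.1):
    """Crude decision rule based on whether pericentromeric and distal regions
    retain heterozygosity; the cutoff is an assumed illustrative value."""
    centromeric_het = centromeric_het_fraction > het_cutoff
    distal_het = distal_het_fraction > het_cutoff
    if centromeric_het:
        # Non-identical homologous centromeres retained together.
        return "meiosis I nondisjunction"
    if distal_het:
        # Identical sister centromeres; distal heterozygosity from crossovers.
        return "meiosis II nondisjunction"
    return "whole genome duplication of a mature ovum"
```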
Analysis of the multi-region exome sequencing data was used to study the clonal relationship of immature and mature teratoma elements, as well as admixed foci of yolk sac tumor, and also disseminated teratoma in the peritoneum. We find that all these different tumor components are indistinguishable based on chromosomal copy number alterations and loss of heterozygosity patterns, indicating a shared clonal origin. This finding suggests that epigenetic differences are likely responsible for the striking variation in differentiation patterns in teratomas, and also for the development of immature elements in ovarian teratomas. Ovarian immature teratomas may therefore be one of the only human tumor types where epigenetic dysregulation occurring in the absence of additional somatic alterations is responsible for the transformation from a benign to malignant neoplasm. Notably, gliomatosis peritonei is a rare phenomenon in which deposits of mature glial tissue are found in the peritoneum, which principally occurs in association with immature teratoma of the gonads [32, 33], but has also been reported to rarely occur in association with mature teratoma of the gonads, endometriosis, or intracranial gliomas in children with ventriculoperitoneal shunts in the absence of gonadal teratoma [34,35]. Two theories currently exist to explain the origin of gliomatosis peritonei arising in the setting of gonadal teratomas: the first being that it is derived from peritoneal dissemination of teratoma with differentiation into mature glial cells, and the other being spontaneous metaplasia of peritoneal stem cells to glial tissue [36,37]. A prior study of five samples had concluded that gliomatosis peritonei was genetically unrelated to the primary ovarian teratoma based on zygosity analysis of a small number of microsatellite markers [37]. However, our study based on genotyping data from thousands of informative polymorphic loci unequivocally demonstrated that gliomatosis peritonei was clonally related to the ovarian primary immature teratoma in all cases in this cohort, thereby supporting the first theory of origin. While all ovarian and disseminated tumor components in the 5 patients with unilateral disease in this cohort were found to be clonally related, 4 patients had bilateral ovarian teratomas that were independently analyzed and found to have distinct clonal origins. We found that tumors from the left and right ovaries had different patterns of loss of heterozygosity across the genome in each of the different tumor components that were sequenced, providing evidence that bilateral ovarian teratomas originate independently. Additionally, all of the disseminated components in the peritoneum harbored a pattern of allelic imbalance that was identical to one of the two ovarian tumors, enabling assignment of origin to the specific ovarian primary tumor. Why a significant proportion of women with ovarian teratomas also develop genetically independent teratomas in the contralateral ovary (either synchronously or metachronously) remains undefined. Analysis of the constitutional DNA sequence data from the 10 patients in our cohort, 5 of whom had bilateral ovarian teratomas, did not identify pathogenic variants in the germline known to be associated with increased cancer risk. However, the possibility of an unidentified germline risk allele(s) responsible for teratoma development remains a possibility. 
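The assignment of disseminated deposits to one of the two primary tumors rests on comparing genome-wide zygosity patterns. One way to quantify such agreement is sketched below, where the ΔAF cutoff for calling loss of heterozygosity is an assumed value, not one reported in the paper.

```python
import numpy as np

def zygosity_concordance(delta_af_a, delta_af_b, loh_threshold=0.35):
    """Fraction of shared informative loci at which two tumor regions agree on
    zygosity status (LOH vs retained heterozygosity). The 0.35 threshold on
    delta-AF is an assumed cutoff used only for illustration."""
    loh_a = np.asarray(delta_af_a) > loh_threshold
    loh_b = np.asarray(delta_af_b) > loh_threshold
    return float(np.mean(loh_a == loh_b))
```

A disseminated deposit would then be assigned to whichever primary tumor yields the higher concordance, mirroring the pattern-matching argument made above.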
Given the extensive loss of heterozygosity across the genomes of ovarian teratomas, pinpointing any single responsible gene amongst the numerous common regions of allelic imbalance is a significant obstacle. In summary, our multi-region whole exome sequencing analysis of ovarian immature teratomas has revealed that multiple different meiotic errors can give rise to these genetically distinct tumors that are characterized by extensive allelic imbalances and a paucity of somatic mutations and copy number alterations. Supplementary Material Refer to Web version on PubMed Central for supplementary material. Histology images of the ovarian immature teratomas from three representative patients that were studied by whole exome sequencing. Shown are hematoxylin and eosin stained sections illustrating the different tumor regions from the primary ovarian mass as well as disseminated disease in the peritoneum from which genomic DNA was selectively extracted for analysis. a Patient a is a 16-year-old female who underwent resection of synchronous bilateral ovarian immature teratomas and debulking of disseminated peritoneal disease. b Patient e is a 29-year-old female who underwent resection of a unilateral ovarian immature teratoma and debulking of disseminated peritoneal disease. c Patient g is a 25-year-old female who underwent resection of a unilateral ovarian immature teratoma and then four years later underwent resection of a contralateral ovarian mature teratoma (no immature component present). Ovarian immature teratomas are characterized by extensive genomic loss of heterozygosity. Plots of Δallele frequency (ΔAF) were generated from the whole exome sequencing data for each of the 52 tumor regions from 10 patients with ovarian immature teratomas. Identical patterns of genomic loss of heterozygosity among all mature, immature, and disseminated components in ovarian teratomas confirm a single clonal origin, except in females with bilateral tumors. Plots of Δallele frequency (ΔAF) were generated from the whole exome sequencing data for each of the 7 different tumor regions from patient h, an 8year-old girl who initially underwent resection of a 17 cm immature teratoma from the left ovary, and then 9 years later underwent resection of a 16 cm immature teratoma from the right ovary as well as debulking of disseminated disease in the peritoneum (gliomatosis peritonei). While all tumor regions harbored a diploid genome, extensive genomic loss of heterozygosity was observed in each of the different tumor components. The immature teratoma and two mature teratoma regions studied from the left ovary had the identical pattern of allelic imbalance, whereas the immature teratoma and two mature teratoma regions studied from the right ovary shared an identical pattern of allelic imbalance that was distinct from the tumor elements in the contralateral ovary. Additionally, the gliomatosis peritonei had an identical pattern of allelic imbalance as the immature and mature teratoma components from the right ovary. Each point represents one informative polymorphic locus. Points near the top of the y-axis represent single nucleotide polymorphisms that are homozygous in the tumor, whereas points near the bottom of the y-axis are heterozygous. yaxis, ΔAF. x-axis, chromosome. Dotted line, centromere. ΔAF is calculated as the absolute difference between theoretical heterozygosity (AF=0.5). The five proposed genetic mechanisms of origin of ovarian teratomas from a germ cell. 
One homologous chromosome pair undergoing two genetic crossing over events is illustrated for simplicity. Orange arrows depict aberrant outcomes of meiosis. Black arrows depict the normal path through meiosis. Each plot depicts a simulated example of the chromosomal loss-of-heterozygosity pattern that arises from each of the five hypothetical mechanisms of origin, measured by the allele frequency difference of SNPs in the tumor compared to constitutional DNA. Y-axis: the two possible zygosity states in a diploid cell (top = homozygosity, bottom = heterozygosity). X-axis: position along an individual chromosome. Vertical dotted blue line depicts the centromere. For Mechanism V, the homozygosity pattern on each chromosome will vary based on the number and location of crossing over events. Adapted from Surti et al. [25]. Tumor regions studied for each of the 10 patients with ovarian immature teratomas including annotation of chromosomal gains/losses, fraction of genome with loss of heterozygosity, deduced meiosis failure mechanism of origin, and genes harboring somatic mutations.
2020-01-08T15:29:12.875Z
2019-12-16T00:00:00.000
{ "year": 2019, "sha1": "74428775cd664adcb9bfdee29adc3c94ac1798a7", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc7286805?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "ec4b472acd36eee5fb50301afa254c70671a5c5d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
23125671
pes2o/s2orc
v3-fos-license
GAK, a regulator of clathrin-mediated membrane traffic, also controls centrosome integrity and chromosome congression. Cyclin G-associated kinase (GAK) is an association partner of clathrin heavy chain (CHC) and is essential for clathrin-mediated membrane trafficking. Here, we report two novel functions of GAK: maintenance of proper centrosome maturation and of mitotic chromosome congression. Indeed, GAK knockdown by siRNA caused cell-cycle arrest at metaphase, which indicates that GAK is required for proper mitotic progression. We found that this impaired mitotic progression was due to activation of the spindle-assembly checkpoint, which senses protruded, misaligned or abnormally condensed chromosomes in GAK-siRNA-treated cells. GAK knockdown also caused multi-aster formation, which was due to abnormal fragmentation of pericentriolar material, but not of the centrioles. Moreover, GAK and CHC cooperated in the same pathway and interacted in mitosis to regulate the formation of a functional spindle. Taken together, we conclude that GAK and clathrin function cooperatively not only in endocytosis, but also in mitotic progression. Introduction Chromosome instability is a key defect of malignant cancer cells, which are primarily the result of uncontrolled mitosis caused by abnormal spindle formation (multi-aster or mono-aster), aberrant centrosome number, chromosome missegregation and/or failure of cytokinesis (Chi and Jeang, 2007;Fukasawa, 2008;Barr and Gruneberg, 2008). Recent reports indicate that endocytic proteins are also involved in mitotic events. For example, GM130, a Golgicomplex-associated protein, regulates centrosome morphology (Kodani and Sütterlin, 2008), and dynamin, a GTPase enzyme that is active in the early stages of endocytosis, is involved in centrosome separation and cytokinesis (Thompson et al., 2004). Clathrin, a major player in receptor-mediated endocytosis during interphase, but not during mitosis when no endocytosis occurs, is targeted to the mitotic spindle and functions in microtubule stability at the onset of mitosis (Royle et al., 2005). Depletion of clathrin by RNA interference (RNAi) causes abnormal chromosome segregation and activates spindle-assembly checkpoint (SAC)-mediated prometaphase-metaphase arrest. Thus, regulators of clathrin are also expected to play essential roles in mitotic progression. One of the proteins that regulates endocytosis cooperatively with clathrin is cyclin G-associated kinase (GAK), a serine/threonine kinase first identified as an association partner of cyclin G (Kanaoka et al., 1997). In addition to having a kinase domain, GAK is highly homologous with neuron-specific auxilin, which plays a pivotal role in clathrin-dependent trafficking in neural cells. As expected from this structural similarity, the ubiquitously expressed GAK protein localizes to the trans-Golgi network (TGN) and is an essential cofactor for Hsc70-dependent uncoating of clathrin-coated vesicles in many non-neural cells (Korolchuk and Banting, 2002;Kametaka et al., 2007). Indeed, clathrin-mediated endocytosis is partially blocked in GAK-knockdown cells, and the J domain of GAK was found to be important for this event (Zhang et al., 2005). Furthermore, GAK is transiently recruited to clathrin puncta, and this recruitment is dependent on the PTEN-like domain of GAK (Lee et al., 2006). Moreover, GAK phosphorylates Thr156 of the AP-2 μ2 subunit, which is important for its endocytotic activity (Zhang et al., 2005;Olusanya et al., 2001). 
Thus, the functions of GAK in endocytosis are well known . However, considering the subcellular colocalization of both GAK and clathrin, and the association between these molecules, it is expected that GAK also functions in mitosis. This expectation is supported by a recent report showing that GAK and clathrin heavy chain (CHC) localize to both the cytoplasm and nucleus, and that almost all nuclear CHC signals colocalize with GAK, suggesting an important function for GAK in the nucleus (Sato et al., 2009). In the present study, we provide evidence that GAK regulates mitotic progression by demonstrating that siRNA-mediated GAK knockdown caused metaphase arrest and multipolar spindles. We present two novel functions of GAK as a regulator of mitosis: first, GAK maintains centrosome structure, and second, GAK functions cooperatively with clathrin, not only during endocytosis, but also during mitotic progression. GAK is required for mitotic progression To investigate the putative novel function of GAK beyond its role in membrane trafficking, we generated two GAK siRNA constructs, designated Ki5 (#525) (Zhang et al., 2005) and Ki9 (#1309), based on their location in the mRNA sequence of GAK. When HeLa S3 cells were transfected with Ki5 or Ki9, GAK protein expression decreased (Fig. 1A). Transfection with GL2 control siRNA, which knocks down firefly luciferase mRNA, had no effect on GAK protein levels. We also performed reverse-transcriptase (RT)-PCR analysis to determine whether expression of the GAK homologues, AP-2 associated kinase (AAK1) and tensin, was affected by Ki5 and Ki9. As expected, although both Ki5 and Ki9 downregulated GAK mRNA expression compared with the GL2 control (Fig. 1B, lanes 2 and 3), AAK1 and tensin mRNA levels were unaltered by transfection with GL2, Ki5 or Ki9 (Fig. 1B). This result confirmed that Ki5 and Ki9 specifically targeted GAK mRNA. Notably, GAK knockdown that was induced by Ki5 or Ki9 (Ki5/Ki9) caused transfected cells to adopt a round shape, reminiscent of cells at the mitotic (M) phase of the cell cycle (Fig. 1C). This was not observed in the GL2 control cells (Fig. 1C). To determine whether the Ki5/Ki9-treated cells were actually undergoing mitosis, we stained their chromosomes and spindles with Hoechst-33258 dye and anti-α-tubulin antibody, respectively. Indeed, Ki5/Ki9-treated cells had condensed chromosomes and formed mitotic spindles (Fig. 1C). We quantified these observations and found that there were significantly more mitotic cells in the GAK-knockdown cell populations than in the GL2 control (Fig. 1D). Importantly, the point at which the cells were arrested was strongly restricted to prometaphase-metaphase (Fig. 1C,D). Because Ki9 altered the percentage population of prometaphase-metaphase cells more efficiently than Ki5 (Fig. 1D), we used the Ki9 construct to deplete GAK protein in all subsequent experiments. We found that Ki9 siRNA also increased the levels of cyclin B1 (CCNB1) and the phosphorylation of histone H3 at Ser10 (Fig. 1E). Moreover, fluorescence-activated cell sorting (FACS) analysis also indicated that Ki9 treatment caused an increased proportion of G2-M cells (supplementary material Fig. S1A), which confirmed that the Journal of Cell Science 122 (17) Western blot analysis was performed using α-tubulin as a loading control. GL2 indicates a negative control siRNA. The Ki9 construct depleted GAK protein more efficiently than Ki5. An arrow and an asterisk indicate a band for GAK and an uncharacterized protein, respectively. 
(B) RT-PCR indicated that both Ki5 and Ki9 downregulate GAK expression. Expressions of AAK1 and tensin, which are homologous to GAK, were also examined to show that Ki5 and Ki9 downregulate GAK specifically and exclusively. GAPDH was used as a loading control. (C) GAK knockdown induces mitotic arrest. Microscopy images reveal that GAK knockdown increased the frequencies of round cells (as seen by DIC imaging), condensed chromosomes (as seen with Hoechst-33258 staining; blue) and mitotic-spindle formation (as seen with anti-α-tubulin; red). Scale bar: 10 μm. (D) GAK-knockdown cells show an altered frequency of mitotic cells. More than 300 cells were counted. The error bars show the standard deviation. The bar graphs were generated using the data from three independent experiments. (E) GAKknockdown cells have elevated cyclin-B1 levels and increased phosphorylation of histone H3 at Ser10, as determined by western blot analysis. GAK-knockdown cells are mostly mitotic. These data suggest that GAK is required for proper mitotic progression. To further investigate the mitotic defects of GAK-knockdown cells, we monitored the round mitotic cells for a 2-hour period (mitosis and cytokinesis are usually completed within 1 hour). In the control GL2 cells, 27 of 32 round mitotic cells completed cytokinesis within 1 hour (representative images of a typical cell are shown in the top panels of supplementary material Fig. S1B). In this cell, the metaphase plate formed 11 minutes 57 seconds (11:57 minutes) after nuclear-envelope breakdown, after which the cells quickly entered cytokinesis; chromosome separation and cleavage-furrow formation were observed at 41:57 and 47:57 minutes, respectively. By contrast, only one of 44 GAK-knockdown cells entered cytokinesis, and none displayed chromosome separation or cleavage-furrow formation during the 2-hour observation period (middle and bottom panels of supplementary material Fig. S1B). Notably, the GAK-knockdown cells showed chromosome movement without separation. These observations of live cells provide further evidence that GAK is required for proper mitotic progression. GAK depletion activates the spindle assembly checkpoint Because Ki9-treated cells were rarely in anaphase or telophase, we speculated that SAC was activated in these cells. To examine this, we stained cells with an antibody against BubR1, an important component of SAC (Sudakin et al., 2001;Tang et al., 2001), and found that anti-BubR1 antibody signals were observed from prophase to prometaphase in both GL2 and GAK-knockdown cells (data not shown). However, whereas GL2 cells lost the BubR1 signal after successful chromosome alignment during metaphase (top panels in Fig. 2A), Ki9-treated cells retained the BubR1 signal, probably because the misaligned chromosomes maintained SAC in an activated state (arrowhead in Fig. 2A); this phenotype is similar to that of CHC-knockdown cells (Royle et al., 2005). We found the The expression of the mitotic proteins cyclin B1 and securin was also determined. α-tubulin served as a loading control. Arrows, arrowhead and asterisks indicate bands for GAK or securin, BubR1, and an uncharacterized protein, respectively. α indicates anti. (C) Concomitant depletion of BubR1 permits the resumption of proliferation of GAK-depleted cells. Cells were plated in twelve-well plates at an equal density, transfected with GAK and/or BubR1 siRNAs, harvested, and counted on the indicated days to determine the cellular growth rates. 
(D) Concomitant depletion of BubR1 was used to normalize the reduced population of prometaphase-metaphase cells in GAK-knockdown cells. Arrow or arrowhead indicates the mitotic index of GAK (Ki9) or BubR1 (si-BR1) cells, respectively. The data in C and D were from three independent experiments. In each experiment, more than 400 cells were scored for mitotic indices. (E) GAK-knockdown cells exhibit three kinds of abnormal metaphase plates. Representative immunofluorescence images of GL2 control and GAK-knockdown cells reveal that GAKknockdown cells exhibit three kinds of defects in metaphase chromosomes, namely protruding, misaligned or abnormally condensed chromosomes. (F) Bar graph showing the frequency of these defects. The bar graph was generated using data from three independent experiments. In each experiment, more than 54 GL2-treated cells and more than 200 Ki9-treated cells were scored. The errors bars in C, D and F show the standard deviation. Scale bars: 10 μm. localization of other kinetochore-related markers, such as CENP-A, Aurora-B and survivin, to be normal in Ki9-treated cells (supplementary material Fig. S2A,B). In order to confirm that the prevention of mitotic progression in GAK-knockdown cells is due to retained SAC activation, we depleted GAK (Ki9) and/or BubR1 (si-BR1) by siRNA. If the metaphase arrest of GAK-knockdown cells is due to SAC activation, knockdown of BubR1 would allow these cells to continue through mitosis. Indeed, when GAK alone was depleted (lane 2 of Fig. 2B), a portion of the BubR1 proteins in Ki9-treated cells migrated more slowly through a polyacrylamide gel [see arrowhead in anti (α)-BubR1 panel of Fig. 2B] than proteins extracted from GL2 cells (lane 1 in α-BubR1 panel of Fig. 2B); this implies that at least a portion of the BubR1 proteins remained phosphorylated in GAK-knockdown cells. Because activated BubR1 is phosphorylated (Chan et al., 1999), this observation indicates that GAK depletion caused the activation of BubR1. Similarly, whereas cells transfected with GL2 control alone or together with BubR1 showed low levels of the mitotic proteins cyclin B1 and securin (lanes 1 and 3 of α-cyclin B1 and α-securin panels in Fig. 2B), which are known to be degraded in anaphase (Musacchio and Salmon, 2007), high levels of both of these proteins were observed in Ki9-treated cells (lane 2, Fig. 2B). By contrast, when both GAK and BubR1 were knocked down, low levels of cyclin B1 and securin were detected (lane 4, Fig. 2B). When we measured the proliferation rate of the single-knockdown cells over 3 days, we found that Ki9-treated cells grew very slowly, whereas GL2 control and BubR1-knockdown cells grew at normal rates (Fig. 2C). However, when both GAK and BubR1 were knocked down, the proliferation rate returned to the normal level (Fig. 2C, turquoise curve). To confirm that this recovery was due to release from mitotic arrest, we determined the mitotic indices of the different cell types and examined the levels of relevant mitotic proteins. As expected, Ki9-treated cells harboured increased mitotic indices due to arrest in metaphase (arrow in Fig. 2D), whereas GL2 control and BubR1-knockdown cells showed normal mitotic indices. By contrast, the percentage of prometaphase-metaphase cells after GAK and BubR1 double knockdown was almost normal (arrowhead in Fig. 2D). These phenotypes were also confirmed by FACS analysis (supplementary material Fig. S1A). 
Because the SAC was activated in Ki9-treated cells, we next observed the metaphase plates and found them to be abnormal in many of these cells (Fig. 2E). Three kinds of defects were detected in the metaphase chromosomes, namely, chromosomes were protruded, misaligned or abnormally condensed. As shown in Fig. 2F, these abnormal cell types occurred at almost equal frequencies in Ki9-treated cells, whereas these abnormalities were rarely observed in GL2-treated cells. Because these phenotypes were also observed in Ki5-treated cells, these abnormalities were not off-target effects of the Ki9 siRNA construct. Taken together, these results suggest that the impaired mitotic progression resulting from GAK knockdown is due to SAC activation associated with defective chromosome congression and alignment. GAK knockdown causes multi-aster formation When we immunostained Ki9-treated cells with anti-γ-tubulin antibody to detect centrosomes and anti-α-tubulin antibody to detect spindles, we found that many of the these cells harboured additional asters; i.e. they carried more than two centrosomes (or γ-tubulin foci) from which spindles (composed of α-tubulin) extended radially (Fig. 3A). More than 50% of the Ki9-treated mitotic cells had more than two γ-tubulin foci, whereas less than 5% of the GL2treated cells in mitosis displayed this abnormality (Fig. 3B). Multipolar spindles can arise either from over-replication of the centrosomes or from defects in cytokinesis. To establish the primary cause of this defect, we determined in which stage of the cell cycle an abnormal number of γ-tubulin foci first appeared. For this purpose, we chemically fixed Ki9-treated cells at various time points (Fig. 3C), immunostained the cells with anti-γ-tubulin antibody and counted the percentage of cells harbouring more than two γ-tubulin signals in interphase (39, 42 and 45 hours) and mitosis (48 hours). We found that most Ki9-and GL2-treated cells had only one or two γ-tubulin signals during interphase (Fig. 3C), indicating that GAK depletion did not trigger abnormal centrosome amplification or cytokinesis, and that Ki9-treated cells entered mitosis with two γ-tubulin signals, as is normal. By contrast, more than 50% of Ki9treated cells had additional asters 48 hours after Ki9 treatment, when cells are expected to be at the M phase of the cell cycle; only about 5% of GL2-treated cells harboured such multi-asters (rightmost bars, Fig. 3C). We next measured the intensity of the integrated γ-tubulin signal, because if the centrosome was fragmented, the signal intensity of Ki9 cells might have become weaker than that of the control cells. Indeed, when GL2-or Ki9-treated cells were probed with γ-tubulin 48 hours after siRNA treatment, we found that the integrated γtubulin intensity of Ki9-treated cells carrying more than two γtubulin foci was much lower than that of GL2-treated cells (Fig. 3A,D). Moreover, the integrated γ-tubulin intensity of Ki9-treated cells having only two foci was also slightly lower than that of GL2treated cells (supplementary material Fig. S3A), which indicates that centrosome maturation was abnormal. Because the centrosome was composed of two centrioles surrounded by pericentriolar material (PCM), we next examined whether multiple centrioles were also observed in Ki9-treated cells during mitosis. We found that additional centrioles were rarely observed in Ki9-treated cells harbouring extra γ-tubulin foci; this suggests that the PCM, but not the centrioles, was fragmented (supplementary material Fig. 
S3B,C). Because centrosomes are subjected to microtubule-dependent pulling and pushing forces from various directions during mitosis, we surmised that these forces caused the fragmentation of PCM in Ki9-treated cells. To explore this possibility, we determined the number of γ-tubulin foci in the presence of Taxol (paclitaxel; a tubulin-depolymerization inhibitor) using immunofluorescence microscopy. When microtubule dynamics in Ki9-treated cells were perturbed and the forces generated by the mitotic spindle towards the centrosome were attenuated by Taxol, PCM fragmentation was blocked and the number of γ-tubulin foci was normal (i.e. two were present) (Fig. 3E). These results indicate that microtubule-mediated forces caused PCM fragmentation in Ki9-treated cells. GAK and CHC function cooperatively in mitotic progression Next, we examined the reasons for the other phenotypes of Ki9treated cells, i.e. misaligned and abnormally condensed chromosomes (Fig. 2E,F). It is reported that phosphorylation of CENP-A (phosphorylated on Ser7; pS7) by Aurora-A and Aurora-B is important for proper chromosome alignment (Kunitoku et al., 2003). Because Aurora-A and Aurora-B are expressed normally in Ki9-treated cells (data not shown), we examined the phosphorylation state of CENP-A. Western blot analysis indicated that the amount of phosphorylated CENP-A was higher in Ki9-treated cells than in GL2-treated cells (supplementary material Fig. S4A); namely, phosphorylation of CENP-A was normal in Ki9-treated cells. This was confirmed by indirect immunofluorescence using CENP-A pS7 (supplementary material Fig. S4B). CHC is known to be an essential factor for mitotic progression, and CHC knockdown produces an abnormal metaphase plate with misaligned and abnormally condensed chromosomes (Royle et al., 2005). Because these phenotypes are similar to those of GAKknockdown cells, we surmised that CHC and GAK act closely together in the pathway that leads to mitotic progression. Because the percentage reduction of prometaphase-metaphase cells after CHC knockdown was not as aberrant as that of GAK-knockdown cells (compare Fig. 1 with Fig. 4G), we speculated that GAK functions epistatically upstream of CHC. To test this, we first examined the localization of CHC in GAK-knockdown cells and found that CHC completely colocalized with α-tubulin in control cells during metaphase (Fig. 4A, upper panels). This result is consistent with a previous report (Royle et al., 2005). By contrast, CHC diffused away from the mitotic spindle into the cytoplasm in GAK-knockdown cells (Fig. 4A, lower panels), whereas GAK localization was not affected even when CHC was knocked down (supplementary material Fig. S5). This abnormality was confirmed by determining the frequency of cells showing such diffusion (Fig. 4B). Where does GAK localize during mitosis? To answer this, we established a HeLa S3 cell line that constitutively expresses pEGFP-GAK (Fig. 4D), which showed that GAK colocalizes with CHC at the mitotic spindle ( Fig. 4C; supplementary material Fig. S6). Moreover, this GAK localization was not affected in CHC-depleted cells (supplementary material Fig. S5). We next performed immunoprecipitation to confirm associations between GAK and CHC during mitosis. Indeed, pEGFP-GAK associated with CHC at interphase, which is consistent with our previous report that GAK associates with CHC during interphase (Sato et al., 2009). Here, we also found that GAK associated with CHC during mitosis (Fig. 4E). 
These observations suggest that GAK and CHC cooperate in the same pathway to regulate the formation of a functional spindle. If this interpretation is correct, the GAK and CHC double-knockdown cells would have a similar percentage of prometaphase-metaphase cells as the GAK single-knockdown cells. To test this, we depleted the cells of GAK and CHC proteins (Fig. 4F, arrow and arrowhead in upper two rows) and examined the protein levels of cyclin B1 and Plk1, another M-phase marker (Petronczki et al., 2008). As expected, we found that the cyclin-B1 level was almost equal between GAK and CHC double-depleted cells and Ki9-treated cells (Fig. 4Fii). Moreover, cells depleted of GAK alone or together with CHC showed similar mitotic indices (Fig. 4G). From these results, we conclude that GAK and CHC act cooperatively in the same pathway to regulate proper spindle assembly: GAK seems to function upstream of CHC. Fig. 3 (continued). (B) The frequency of cells harbouring extra γ-tubulin foci is shown. More than 50 GL2-treated cells and more than 100 Ki9-treated cells were scored. The bar graph was generated using data from three independent experiments. (C) The frequency of abnormal cells that harbour more than two γ-tubulin foci at the indicated times. In interphase (39, 42 and 45 hours after Ki9 treatment), both GL2- and Ki9-treated cells were normal. In mitotic cells (at 48 hours), only Ki9-treated cells had a greater than 50% frequency of abnormal cells. In each experiment, more than 100 cells were scored. (D) Comparison of integrated intensity of immunofluorescence by probing with anti-γ-tubulin antibody and using Metaview software. GL2-treated (n=14) or Ki9-treated (n=51) cells were scored. The dot graph shows the average ± s.e. values of these measurements, which were statistically significant when GL2- and Ki9-treated cells were compared (P<0.01). (E) Frequency of cells, treated with Ki9 alone or with Ki9 + Taxol, that harboured extra γ-tubulin foci. Cells were transfected with GL2 or Ki9; after 33 hours, cells were incubated in the presence of 33 nM Taxol for 15 hours. Then, cells were fixed and immunofluorescence was performed. More than 100 Ki9-treated cells and more than 200 cells treated with Ki9 + Taxol were scored. The data are from three independent experiments. The error bars in B and E show the standard deviation. Discussion We showed here that knockdown of GAK results in mitotic arrest during metaphase, as does knockdown of CHC (Royle et al., 2005). Although GAK-knockdown or -knockout experiments have already been performed in several laboratories (Zhang et al., 2005; Lee et al., 2008), the mitotic arrest caused by GAK depletion has not been described. This is partly because endocytotic experiments were performed before mitotic arrest was observed in these studies, or because vector-based small hairpin RNA was employed after antibiotic selection. In the latter case, it is surmised that some unknown factors complemented the GAK functions during the selection of transfected cells. Why, then, is the percentage of prometaphase-metaphase cells in the Ki9-treated cell population higher than that in CHC-knockdown cells? Interestingly, CHC-knockdown cells showed increased Plk1 protein levels compared with those in GAK-knockdown cells despite having lower mitotic indices than those of GAK-knockdown cells (Fig. 4Fi, lane 3; and Fig. 4Fii).
Thus, one possible explanation is that Plk1 is involved in SAC signalling at the kinetochore and that its depletion causes defects in mitotic structure and SAC-mediated prometaphase-metaphase arrest. Indeed, the Plk1 protein level at the kinetochore is reduced in Ki9-treated cells (supplementary material Fig. S6A,B), suggesting that this is why the percentage of prometaphase-metaphase cells of Ki9-treated cells is higher than that of CHC-knockdown cells. This hypothesis is consistent with previous results showing that Plk1 knockdown results in SAC-mediated metaphase arrest (Xie et al., 2005). Identification of the downstream target of Plk1 at the kinetochore should be an important subject of future research. Fig. 4. GAK functions upstream of CHC. (A) Immunofluorescence analysis probed with anti-α-tubulin and anti-CHC antibodies indicates that CHC was mislocalized in GAK-depleted cells. (B) Frequency of cells in which CHC and α-tubulin signals failed to colocalize completely. The bar graph represents the average value of three independent experiments. More than 50 GL2-treated cells and more than 100 Ki9-treated cells were scored in each experiment. (C) Localization of pEGFP-GAK during mitosis. GL2 or Ki9 were introduced into HeLa S3 cells that constitutively expressed pEGFP-GAK. Then, cells were subjected to immunofluorescence using anti-GFP and anti-γ-tubulin antibodies. (D) Ki9, but not GL2, treatment abolished the pEGFP-GAK band from HeLa S3 cells constitutively expressing pEGFP-GAK. The western blot was probed with anti-GFP antibody. Anti-α-tubulin antibody was also used as a loading control. (E) Association of pEGFP-GAK with CHC not only at interphase, but also during mitosis. To collect mitotic cells, pEGFP-GAK-expressing cells were treated with Taxol for 15 hours. Then, cells were collected and western blot analysis was performed for whole-cell extract (WCE) and immunoprecipitant (IP). Asyn., asynchronized cells; M, mitotic cells; Vec., vector control. (Fi) Western blot analysis showing the successful depletion of CHC and GAK proteins. Arrow, arrowhead and asterisk indicate the bands for GAK, CHC and an uncharacterized protein, respectively. (Fii) The bar graph represents the relative intensity of the denoted band, which was calculated by comparing its intensity with that of the loading control (α-tubulin). (G) Depletion of CHC in GAK-knockdown cells did not alter the percentage of prometaphase-metaphase cells. Each percentage value of mitotic cells was scored by means of an immunofluorescence assay. To identify the mitotic cells, α-tubulin and chromosomes were stained. The bar graph represents the average value of three independent experiments; in each experiment, more than 500 cells were scored. The error bars in B and G show the standard deviation. Scale bars: 10 μm. Indirect immunofluorescence analyses HeLa S3 cells were fixed by sequential incubations at room temperature in 3.7% formaldehyde in PBS, 0.1% Triton X-100 in PBS, and 0.05% Tween-20 in PBS, each for 10 minutes. For centrin staining, cells were fixed at -20°C in methanol for 10 minutes and washed with PBS(-). Then, they were incubated with primary antibody for 3 hours at room temperature, followed by incubation with Alexa-Fluor-594 and -488 (Molecular Probes, Eugene, OR)-conjugated anti-rabbit/mouse immunoglobulin G in TBST (100 mM Tris-Cl, pH 7.5, 150 mM NaCl, 0.05% Tween-20) containing 5% FBS. DNA was stained with Hoechst 33258 (Sigma).
The stained cells were observed with the confocal laser scanning microscope LSM510 (Zeiss) or a BX51 microscope (Olympus).
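The integrated-intensity comparison summarised in Fig. 3D (per-cell γ-tubulin signal quantified in Metaview and reported as mean ± s.e. with P<0.01) amounts to averaging per-cell measurements and testing the difference between groups. The sketch below illustrates that kind of summary on synthetic numbers; the arrays, the distributions behind them and the use of Welch's t-test are illustrative assumptions, not the paper's data or exact statistical procedure.

# Illustrative only: synthetic per-cell integrated intensities standing in for
# values exported from image-analysis software; not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gl2 = rng.normal(loc=100.0, scale=15.0, size=14)  # e.g. n=14 control cells
ki9 = rng.normal(loc=60.0, scale=20.0, size=51)   # e.g. n=51 knockdown cells

for name, x in (("GL2", gl2), ("Ki9", ki9)):
    print(f"{name}: mean = {x.mean():.1f} +/- {stats.sem(x):.1f} (s.e.)")

# Welch's t-test (an assumed choice) for the group difference
t, p = stats.ttest_ind(gl2, ki9, equal_var=False)
print(f"P = {p:.3g}")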
2018-04-03T05:23:55.994Z
2009-09-01T00:00:00.000
{ "year": 2009, "sha1": "24fbf313b14246c1680fe8f33b945a62122fe533", "oa_license": "CCBY", "oa_url": "http://jcs.biologists.org/content/joces/122/17/3145.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "82dbadb62f4c831ec0aa3ed125bdfd9206048ea7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
85718657
pes2o/s2orc
v3-fos-license
Interorgan Signaling following Pollination in Carnations A BSTRACT . Following a compatible pollination in carnation ( Dianthus caryophyllus L. ‘White Sim’), a signal that coordinates postpollination events is translocated from the style to the ovary and petals. In this paper the roles of ethylene and its direct precursor, 1-aminocyclopropane-1-carboxylic acid (ACC), in this signaling were investigated. Following pollination, ethylene and ACC increased sequentially in styles, ovaries, and petals. Ethylene and ACC were highest initially in the stigmatic region of the style but by 24 hours after pollination were highest in the base. Activity of ACC synthase correlated well with ethylene production in styles and petals. In ovaries, ACC synthase activity decreased after pollination despite elevated ethylene production. Lack of ACC synthase activity in pollinated ovaries, coupled with high ACC content, suggests that ACC is translocated within the gynoecium. Further, detection of propylene from petals following application to the ovary provided evidence for movement of ethylene within the flower. Experiments that removed styles and petals at various times after pollination suggest there is a transmissible pollination signal in carnations that has reached the ovary by 12 hours and the petals by 14 to 16 hours. pollination to the ovary and petals. Reports of sequential increases in ACC and ethylene within carnation floral organs following pollination have suggested that either may be involved in interorgan signaling in carnations (Jones and Woodson, 1997;Nichols, 1977;Nichols et al., 1983;Woltering et al., 1995;Woodson et al., 1992). Reid et al., (1984) provided evidence for ACC translocation in carnations by measuring the evolution of radiolabelled ethylene from petals after application of radiolabelled ACC to the stigma. In moth orchid (Phalaenopsis Blume sp.) flowers, the perianth produces ethylene following pollination despite the absence of detectable ACC synthase mRNAs or activity (O'Neill et al., 1993). These observations, coupled with the fact that increases in ACC oxidase transcripts and activity are detected in the petals following pollination provides evidence for translocation of ACC from the gynoecium to the perianth (O'Neill et al., 1993). O'Neill et al., (1993) suggested that this ACC is merely a substrate for ethylene biosynthesis and that ethylene is the actual pollination signal perceived by the various organs. This hypothesis is based in part on evidence that the induction of ACC oxidase mRNAs in the perianth following pollination is dependent on ethylene (O'Neill et al., 1993). In carnations we have recently demonstrated that pollination induces increases in ACC synthase and ACC oxidase transcripts in the styles and petals Woodson 1997, 1999;Woodson et al., 1992). In contrast, ACC synthase transcripts are not up regulated by pollination in the ovary despite significant pollination-induced increases in ethylene evolution (Jones and Woodson, 1999). To determine if ACC is synthesized in the ovary by an unidentified ACC synthase gene or if ethylene biosynthesis in the ovary relies on translocation of ACC from other floral organs, ACC synthase activity needs to be measured in the ovary following pollination. In carnations, we have demonstrated previously that ethylene perception in the pollinated style is required for propagation of the pollination signal to the petals, and that ethylene biosynthetic genes in the ovary and petals are regulated by ethylene Woodson, 1997, 1999). 
In light of this evidence we hypothesize that ethylene is the primary translocated signal in pollinated carnations. To begin to understand the role of ethylene in interorgan signaling, it is necessary to determine where in the flower ethylene is synthesized and which floral organs rely on translocated ACC for ethylene production. In this paper we report ethylene production rates, accumulation of the ethylene precursor ACC, and activity of ACC synthase in styles, ovaries, and petals following pollination. Through a series of dissection experiments we have also demonstrated the The phytohormone ethylene has been implicated in the regulation of flower senescence (Burg and Dijkman, 1967;Kende and Baumgartner, 1974;Mayak et al., 1977). In carnation (Dianthus caryophyllus) flowers, senescence is characterized by a climacteric increase in ethylene biosynthesis that coincides with the first visual symptoms of senescence, petal inrolling (Borochov and Woodson, 1989;Nichols, 1966Nichols, , 1971. In higher plants, ethylene is synthesized from methionine via a pathway involving the conversion of Sadenosylmethionine (SAM) to 1-amino-cyclopropane-1-carboxylic acid (ACC) and the oxidation of ACC to ethylene . The enzyme ACC synthase converts SAM to ACC while ACC oxidase catalyzes the conversion of ACC to ethylene. ACC synthase is generally considered to represent the rate-limiting step in the ethylene biosynthetic pathway (Kende, 1993;Yang and Hoffman, 1984). In many flowers, pollination accelerates ethylene biosynthesis and developmental changes observed during the natural senescence of unpollinated flowers (Stead, 1992). While petal wilting and abscission are the most visual symptoms of pollination, an increase in ethylene biosynthesis from the stigma is the first detectable postpollination event in many species. This ethylene production occurs within a few hours after pollination, before pollen germination (Hoekstra and Weges, 1986;Larsen et al., 1995;Nichols, 1977;Nichols et al., 1983;O'Neill et al., 1993;Pech et al., 1987). The nature of the pollen-pistil interaction that induces this ethylene biosynthesis is unclear, but it is thought to coordinate postpollination development. Flowers that are insensitive to ethylene due to treatment with ethylene action inhibitors or the expression of a mutated ethylene receptor (etr1-1), do not exhibit pollinationinduced corolla senescence (Jones and Woodson, 1997;O'Neill et al., 1993;Wilkinson et al., 1997). The induction of physiological and biochemical processes at sites distal to the site of pollen perception suggest a translocated signal which precedes the growing pollen tube signals a compatible existence of a translocated pollination signal in carnations and determined the timing of this signal. The relatively short life of cut flowers and flowering potted plants reduces their commercial value, restricts markets, and restricts the number of species in production. The postproduction quality of these crops is limited by flower senescence. While some flowers senesce in an ethylene independent manner, in many flowers of horticultural significance senescence is regulated by ethylene. Experiments similar to ones reported in this paper serve to increase our understanding of how ethylene regulates flower senescence and thereby provide information that can be used to improve the postproduction quality of high value flower crops. Materials and Methods PLANT MATERIAL. Greenhouse-grown carnations (Dianthus caryophyllus 'White Sim') were used in all experiments. 
Mature flowers were harvested at anthesis when the styles were elongated and receptive to pollination. 'White Sim' flowers were pollinated by brushing the stigmatic surface with pollen from freshly dehiscent anthers of 'Starlight' carnations. It has been shown previously in our laboratory that crosses between 'Starlight' and male sterile 'White Sim' carnations result in production of viable seed and induce premature corolla senescence (Larsen et al., 1995). TREATMENT OF OVARIES WITH PROPYLENE. The ethylene analog propylene was used to demonstrate that a gaseous molecule could be translocated between the gynoecium and the petals of carnations. Propylene was applied into the carnation ovary at a concentration of 10 µL·L -1 via a continuous flow system. Ten microliters per liter of propylene is not a biologically active concentration, but can be detected by gas chromatography and distinguished from endogenously produced ethylene. The continuous flow system consisted of a jar of 10 µL·L -1 propylene containing coils of tubing that were gas permeable. This tubing was then connected on both ends to lengths of Tygon tubing which exited the jar. One end of the tubing terminated in a needle that was injected into the locule air space of the ovary. The other end of the tubing was used to initiate the flow of propylene by applying air briefly to the tubing and then clamping off the end. The propylene was then allowed to flow passively by diffusion into the carnation ovary. ETHYLENE MEASUREMENTS. To determine the contribution of individual floral organs to pollination-induced ethylene production by the flower, the rate of ethylene production from stigma/styles, ovaries, and petals was measured. Hereafter the term style will be used to refer to a floral organ that includes the style and the stigmatic region, a tissue that is not morphologically distinct in carnations. At various times after pollination, flowers were dissected and floral organs were enclosed in 6-mL vials. Vials were capped with septa, incubated for 15 min, and a 1-mL gas sample was withdrawn for analysis of ethylene. For ethylene determination, samples were injected into a gas chromatograph with an activated alumina column (Varian, Walnut Creek, Calif.). Floral organs from control (unpollinated) flowers were collected and analyzed similarly. Propylene evolution from individual petals was also determined using gas chromatography. To measure ethylene production from style sections, styles were removed from flowers at various times after pollination and cut into three sections. Top, middle, and bottom sections were then sealed in 6-mL vials for ethylene analysis. Each experiment presented in this paper utilized a replication of six flowers per pollination or control time point. For each flower all styles from the flower and four petals per flower were collected for ethylene measurements. Ethylene biosynthesis experiments were repeated three times within a 3-month period with similar results. Only the results of the first experiment are presented. Graphed values represent mean ethylene production ± SE for the replications. Following ethylene measurements, tissues were weighed, quick frozen in liquid nitrogen, and stored at -80 °C until they were used for ACC and ACC synthase assays. ACC AND ACC SYNTHASE ASSAYS. ACC accumulation was measured using the method of Lizada and Yang (1979). The concentration of ACC (nmol) was determined by comparing values to an ACC standard. 
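For reference, the vial measurements described above reduce to a simple rate calculation: headspace ethylene concentration multiplied by the vial volume, divided by tissue fresh weight and incubation time. The sketch below shows that arithmetic for the 6-mL vials and 15-min incubations used here; the concentration and fresh-weight values are hypothetical, and the small volume occupied by the tissue is ignored.

# Minimal sketch, assuming a headspace reading in uL/L (equivalent to nL/mL)
# and ignoring the tissue volume inside the vial.
def ethylene_rate(conc_uL_per_L, vial_mL=6.0, fw_g=0.1, incubation_h=0.25):
    """Return ethylene production in nL per g fresh weight per h."""
    total_nL = conc_uL_per_L * vial_mL       # nL of ethylene in the vial headspace
    return total_nL / (fw_g * incubation_h)  # nL g-1 h-1

print(ethylene_rate(0.5))  # ~120 nL g-1 h-1 for a hypothetical 0.1 g sample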
ACC synthase activity was assayed as described by Yu et al., (1979) with modifications. Carnation tissue was powdered in liquid N 2 and homogenized with a mortar and pestle in extraction buffer containing 100 mM HEPES-KOH (pH 8.0), 4 mM dithiothreitol (DTT), 5 µM pyridoxal phosphate (PLP), and 30% (v/v) glycerol. The extract was centrifuged at 10,000 g n for 10 min, and then filtered through two layers of cheese cloth. The supernatant was dialyzed overnight, with one change of buffer, in 2 L of dialysis buffer containing 10 mM HEPES-KOH (pH 8.0), 0.2 mM DTT, and 5 µM PLP. After dialysis to remove endogenous ACC, 400 mL of extract was incubated with 100 mL of 200 mM SAM, and 100 mL of assay buffer (assay buffer was 100 mM HEPES-KOH, 4 mM DTT, and 5 µM PLP) for 1 h at 30 °C. Control reactions had 100 mL of water in place of SAM. The reactions were stopped on ice and 500 mL of the enzyme extract was then used in the ACC assay. One unit of enzyme activity was defined as that which converted 1 nmol of SAM to ACC per h at 30 °C. POSTPOLLINATION RESPONSES IN CARNATIONS. Ethylene production is first detectable from the gynoecium before visual symptoms of senescence. In pollinated carnation styles, ethylene biosynthesis can be defined temporally by three peaks (Fig. 1). This pattern of stylar ethylene biosynthesis has been described previously (Jones and Woodson, 1997;Larsen et al., 1995), but is included in this paper to provide direct comparison among ethylene, ACC, and ACC synthase activity within the same flowers. The first small peak of ethylene production occurs from 1 to 4 h after pollination. The second burst of ethylene from pollinated styles peaks at 12 h after pollination, and this is followed by a sustained peak of ethylene production from 24 to 48 h after pollination. At 36 h after pollination, ethylene production was >800 nL·g -1 ·h -1 . Ethylene production by unpollinated control styles was below 15 nL·g -1 ·h -1 throughout the experiment. Measurable increases in ACC were detected by 1 h after pollination in styles, and peaks of ACC were measured at 12 and 36 h after pollination corresponding to the peaks of ethylene production. Control styles showed no significant increase in ACC content from 0 to 72 h. Increased ACC synthase activity following pollination slightly preceded the increases in ACC. In pollinated styles, the early ACC synthase activity measured at 6 h after pollination, corresponding to the second peak of ethylene, was significantly greater than the activity during the later peak. Low levels of ACC synthase activity (2.1 nmol·g -1 ·h -1 and below) were detected in unpollinated styles. In ovaries isolated from carnations at various times after pollination, elevated ethylene production was first detected at 6 h after pollination (Fig. 2). This ethylene production increased to 120 nL·g -1 ·h -1 at 24 h after pollination and then declined steadily. Ethylene production from control (unpollinated) ovaries remained below 5 nL·g -1 ·h -1 . Pollination-induced increases in ACC were also first measured at 6 h after pollination. ACC continued to increase and peaked at 48 h. Levels of ACC detected in styles and ovaries were similar despite seven fold lower ethylene production rates by ovaries. No significant increase in ACC was measured from 0 to 72 h in unpollinated ovaries. At 0 h ACC synthase activity in ovaries was 0.7 nmol·g -1 ·h -1 . After pollination, the activity of ACC synthase in ovaries decreased to below 0.2 nmol·g -1 ·h -1 from 3 to 72 h. 
Unpollinated control ovaries had significantly higher ACC synthase activity than ovaries from pollinated flowers but showed no increase or decrease in activity from 0 to 72 h. In petals from pollinated carnations, ethylene production was first detected at 24 h after pollination, just before inrolling of the petal margins (Fig. 3). This ethylene production was sustained until the last measurement at 72 h after pollination. By this time the corollas had wilted. As was observed in ovaries, ethylene production by the petals following pollination was significantly less than that of styles on a fresh weight basis. Ethylene production by control petals was barely detectable from 0 to 72 h, with the highest rate of 3.7 nL·g-1·h-1 measured at 72 h. Accumulation of ACC peaked at almost 2 nmol·g-1 at 48 h after pollination. This level of ACC was only one-third and one-fourth that measured from styles and ovaries, respectively. No significant increase in ACC was measured in control petals. The rise in ACC synthase activity preceded the increase in ACC, peaking at 24 h after pollination. ACC synthase activity was detected in unpollinated petals but remained below 0.5 nmol·g-1·h-1. SPATIAL PRODUCTION OF ACC AND ETHYLENE WITHIN POLLINATED STYLES. To investigate the style as a potential source of ACC substrate for ethylene biosynthesis in the ovary, the spatial distribution of ACC and ethylene production within the style was determined by dissecting the style into three sections (top, middle, and bottom). The top section contained the stigmatic surface where pollen was applied. Figure 4A shows ethylene production from top, middle, and bottom style sections expressed as a percent of the ethylene production by the entire pollinated style. At 0 h (unpollinated control styles), ethylene production was slightly higher in top sections, but by 3 h after pollination the top section of the style was producing >60% of the total stylar ethylene production. At 12 h, the middle section produced the most ethylene, and by 24 h after pollination, >80% of the pollinated styles' ethylene production was from the base of the style. A similar trend in ACC was measured, with the majority of the pollinated styles' ACC detected in the top section at 3 h after pollination and >80% detected in the base by 24 h after pollination (Fig. 4B). The percentage of ACC and ethylene from unpollinated style sections at 3, 12, and 24 h was the same as control styles at 0 h (data not presented). TIMING THE POLLINATION SIGNAL. To investigate the timing of a translocated pollination signal in carnations, pollinated styles or petals were removed at various times after pollination. When pollinated styles were removed at 8 h after pollination or earlier, none of the flowers exhibited accelerated corolla senescence (Fig. 5). When pollinated styles were removed at 10 h after pollination, 50% of the flowers exhibited premature petal inrolling. This increased to 80% to 100% at 12 h and later. In dissection experiments involving petals, ethylene production by the petals was measured when the petals were removed and again at 48 h after pollination, when petals still attached to the flower were inrolling. After petals were removed, they were held with their bases in water in an Eppendorf tube until 48 h after pollination. Petals removed from pollinated flowers at 12 h after pollination and earlier did not produce detectable levels of ethylene when they were isolated, or at 48 h after pollination (Fig. 6A). Moreover, these petals did not show any signs of inrolling at 48 h after pollination when intact petals were senescent (Fig. 6B). Petals removed at 14 h were not producing ethylene when removed, but were producing ethylene and inrolling when evaluated 48 h after pollination. Petals that were isolated at 16 or 18 h after pollination were also not producing detectable levels of ethylene when removed, but by 48 h after pollination had ethylene production rates higher than those measured from petals that remained on the pollinated flower until 48 h. Unpollinated carnation petals isolated from the flower between 0 and 48 h did not produce detectable levels of ethylene (data not presented). ETHYLENE AS THE TRANSLOCATED SIGNAL. To assess the ability of ethylene itself to be translocated from the ovary to the petals, the ethylene analog propylene was applied via a continuous flow system into the carnation ovary at a concentration of 10 µL·L-1. Petals were removed from the flower at various times after application of propylene and enclosed in a 6-mL vial for measurement of propylene. Propylene evolution from the petals was detected between 3 and 6 h after treatment initiation. The concentration of propylene obtained from single petals varied from 0.05 to 0.8 µL·L-1 (data not presented). Fig. 3. Ethylene production, ACC, and ACC synthase activity from pollinated petals (w) and control petals from unpollinated flowers (∇). Each time point represents the average value for petals from six flowers ± SE. Fig. 4. Spatial production of (A) ethylene and (B) ACC within pollinated styles. Styles were removed from the flower at 0, 3, 12, and 24 h after pollination, divided into three sections, and ACC and ethylene production from the sections was determined. ACC and ethylene production by style sections is presented as a percent of ACC accumulation or ethylene production by the entire style. Fig. 5. Timing of the pollination factor's translocation through the pollinated style. Pollinated styles were removed from flowers at various times after pollination. These flowers were maintained until 48 h after pollination (HAP) to determine which flowers exhibited pollination-induced senescence. This is presented as the percent of flowers inrolling at 48 HAP. Discussion Pollination initiates many developmental events that are essential for successful reproduction. Signals that originate in the style at the site of pollination are translocated to the ovary and petals where they trigger ovary development and corolla senescence. In this paper we have presented differential regulation of ethylene biosynthesis in floral organs by measuring ethylene, ACC, and ACC synthase activity in styles, ovaries, and petals. These data, in addition to what we have already reported about the differential expression of ethylene biosynthetic genes in floral organs, provide a better understanding of how ethylene biosynthesis is regulated by pollination within different parts of the flower. Increases in ethylene biosynthesis were observed in all floral organs following pollination. In styles and petals, ACC synthase and ACC oxidase transcripts are up regulated by pollination (Jones and Woodson, 1997, 1999; Woodson et al., 1992). While ACC oxidase transcripts are up regulated in ovaries following pollination, ACC synthase transcript abundance does not increase.
Our ACC synthase enzyme activity data does not support the existence of other ACC synthase genes in the pollinated ovary but suggest ACC synthase is down regulated by pollination in ovaries. Differences in the levels of ACC synthase activity between the petals and styles did not correlate well with differences in the ethylene production rates of these two floral organs. This is difficult to reconcile without measuring the activity of the ACC oxidase in these organs. While ACC synthase is often considered to be rate limiting, ACC oxidase has also been found to limit ethylene production in senescing flowers (Yang and Hoffman, 1984). Woodson et al., (1992) reported significant increases in ACC synthase activity in all organs (including ovaries) of 6-d senescing flowers. We have shown different results when measuring ACC synthase activity in ovaries from pollinated flowers. While pollination accelerates senescence of styles and petals, the ovary continues to grow and develop into a mature fruit. Despite these dramatic differences in development, pollination induces increased ethylene production from all floral organs. The differential regulation of ethylene biosynthesis observed in pollinated versus senescing ovaries may be a mechanism by which ethylene can regulate such different developmental processes. When it was first shown that ACC and ethylene increased sequentially within the carnation flower it was proposed that translocation of ethylene and/or ACC throughout the flower served to propagate the pollination signal from the gynoecium to the petals (Nichols, 1977;Nichols et al., 1983). As a soluble hormone precursor it was thought that ACC was more amenable to targeted translocation than a gaseous molecule. In support of ACC as a translocated signal, transport of ACC has been reported to occur in the xylem and the phloem (Bradford and Yang, 1980;Hume and Lovell, 1983) and has been proposed as a mechanism for the interorgan regulation of ethylene biosynthesis in response to flooding (Bradford and Yang, 1980). Translocation of ACC within petals, from the base to the top, has been reported previously to occur in carnations (Overbeek and Woltering, 1990). Our recent studies with carnations have indicated that ACC translocation within the gynoecium provides ACC for ethylene biosynthesis in the ovary. By applying propylene to the carnation ovary we have also demonstrated that targeted translocation of a gaseous molecule from the gynoecium to the petals can occur. Similarly, ethylene applied to the central column of cymbidium orchid (Cymbidium Swartz sp.) flowers is readily translocated to the perianth (Woltering et al., 1995). While there is ample evidence for translocation of ethylene and ACC between floral organs, these results provide only circumstantial evidence that either is the translocated pollination signal. For interorgan communication of the pollination event, all organs of the flower must perceive the pollination signal and this perception must result in the full postpollination syndrome. While ACC is translocated between floral organs this ACC most likely serves as a substrate for production of ethylene, the actual translocated signal. In experiments investigating the pollination-induced accumulation of ethylene biosynthetic genes in carnations, we have shown that induction of ACC oxidase transcripts in the ovary is dependent on ethylene (Jones and Woodson, 1997;Woodson et al., 1992). 
ACC synthase and ACC oxidase gene expression in petals following pollination is also regulated by ethylene (ten Have and Woltering, 1997;Woodson, 1997, 1999;Woodson et al., 1992). Treatment of pollinated carnations with inhibitors of ethylene action completely prevents postpollination events, including pollinationinduced gene expression and petal senescence Woodson, 1997, 1999). Similar results have been observed in moth orchid flowers (Bui and O'Neill, 1998;O'Neill et al., 1993). This is consistent with the model proposed for moth orchid in which autocatalytic ethylene production by the flower is initiated by pollen-borne factors that induce ethylene biosynthetic genes in the stigma/style (Bui and O'Neill, 1998;O'Neill et al., 1993). In further support of this model, we have identified a pollination responsive ACC synthase gene in carnations (DCACS3) that is induced by pollination in styles independent of ethylene (Jones and Woodson, 1999). Subsequent increases in ethylene biosynthetic gene transcripts in the style are regulated by ethylene similarly to the regulation observed in ovaries and petals (Jones and Woodson, 1999). Consistent with the role of stylar ethylene in propagating the pollination signal to the rest of the flower, inhibiting ethylene action only in the style prevents all subsequent postpollination events in the ovary and petals (Jones and Woodson, 1997). Similar to what has been reported in petunia (Petunia hybrida Hort) flowers (Gilissen and Hoekstra, 1984), we have demonstrated the existence of a translocated pollination signal in carnations using a series of dissection experiments. This pollination signal reached the ovary by 10 to 12 h and the petals by 14 to 16 h after pollination. We have demonstrated previously in carnations that ethylene production by the style from 3 to 18 h after pollination must be above a certain threshold level to induce autocatalytic ethylene production from the style, ovary, and petals (Jones and Woodson, 1997). If ethylene serves as the translocated signal in carnations, the times identified in our dissection experiments likely represent the amount of time required for enough ethylene to be perceived by the ovary and petals to induce ethylene biosynthetic genes and subsequently result in autocatalytic ethylene production from these organs. In ovaries, ACC oxidase transcripts and ethylene production can first be detected at low levels by 6 h after pollination with large increases in both observed at 12 h after pollination (Jones and Woodson, 1997). These observations fall into the time frame of a pollination signal initiating pollination events within the ovary by 10 to 12 h as was indicated by our dissection experiments. In petals, transcripts of ACC oxidase and ACC synthase are first detected at 12 and 18 h after pollination respectively (Jones and Woodson, 1997). Because these experiments did not include time points at 13 through 17 h after pollination, it should be noted that ACC synthase transcripts may be induced as early as 13 h after pollination. When carnation petals are treated with 2 µL·L -1 exogenous ethylene, ACC oxidase and ACC synthase transcripts are first detected after 3 and 6 h, while ethylene production by the petals is not detected until 9 to 12 h after treatment. The lag time of 6 to 9 h between induction of ethylene biosynthetic genes and ethylene production by the petals is similar to the lag time observed in our dissection experiments. 
By 14 to 16 h after pollination, ethylene produced by the gynoecium is translocated to the petals inducing transcription of ethylene biosynthetic genes. Ethylene production is then detected from the petals ≈8 to 10 h later, at 24 h after pollination. While less is known about the identity of the pollen factor that induces ethylene biosynthesis in the stigma, there is increasing evidence in carnations and moth orchid that ethylene serves as the translocated pollination signal. Identification of ACC synthase genes that are regulated by the primary pollination signal should provide a useful tool for identifying these pollen factors (Bui and O'Neill, 1998;Jones and Woodson, 1999). While regulation of ethylene biosynthetic genes by ethylene has been well studied, it is necessary to investigate the transcriptional regulation of these genes by ACC itself before the role of ACC in postpollination signaling can be understood.
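The spatial data in Fig. 4 are expressed as each section's share of the whole style's ACC or ethylene. A minimal sketch of that normalization is given below; the three section values are hypothetical and are only meant to reproduce the kind of pattern reported at 24 h, where the base dominates.

# Minimal sketch, assuming hypothetical per-section amounts for one style.
def percent_of_style_total(top, middle, bottom):
    total = top + middle + bottom
    sections = {"top": top, "middle": middle, "bottom": bottom}
    return {name: 100.0 * value / total for name, value in sections.items()}

print(percent_of_style_total(2.0, 1.5, 16.5))  # bottom ~82% of the total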
2019-03-30T13:12:19.612Z
1999-11-01T00:00:00.000
{ "year": 1999, "sha1": "d5be763060ca4fc1901b4a98490dee6bd3bc474d", "oa_license": null, "oa_url": "https://journals.ashs.org/downloadpdf/journals/jashs/124/6/article-p598.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "01f6a5d1abcb115dc63b972d4ecde90024570593", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
235681865
pes2o/s2orc
v3-fos-license
Lower neck organs at risk sparing in nasopharyngeal carcinoma using hybrid volumetric-modulated arc therapy (hybrid-VMAT): a case report Abstract Introduction: Nasopharyngeal carcinoma (NPC) is a prevalent disease in Southern China. Radiation therapy remains the primary treatment modality for NPC due to its high radiation sensitivity. Conventional volumetric-modulated arc therapy (VMAT) can achieve excellent target volume coverage and superior conformal dose distributions while sparing organs at risk (OARs). However, VMAT may also produce substantial volume of low-dose region in the surrounding normal tissue. Our oncology centre has incorporated the concept of anterior cervical field with VMAT in clinical practice of NPC treatment planning. The purpose of this treatment-comparison case study is to demonstrate the lower neck OARs sparing ability of hybrid volumetric-modulated arc therapy (hybrid-VMAT) over conventional VMAT for NPC. Methods: Four patients diagnosed with NPC of different clinical lymph node staging (N staging) were enrolled for this treatment-comparison case study. Planning target volumes and OARs were delineated with reference to Radiation Therapy Oncology Group (RTOG) 0225/0615. Additional OARs from lower neck region, including thyroid, trachea, cervical spine and pharyngeal constrictor muscles (PCMs), were also delineated. Two treatment techniques, hybrid-VMAT and VMAT, were created for each patient’s dataset. Results and findings: Both treatment techniques produced adequate target coverage and reduced radiation dose to the OARs as suggested in RTOG 0225/0615. Hybrid-VMAT plans achieved superior dose reduction in larynx, oesophagus, middle PCM, inferior PCM, cervical spine and trachea comparing with VMAT plans. Hence, the clinical usability and functional outcome of hybrid-VMAT should be further investigated for NPC radiation therapy. Introduction Nasopharyngeal carcinoma (NPC) is characterised by its unique geographic distribution. Southern China has one of the highest incidence rates of NPC in the world. Radiation therapy remains the primary treatment modality for NPC due to its high radiation sensitivity. Radiation Therapy Oncology Group (RTOG) 0225 and 0615 have recommended detailed dose criteria for NPC using intensity-modulated radiation therapy (IMRT) ( Table 1). These trials have resulted in excellent loco-regional control and encouragingly low rates of grade 3-4 acute toxicities. 1,2 Volumetric-modulated arc therapy (VMAT) has gained extensive clinical interest in the field of radiation oncology over the years. It can achieve excellent target volume coverage and superior conformal dose distributions while sparing organs at risk (OARs) through simultaneous variation of gantry rotation speed, treatment aperture shape and dose rate during NPC treatment delivery compared to IMRT. 3,4 However, VMAT may also produce substantial volume of low-dose region in the surrounding normal tissue. 5,6 Since treatment fields of radiation therapy for NPC traditionally encompass the primary disease and involve cervical lymph nodes, as well as the entire draining lymphatic regions to the lower neck, wide distribution of low-dose region to the lower neck OARs (such as larynx, thyroid, cervical spine, pharyngeal constrictor muscles (PCMs), trachea and oesophagus) can be harmful to the patient. 
Although RTOG 0225/0615 protocol does not provide dosimetric criteria for all of these OARs, radiation-induced toxicity to these structures during radiation therapy has been described in previous publications with negative impact on patients' quality of life. [7][8][9] Therefore, radiation dose to the lower neck OARs should not be overlooked and should also be considered during radiation therapy planning of NPC. In order to reduce low-dose volume to the lower neck region, our oncology centre has incorporated the concept of anterior cervical field with VMAT in clinical practice of NPC treatment planning. The purpose of this treatment-comparison case study is to demonstrate the lower neck OARs sparing ability of hybrid volumetric-modulated arc therapy (hybrid-VMAT) over conventional VMAT for NPC. Patient selection and simulation Four patients diagnosed with NPC of different clinical lymph node staging (N staging) were enrolled for this treatment-comparison case study. Patient demographic, clinical features and treatment prescription were summarised in Table 2. The dose-fractionation scheme was individualised to each patient based on clinical judgement of the attended oncologists in accordance with RTOG 0225/0615 ( Table 2). All patients were simulated in the supine position. TIMO Head & Neck Support Cushions (Med-Tec, Orange City, IA) and thermoplastic mask (Klarity Medical & Equipment Co. Ltd, Guangzhou, China) were used for immobilisation. The computed tomography (CT) simulation images (native, 120 kV, 80 mA, slice thickness 3 mm, in-plane resolution 1 mm) were acquired using dual-source CT scanner (SOMATOM Definition, Siemens Healthcare, Forchheim, Germany). CT simulation images were electronically transferred to the Eclipse™ (Varian Medical System, Palo Alto, CA) version 15.5 treatment planning system for treatment planning. Targets and OARs delineation The delineated targets included the gross tumour volume (GTV), clinical target volume (CTV) and planning target volume (PTV). The GTV covered the visible primary tumour and neck nodes of NPC shown on the image studies. The CTV encompassed the GTV with a 1·5 cm margin, the subclinical region and the prophylactic area of neck. The PTV included the CTV with 5-mm extensions in all dimensions to account for patient set-up error and motion uncertainties, except for situations where the GTV or the CTV is adjacent to the brain stem, where the margin can be as small as 1 mm. Treatment planning A total of eight treatment plans (four VMAT plans and four hybrid-VMAT plans) were optimised using Eclipse™ (Varian Medical System, Palo Alto, CA) version 15.5 treatment planning system. All plans were scheduled on a Varian TrueBeam™ linear accelerator with a millennium 120-leaf multi-leaf collimator (MLC) (Varian Medical Systems, Palo Alto, CA). Jaw tracking was enabled. The Photon Optimizer (PO, ver.15.5.11, Varian Medical Systems) was used for treatment plans optimisation. For dose calculation, the anisotropic analytic algorithm (AAA, ver.15.5.11, Varian Medical Systems) was used with a dose calculation grid of 1 mm. Volumetric-modulated arc therapy (VMAT) Considering that the shape of NPC target volume is highly irregular based on the unique anatomy of the patient and the extensiveness of disease, the fields arrangement and gantry rotation of VMAT were individualised for each patient through the beam's eye view option available on the treatment planning system. The arc fields were positioned to adequately cover all target volumes. 
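As an aside on the CTV-to-PTV expansion described under target delineation above (a uniform 5-mm margin, reduced near the brain stem), an isotropic expansion of a voxelised structure can be sketched with a Euclidean distance transform. The code below is a simplified illustration on a toy mask, assuming a hypothetical array size and the 3-mm slice / 1-mm in-plane spacing of the CT protocol; it does not reproduce the treatment planning system's margin tool and omits the reduced margin near the brain stem.

# Minimal sketch of an isotropic CTV-to-PTV expansion on a boolean voxel mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

def expand_margin(ctv_mask, spacing_mm=(3.0, 1.0, 1.0), margin_mm=5.0):
    """Return a PTV mask: every voxel within margin_mm of the CTV."""
    dist_to_ctv = distance_transform_edt(~ctv_mask, sampling=spacing_mm)
    return dist_to_ctv <= margin_mm

ctv = np.zeros((40, 60, 60), dtype=bool)
ctv[18:22, 25:35, 25:35] = True            # hypothetical toy CTV block
ptv = expand_margin(ctv)
print(ctv.sum(), ptv.sum())                # PTV voxel count exceeds the CTV's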
Optimisation constraints and priorities were added to reduce radiation dose to the OARs. The isocentre was placed at the centre of all PTVs. The arc fields were scheduled using 6-MV photon beams with a maximum dose rate of 600 MU/min (Figures 1-4). Hybrid-volumetric-modulated arc therapy (Hybrid-VMAT) Hybrid-VMAT plans concurrently combined arc fields and anterior static fields. The beam arrangements of the arc fields were identical to the VMAT plan of each patient. The arc fields were also scheduled using 6-MV photon beams with a maximum dose rate of 600 MU/min. Two additional 3D anterior static fields were added to the lower neck. The 3D static fields were scheduled using 6-MV photon beams with a maximum dose rate of 600 MU/min. In static fields 1 and 2, the X2 and X1 collimator jaws were reduced, respectively, so that the vast majority of the centrally located lower neck OARs were shielded while part of the PTV SC and PTV neg was covered. The isocentre was placed at the same location as in the VMAT plans. Dose splitting between the 3D anterior static fields (approximately 40 to 50% of the prescribed dose to PTV neg) and the arc fields (approximately 50 to 60% of the prescribed dose to PTV neg) was determined as the combination that best maximised the tumour dose and minimised the lower neck OAR dose. The beam weights of the static fields for patients 1-4 were set to deliver 40%, 50%, 45% and 50% of the prescribed dose to the PTV neg, respectively, by a certified medical dosimetrist (Figures 1-4). The contours of the target volumes and OARs used in the present study are shown in Figure 5. The same contoured structures and margins were used to optimise both treatment plans. To avoid introducing bias, optimisation objectives of major structures were standardised between techniques for each patient. Plan analysis In the present case study, a total of eight treatment plans (four VMAT and four hybrid-VMAT) were created for four patients with different clinical N staging. The main goal for treatment planning optimisation was to reduce radiation dose to the OARs (the OARs suggested in RTOG 0225/0615 and the additional lower neck OARs in the present study) while delivering an adequate prescribed dose to the target volumes. The dosimetric parameters of all patients using hybrid-VMAT and VMAT are presented in Table 3. Patient 1 Patient 1 had no evidence of cervical lymph node involvement. Due to the lower complexity of the target volume, two and a half arc fields were used in VMAT planning. Two static fields were added to the VMAT plan to form the hybrid-VMAT plan. Both the hybrid-VMAT and VMAT plans for patient 1 delivered an adequate dose to the target volume. This patient presented a challenge for dose reduction in the neck OARs during optimisation due to the proximity of the PTV neg on both sides. Comparing hybrid-VMAT with VMAT, the resulting mean dose reductions for larynx, oesophagus, middle PCM, inferior PCM, cervical spine and trachea were 5·4%, 15%, 1·4%, 11·9%, 5·8% and 17·9%, respectively. The plan comparison and dose-volume histogram (DVH) analysis of the two treatment techniques are shown in Figures 6 and 7, respectively. Patient 2 In patient 2, both treatment techniques delivered an adequate dose to the target volume. Using hybrid-VMAT, dose reduction in the OARs suggested by RTOG 0225/0615 was comparable to that of VMAT.
Patient 3 Due to the greater separation between the PTV neg volumes in the lower neck, dose reduction in the lower neck OARs was less challenging for this patient. In this patient, the resulting mean dose reductions for larynx, oesophagus, middle PCM, inferior PCM, cervical spine and trachea were 36·5%, 41·4%, 47%, 38·2%, 14·7% and 41·6%, respectively. The plan comparison and DVH analysis of the two treatment techniques are shown in Figures 10 and 11, respectively. Patient 4 The treatment plan optimisation for patient 4 was technically demanding due to the enlarged and complex shape of the cervical lymph node. In order to deliver adequate dose coverage to the target volume, four full arcs were used in the VMAT plans and in the arc fields of hybrid-VMAT. In this patient, the PTV did not follow an inverted-U shape; instead, the caudal part of the PTV neg merged together and formed an O-shaped PTV. Therefore, the gantry angles of the static fields for hybrid-VMAT were set to 0° so that the static fields could fully cover the caudal part of the PTV. Comparing hybrid-VMAT with VMAT, the resulting mean dose reductions for larynx, oesophagus, middle PCM, inferior PCM, cervical spine and trachea were 19·5%, 11·9%, 12%, 29·7%, 4·2% and 11·4%, respectively. The plan comparison and DVH analysis of the two treatment techniques are shown in Figures 12 and 13, respectively. Abbreviations: OARs, organs at risk; VMAT, volumetric-modulated arc therapy; D max, maximum dose; D mean, mean dose; PCM, pharyngeal constrictor muscles. * RTOG protocol 0225/0615; ^ additional lower neck OARs. Discussion The treatment plans were evaluated based on the dose criteria of RTOG 0225/0615 and the mean dose to the lower neck OARs. The results of the present study show that hybrid-VMAT plans are comparable to VMAT plans in terms of target coverage and doses received by the OARs suggested in RTOG 0225/0615. In the present study, hybrid-VMAT plans demonstrated a consistent pattern of reduction in the mean dose of larynx, oesophagus, middle and inferior PCM, cervical spine and trachea. In the hybrid-VMAT plans, with approximately 40%-50% of the prescribed dose to the clinically negative neck region delivered through the 3D anterior static fields, the vast majority of these centrally located lower neck OARs could be shielded. The results indicate that hybrid-VMAT may be capable of reducing the incidence of acute and/or late toxicity in these organs, such as radiation-induced dysphagia, osteoradionecrosis of the cervical spine and radiation-induced chondronecrosis. Among the four patients in this treatment-comparison case study, patient 1 demonstrated relatively less dose reduction in the lower neck OARs. The underlying reason would be the lower weighting (40%) of the static fields used for patient 1. As the proportion of the arc field weighting increased, a wide distribution of low-dose volume was created within the lower neck region, hence lowering the efficacy of dose reduction to the lower neck OARs in patient 1. Thus, it is recommended that an appropriate weighting of the static fields should be chosen to balance against the target coverage during treatment planning. Theoretically, VMAT plans are also able to produce a radiation dose distribution similar to hybrid-VMAT by modulating the gantry speed, MLC shape and dose rate, and generating a similar treatment delivery sequence (i.e., the speed of gantry rotation and MLC movement slowed at the angles of the anterior cervical static fields, simulating the delivery of the static fields).
However, the delivery sequence is determined solely by the optimiser of the treatment planning system during optimisation, in accordance with the dose constraints given. In the present study, the treatment planning system appeared unable to generate the aforementioned delivery sequence for the VMAT plans. Therefore, the VMAT plans delivered radiation dose in a simple arc sequence to fulfil the dosimetric requirements. The addition of avoidance sectors or avoidance regions of interest (ROIs) might be another way to reduce the dose to the lower neck OARs in VMAT. However, inverse treatment planning is strongly user-dependent, 10 and substantial experience in treatment planning may be needed to determine the most appropriate angles or values for avoidance sectors/avoidance ROIs in order to achieve a dose distribution similar to hybrid-VMAT. Therefore, manual beam selection for the anterior fields of hybrid-VMAT may be a standard, easy alternative for dosimetrists with varying levels of planning experience to spare the lower neck OARs. The improved dose sparing in the lower neck OARs using hybrid-VMAT comes at the expense of increased treatment time. The increment in treatment delivery time is primarily attributable to the additional gantry travel time for the static fields. Nonetheless, it is foreseeable that more advanced optimisation systems in the future may be capable of achieving comparable plan quality with reduced treatment time. Conclusion Advancements in radiotherapy have enabled better care to be given to patients, thus improving their quality of life. Therefore, consideration of all OARs during treatment planning forms an integral part of the patient's holistic care. Purely from the dosimetric point of view, the incorporation of static fields into VMAT planning was associated with dose reduction in the lower neck OARs without compromising plan quality, which is often a challenge in NPC radiation therapy since many of these OARs are in close proximity to the PTV. Hence, the clinical usability and functional outcomes of hybrid-VMAT should be further investigated for NPC radiation therapy.
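The per-OAR comparisons reported above use the percentage reduction in mean dose of the hybrid-VMAT plan relative to the VMAT plan. A minimal sketch of that metric is shown below; the dose values are placeholders for illustration, not entries from Table 3.

# Minimal sketch of the reported comparison metric; dose values are hypothetical.
def mean_dose_reduction(d_mean_vmat_gy, d_mean_hybrid_gy):
    """Percentage reduction in mean OAR dose of hybrid-VMAT relative to VMAT."""
    return 100.0 * (d_mean_vmat_gy - d_mean_hybrid_gy) / d_mean_vmat_gy

oars = {"larynx": (40.0, 34.0), "trachea": (30.0, 27.0)}  # (VMAT, hybrid) in Gy
for name, (vmat, hybrid) in oars.items():
    print(f"{name}: {mean_dose_reduction(vmat, hybrid):.1f}% lower mean dose")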
2021-06-29T18:10:27.258Z
2021-04-23T00:00:00.000
{ "year": 2021, "sha1": "ce0cf5aa06933c51172e30614b0d85c03e3b75fd", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/A3081B94855152896970AC9562672A0D/S1460396920001156a.pdf/div-class-title-lower-neck-organs-at-risk-sparing-in-nasopharyngeal-carcinoma-using-hybrid-volumetric-modulated-arc-therapy-hybrid-vmat-a-case-report-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "c8df282b1cde37d04d6ec7a4e15bee2cf72645cd", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
91673993
pes2o/s2orc
v3-fos-license
Effects of Using Mentha pulegium and Ziziphora clinopodioides Essential Oils as Additives: an in vitro Study

Essential oils are known for their antioxidant properties (Williams, 2000; Reddy et al., 2003; Trouillas et al., 2003). These properties rely on their capacity to scavenge free radicals and to inhibit the peroxidation of the lipids that form the structure of the cell membrane; they also increase the activity of antioxidant enzymes (Gutierrez et al., 2003; Lee et al., 2003). Antiseptic and antimicrobial actions are considered the most important activities of these compounds, and the disinfecting properties of many plants have been known for a long time. Borchers (1965) was the first to mention the possible effect of essential oils on microbial fermentation in the rumen. In an in vitro study, Borchers (1965) also observed that adding thymol to rumen fluid caused the accumulation of amino acid nitrogen (AA-N) and a decrease in ammonia N concentrations, suggesting that thymol prevents amino acid catabolism. Later, Oh et al. (1967, 1968) suggested that the low palatability of some plants to ruminants is related both to organoleptic effects and to their negative impact on rumen microbial fermentation and nutrient digestion. Considering these findings, the purpose of the current research was to investigate the effects of two essential oils (Ziziphora clinopodioides and Mentha pulegium) on rumen fermentation using the gas production method.

MATERIALS AND METHODS
Rumen fluid was taken from four lactating, rumen-fistulated Holstein cows (body weight 620 ± 8.9 kg; days in milk 45 ± 13). The cows were fed a total mixed ration (chemical analysis given in Table 1) containing barley silage (46.5%), corn (6.8%), alfalfa hay silage (4.5%), steam-rolled barley (17.7%) and dairy supplement pellets (24.5%). The ration was formulated to meet the nutrient requirements of the rumen fluid donor cows according to NRC (2001) and was fed twice daily (9:00 and 16:00) ad libitum. Rumen fluid was collected before the morning feeding, filtered through cheesecloth and transferred into fully thermo-insulated flasks. Following the method of Menke et al. (1979), strictly anaerobic conditions were maintained during rumen fluid collection, after which the fluid was transported to the laboratory. Alfalfa forage with 28-30% DM was harvested with a New Holland harvester (New Holland North America, New Holland, PA), with the chop length set to achieve a cut of 0.95 cm. Three piles of chopped forage (10 kg of chopped forage per pile) were treated as follows: 1) 0, 10, 20 and 30 mL of Ziziphora clinopodioides essential oil; 2) 0, 10, 20 and 30 mL of Mentha pulegium essential oil. Alfalfa was ensiled in each trial (500 g of DM/kg) from October 1 to November 12. Silos were stored at 20°C in the dark and opened 42 days after ensiling.

Aerobic Stability
Each silo was thoroughly mixed and a 1 kg sample was taken from each silo.
Each sample was transferred to 1-litre containers (3 containers per treatment) after the silos were opened. Three Thermochron buttons (Embedded Data Systems, Lawrenceburg, KY) were installed in the top, middle and bottom layers of each silage container to record the temperature every 20 minutes. Each container was covered with cheesecloth and kept at 20°C for up to 7 d. The ambient temperature was also recorded every 20 min during this stage. After 1, 3 and 7 days of aerobic exposure, silage was sampled from each container for chemical analysis and pH measurement (Tables 2 and 3).

Compound identification
Components were identified by comparing their mass spectra with those of the NIST mass spectral library (Masada, 1976; NIST, 2002) and those described by Adams (2001), and by comparing their retention indices either with those of authentic compounds or with literature values (Adams, 2001).

Chemical analysis
Dry matter (DM) was determined by drying each sample for 24 hours in an oven at 105°C, and ash content was estimated by burning each sample in a muffle furnace at 500°C for 9 hours. Nitrogen content was measured using the Kjeldahl method (AOAC, 1990). Acid detergent fiber (ADF) and neutral detergent fiber (NDF) were estimated according to Van Soest et al. (1991) using an ANKOM fiber analyzer. The two EOs were purchased from a commercial mill in Kashan (Iran). All chemical analyses were performed in triplicate.

Gas production technique
The in vitro gas production procedure followed the method of Menke et al. (1979). Approximately 200 mg dry weight of each sample (alfalfa silage without essential oil and alfalfa silage with 10, 20 and 30 mL of each essential oil) was weighed in triplicate into 100 mL glass syringes according to the procedures of Menke and Steingass (1988). Each syringe was pre-warmed to 39°C, 30 mL of the rumen liquor-buffer mixture (1:2) was then injected into each syringe, and the syringes were kept at 39°C in a water bath. Artificial saliva was prepared according to the method of Menke and Steingass (1988): 237 mL of buffer solution and 237 mL of major element solution plus 0.12 mL of trace element solution and 1.22 mL of resazurin were mixed and stored at 39°C one day before incubation. The reducing solution (Na2S·9H2O, 0.625 g; 1 N NaOH, 4 mL; distilled water, 95 mL) was then added before incubation of the samples. The ratio of artificial saliva to ruminal liquor was 2:1 (v/v). Three replicates were used for each test and level, and the syringes were gently shaken 30 min after the start of incubation and every hour during the first 10 h of incubation. Gas production was read from the calibrated syringes at 2, 4, 6, 8, 12, 16, 24, 48, 72 and 96 hours of incubation. McIntosh et al. (2003) stated that essential oil levels below 100 p.p.m. do not change rumen function, and the study of Cardozo (2005) was in agreement with this finding. To describe the effects of the essential oils on the kinetics of in vitro gas production, the data were fitted to the model of Orskov and McDonald (1979): Y = a + b(1 - e^(-ct)), where Y is the gas produced at time t, a + b is the potential gas production, a is the gas production from the immediately soluble fraction, b is the gas production from the insoluble fraction, c is the rate constant of gas production for the insoluble fraction (/h), and t is the incubation time (hours).
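As an illustration of this fitting step, the short Python sketch below fits the Orskov and McDonald (1979) equation to a cumulative gas-production curve with a standard nonlinear least-squares routine. It is a minimal example under stated assumptions: the incubation times match those listed above, but the gas volumes and starting values are invented placeholders rather than data from this study, and SciPy is used here simply as a convenient stand-in for whatever software the authors used.

import numpy as np
from scipy.optimize import curve_fit

def orskov_mcdonald(t, a, b, c):
    # Cumulative gas production (mL) at incubation time t (h): Y = a + b(1 - e^(-ct)).
    return a + b * (1.0 - np.exp(-c * t))

# Incubation times used in the study (h) and hypothetical cumulative gas readings (mL).
t_h = np.array([2, 4, 6, 8, 12, 16, 24, 48, 72, 96], dtype=float)
gas_ml = np.array([5.1, 9.8, 14.0, 17.6, 23.5, 28.0, 35.9, 48.2, 53.0, 55.4])

(a, b, c), _ = curve_fit(orskov_mcdonald, t_h, gas_ml, p0=[2.0, 50.0, 0.05])
print(f"a = {a:.2f} mL, b = {b:.2f} mL, a + b = {a + b:.2f} mL, c = {c:.3f} per hour")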
Statistical Data Analysis
Gas production data and the related parameters were subjected to one-way analysis of variance (ANOVA) in SAS software (2006). Multiple comparisons were performed with Duncan's multiple range test (1980), as used in similar studies, and mean differences were considered significant at the p < 0.05 level. The standard error of the means was estimated from the residual mean square of the analysis of variance. All data were obtained from tests repeated three times (n = 3).

RESULTS
The chemical composition of the alfalfa silage, concentrate and wheat straw used in the diet of the animals from which rumen liquor was taken is shown in Table 1. The chemical composition (%) of alfalfa and its silage treated with various doses of Mentha pulegium and Ziziphora clinopodioides is shown in Tables 2 and 3, respectively. As shown in these tables, there were significant differences between the silages, and a considerable difference was also observed between the forages in terms of chemical composition. According to Table 2, the crude protein content of the forages ranged from 20.05 to 20.40%. The silage treated with 30 mL of Mentha pulegium had a higher crude protein content than the other Mentha pulegium doses. Aerobic stability was improved in silage treated with Mentha pulegium compared with the control. Table 3 shows that the chemical composition improved with the addition of the various doses of Ziziphora clinopodioides. The pH difference was not significant, but pH decreased slightly at the 20 and 30 mL doses of Ziziphora clinopodioides. The effects of the various doses of Ziziphora clinopodioides and Mentha pulegium essential oils on gas production during 2, 4, 6, 8, 12, 24, 48, 72 and 96 hours of in vitro incubation are listed in Tables 4 and 5. The results indicated that adding essential oils to alfalfa silage reduced gas production. It seems reasonable that part of this activity is due to the hydrophobic nature of the cyclic hydrocarbons, which allows them to associate with cell membranes and accumulate in the bacterial lipid bilayer, occupying space between the fatty acid chains (Sikkema et al., 1994; Ultee et al., 1999). This interaction causes conformational changes in the membrane structure, resulting in increased fluidity and expansion of the membrane (Griffin et al., 1999). The resulting loss of membrane stability leads to the leakage of ions across the cell membrane, which reduces the transmembrane ionic gradient. Bacteria can counteract these effects by using ion pumps to prevent cell death, but a large amount of energy is diverted to this mechanism and bacterial growth is restricted (Griffin et al., 1999; Ultee et al., 1999; Cox et al., 2001). Cumulative gas production increased with increasing incubation time. According to Table 4, there were significant differences in gas production between the control and the silages treated with the various doses of Ziziphora clinopodioides at each incubation time. After 96 h of incubation, the gas produced ranged between 68.82 and 56.12 mL per 200 mg of substrate.
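The group comparison described under Statistical Data Analysis can be illustrated in the same way. The snippet below is a hypothetical sketch only: the three replicate values per treatment are invented placeholders that loosely echo the 96-h figures quoted above, and SciPy's one-way ANOVA stands in for the SAS ANOVA; Duncan's multiple range test has no direct SciPy equivalent and would require a separate post-hoc procedure.

from scipy.stats import f_oneway

# Hypothetical 96-h gas production (mL per 200 mg substrate), three replicates per treatment.
control = [68.5, 69.0, 68.9]
zc_30ml = [56.0, 56.3, 56.1]   # Ziziphora clinopodioides, 30 mL (placeholder values)
mp_30ml = [49.6, 49.8, 49.8]   # Mentha pulegium, 30 mL (placeholder values)

f_stat, p_value = f_oneway(control, zc_30ml, mp_30ml)
print(f"F = {f_stat:.1f}, p = {p_value:.4g}")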
Table 5 shows that gas production from the silage treated with 30 mL of Mentha pulegium was significantly (p < 0.001) lower than that of the other treatments. In addition, there were significant (p < 0.001) differences in gas production between the silages at all incubation times. Silage treated with Mentha pulegium reduced gas production more than Ziziphora clinopodioides relative to the control silage, and this difference was significant at the p < 0.001 level. At 96 h of incubation, the gas production values for the silage without added essential oil, with 30 mL of Ziziphora clinopodioides and with 30 mL of Mentha pulegium were 68.82, 56.12 and 49.74 mL, respectively. At all incubation times, gas production in the experimental silages was lower than in the control silage, while the silages treated with essential oils also had a higher protein content than the control silage. In alfalfa silage, protein is used inefficiently, especially when the diet is low in energy (Buxton, 1996). During the harvesting and storage of silage (Albrecht and Muck, 1991), extensive protein degradation occurs, followed by further microbial degradation in the rumen (Buxton, 1996). In vitro, gas arises not only from fermentation (CO2 and CH4) but also, indirectly, from CO2 released from the bicarbonate buffer by the acidifying effect of the volatile fatty acids (VFAs) (Getachew et al., 1998). Protein breakdown produces ammonia, which combines with H+ released from the buffer and remains in solution as NH4+, indirectly reducing gas production. The antimicrobial activity of the essential oils is one reason why gas production in the treated silages was lower than in the control silage. Nagy and Tengerdy (1968) also assessed the sensitivity of ruminal microorganisms to essential oils. Forty-three compounds were identified in the Mentha pulegium essential oil, accounting for 99.53% of the total essential oil, of which 29 compounds were characterized. The major components were pulegone (38.83%), menthone (19.24%), piperitenone (16.53%), piperitone (6.35%), isomenthone (6.09%), limonene (4.29%) and octanol (1.85%). The Ziziphora clinopodioides essential oil consisted mainly of monoterpenes, the most important being pulegone (79.34%), limonene (6.77%) and piperitenone (4.21%). The findings of the present study showed that the essential oils and their doses had significant effects on gas production. The results differed according to the essential oil, the dose, the incubation time and the feed substrate. In this study, the essential oils reduced gas production in all feed samples. These results are in agreement with the findings of Cardozo et al. (2004). Reduced in vitro gas production with essential oils indicates more effective energy utilization, because less energy is wasted as methane. The effects of the EOs on rumen fermentation were significantly different. Ziziphora clinopodioides and Mentha pulegium had bactericidal effects against a wide range of gram-positive and gram-negative bacteria; both effects are also found for ORE (origanum essential oil) (Sivropoulou et al., 1996). Castillejos et al. (2006) reported that a small amount of oregano (less than 50 mg/L) did not affect microbial fermentation, whereas a higher amount of thymol or oregano decreased total VFA. Although the antibacterial properties of essential oils have received the most emphasis, this is not their only effect.
Gustafson and Bowen (1997) also reported that, among their other effects, essential oils are capable of coagulating some cellular components through the denaturation of proteins. In addition, many studies have revealed the capacity of some phenolic and non-phenolic components of essential oils to interact with chemical groups in proteins and other active molecules, such as enzymes (Juven et al., 1994). Generally speaking, phenolic compounds interact with proteins via hydrogen bonds and electrostatic or hydrophobic interactions (Prescott et al., 2004), while non-phenolic components interact via other functional groups, such as the carbonyl group of cinnamaldehyde (Ouattara et al., 1997). In this study, the silages treated with essential oils had a higher protein content than the control silage. Busquet et al. (2005) stated that gas production decreased as the essential oil dose increased; these results are consistent with the present study. Oh et al. (1967, 1968) reported that the low palatability of some plants to ruminants could be due both to organoleptic effects and to their negative impact on rumen microbial fermentation and nutrient digestion. In another study, Oh et al. (1967) examined the antibacterial activity of the EOs of Pseudotsuga menziesii and related compounds in 24-h in vitro batch cultures using ruminal fluid from deer and sheep. Their findings showed that low doses (4 to 8 mL/L of fluid) had no appreciable effect on rumen fermentation, whereas higher doses (12 mL/L of liquor) decreased gas yield during incubation.

DISCUSSION
When the main compounds isolated from the EO were used at the same levels (3 mL/L of liquor), the cyclic hydrocarbons (limonene and pinene) did not change, or only slightly stimulated, microbial activity, whereas the oxygenated cyclic hydrocarbons and certain alcohols (such as terpinene and α-terpineol) inhibited microbial activity in the rumen. It is well established that there is a strong link between gas production in laboratory studies and in vivo measurements of rumen fermentation and microbial activity (Menke et al., 1979). These findings indicated that essential oils could have a positive impact on rumen microbial fermentation and nutrient digestion. Benchaar et al. (2007) reported that gas production in carvacrol, thymol and eugenol treatments was reduced in comparison with the control; those results are not in agreement with the findings for oregano in the present experiment. Oregano contains more carvacrol and eugenol than other essential oils; garlic and oregano therefore reduced gas production from barley in the in vitro test. These results indicated that essential oils can be used to enhance the digestion of slowly degradable starch and to control the rate of release of rapidly degradable starch so as to keep ruminal pH within the physiological range in the rumen. Recently, researchers have studied the effects of the active components of EOs on the performance of the rumen microbial population. It is worth noting that the first challenge is to define which potential effects are of interest, and these may vary with the ration, the cows and the production stage.
Nevertheless, it is rational to begin by identifying additives that increase propionic acid production and decrease acetic acid and methane yields without reducing total VFA production, as well as EOs that decrease microbial activities such as peptidolysis, proteolysis and deamination. A series of short-term in vitro batch culture studies have been used to screen for potentially useful EOs (Cardozo et al.). These showed that garlic essential oil, cinnamaldehyde, eugenol (the active component of clove bud), capsaicin (an important component of pepper) and anethole (the main component of anise oil) improve the fermentation profile of rumen microorganisms in continuous culture, and they have been investigated in several in vitro and, in some cases, in vivo studies (Cardozo et al., 2006). Gas production in the in vitro tests was significantly reduced by the essential oils. Most of the essential oils can be used to increase cellulose digestion and can be regarded as feed additives. One of the most important components of Mentha pulegium and Ziziphora clinopodioides (up to 79.34% in Ziziphora clinopodioides) is pulegone. In the present study, the pulegone content of the essential oils is consistent with the results of Davidson and Naidu (2000), who proposed that, at optimal doses, the efficiency of nutrient utilization in the rumen would be enhanced. Eugenol could also increase VFA production and N utilization in the rumen of lactating animals. In commercial forms of essential oils, the main components are thymol, eugenol, vanillin, carvacrol and limonene, which can alter rumen fermentation (McIntosh et al., 2003; Benchaar et al., 2007). In the present experiment, CUM showed the greatest gas production in comparison with the control treatment in each test, while ORE (origanum essential oil) showed the lowest gas production in all in vitro tests. In addition, interactions were observed related to the type of feed, the duration of incubation and the dose, which contradicts the findings of McIntosh et al. (2003). In this respect, Benchaar et al. (2007) argued that the changes in rumen fermentation produced by EO components such as carvacrol, eugenol and thymol may not be beneficial in lactating cows, and proposed that the type and amount of essential oils and related components should be carefully specified. In recent years, considerable knowledge has been gained on the potential use of EOs to modify microbial activity in the rumen. However, before recommendations for commercial use can be made, several problems need to be addressed and most of the limitations of present knowledge resolved. For instance, the content of active components in EOs depends strongly on the plant variety, the growing conditions and the method of extraction.

CONCLUSIONS
According to the present study, the EOs examined and their compounds can have a positive effect on rumen degradation, depending on the essential oils and the feeds applied.
However, further in vitro studies are still required to monitor and verify new findings and to clarify the modes of action, and in vivo research is needed to determine the optimum doses in terms of active components, the ability of rumen microorganisms to adapt to the action of these essential oils, the fate of these additives in the gastrointestinal tract, the presence of residues in products such as milk and meat, and the effects on the performance of dairy cattle.
2019-04-03T13:07:52.271Z
2018-03-28T00:00:00.000
{ "year": 2018, "sha1": "cfb7747aa770e1c368cd49e77fce271f9a27d192", "oa_license": "CCBY", "oa_url": "https://doi.org/10.13005/bbra/2625", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d65b1908d8052d29109bfc3b42007f08845f66a2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }