| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
252838340
|
pes2o/s2orc
|
v3-fos-license
|
Effect of 2-Methylthiazole Group on Photoinduced Birefringence of Thiazole-Azo Dye Host–Guest Systems at Different Wavelengths of Irradiation
The photoinduced birefringence behaviors of host–guest systems based on heterocyclic thiazole–azo dyes with different substituents, dispersed into a PMMA matrix, were investigated under three excitation wavelengths, i.e., 405 nm, 445 nm or 532 nm. The wavelengths fell on the blue side, near the maximum, or on the red side of the absorption bands of the trans-azo dyes, respectively. We found that photoinduced birefringence was generated to a similar degree in all studied systems, except the system containing a 2-methyl-5-benzothiazolyl group as the thiazole–azo dye substituent. For this material, the achieved birefringence value was the highest among the whole series, regardless of the excitation wavelength. Moreover, we identified the optimal irradiation wavelength for efficient birefringence generation and showed that strong absorption of the excitation light by the trans isomer does not, by itself, account for achieving a significant degree of molecular alignment. The obtained results indicate that the thiazole–azo dye with a 2-methyl-5-benzothiazolyl substituent shows promising photoinduced birefringence and can be considered a dye potentially suitable for optical applications.
Introduction
In recent decades, light-responsive molecules have attracted interest for many applications, such as optical data storage, optical switches and optical memory [1][2][3][4].
Azo dyes are the best-known family of photoresponsive compounds; among their various properties, they can exhibit a refractive index that depends on the polarization and propagation direction of light, i.e., birefringence.
Azobenzenes attract the attention of the scientific community due to their photochromic nature, ease of processing and simplicity of design [5][6][7]. These features allow changing their physico-chemical properties for a particular application by appropriate modification of their chemical structure [8][9][10][11]. It is well known that the core of azo dyes is formed by the conjugated azo (-N = N-) chromophore group in combination with one or more aromatic or heterocyclic systems. The addition of electron withdrawing and/or electron donating substituents to the backbone of the azo moiety can significantly influence the absorption spectra of azo dyes by affecting the reorganization of electronic density [12][13][14]. The D-π-A system of azo compounds can provide a prerequisite ground state charge asymmetry [15] as well as efficient intramolecular charge transfer (ICT) between donor and acceptor groups [16,17], because the π-conjugated bridge ensures a pathway for the movement of electronic charge [15]. The transition from the ground state to the excited state upon excitation causes almost instantaneous electronic polarization, which changes the dipole moment of the molecules and generates a dipolar push-pull system [16][17][18][19]. Therefore, substitution of one or more benzene rings with easily delocalizable electron-excessive and/or electron-deficient hetero-aromatic rings, acting as auxiliary electron donors and/or acceptors, can result in enhanced intramolecular charge transfer [15][16][17].
The most important feature of azo compounds is the possibility of trans-cis photoisomerization, which can be induced and reversed depending on the wavelength of the incident light. It is well known that this photochromic behavior can differ depending on the structure of the studied compounds (i.e., the location and shape of the π-π* and n-π* bands) [6,8,9,20,21].
Another interesting feature of azo compounds is the photoinduced orientation. Generation of optical anisotropy in azo dye-containing materials (e.g., azo dyes dispersed in polymer matrix or azopolymers) results from the orientation of azo chromophores induced by linearly polarized light due to processes of selective absorption and reactions of trans-cis isomerization [22,23]. After numerous trans-cis-trans isomerization processes, the long axes of azo molecules tend to align in directions perpendicular to the polarization of light. As a result, the material becomes birefringent and dichroic in the plane perpendicular to the direction of light propagation [6,24,25].
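To make the reorientation mechanism described above more concrete, the following is a minimal toy simulation (not taken from the cited works): molecules whose long axis lies at an angle θ to the pump polarization are assumed to be excited with probability proportional to cos²θ and to land at a random angle after each trans-cis-trans cycle, so the population slowly accumulates perpendicular to the polarization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of photo-orientation: each azo molecule has an in-plane angle
# theta to the pump polarization. The probability of being excited (and of
# re-landing at a random angle after a trans-cis-trans cycle) scales with
# cos^2(theta), so molecules slowly accumulate perpendicular to the pump.
n_molecules = 20000
theta = rng.uniform(0.0, np.pi, n_molecules)   # initial isotropic distribution

for _ in range(200):                            # pump "time steps"
    excited = rng.random(n_molecules) < 0.2 * np.cos(theta) ** 2
    theta[excited] = rng.uniform(0.0, np.pi, excited.sum())  # random re-orientation

# Order parameter <cos(2*theta)>: 0 for an isotropic distribution, negative
# when molecules align perpendicular to the pump polarization (anisotropy builds up).
order_parameter = np.mean(np.cos(2.0 * theta))
print(f"order parameter after pumping: {order_parameter:+.3f}")
```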
The efficiency and dynamics of the light-induced birefringence generation strongly depend on various factors, which are related to the chemical structure of the azo dye (such as the substituents of the azo group and their bulkiness), the chromophore content and the type of polymer matrix, but also to the experimental conditions (e.g., the excitation wavelength and intensity) [26][27][28]. While some general principles govern the efficiency of the light-induced processes in azo compounds, the optical response of a given material may be substantially different than expected [29,30].
A polymeric material with unique properties, such as light weight, high flexibility and low cost of production, can be used to improve the quality of a prepared thin layer [31]. One of the most popular and widely used polymeric materials is poly(methyl methacrylate) (PMMA). Its main advantages are excellent mechanical properties, high chemical resistance, simple synthesis, low cost, good tensile strength, low optical loss in the visible spectral range, good insulation properties and thermal stability. PMMA-based matrices are well known not only for their good optical transparency, but also for their high resistance to laser damage [32,33]. PMMA is an excellent and suitable host material in host-guest systems due to its optical clarity and known chemical and physical properties. It should also be added that for samples based on PMMA it is possible to conduct research on the structure of the matrices and on photophysical transitions connected with changes in the mobility of low-molecular-weight structural units.
Studying the correlation between structure and material properties is a fascinating field of research, which is very important for the development of novel materials for specific applications such as optical data storage. Typically, measurements of photoinduced birefringence generation were carried out at a single excitation wavelength located on the red side of the azo moiety absorption band. However, the measurements performed for various excitation wavelengths may provide valuable information on the optimal experimental conditions leading to the most efficient process of azo chromophore alignment.
The aim of this work was to characterize the photoinduced birefringence generation in thiazole-azo dye host-guest systems under irradiation with linearly polarized violet, blue or green light. The motivation for this research is the possibility of using thiazole-azo dyes in photonic devices for recording optical information (optical data storage), which are becoming increasingly important in many fields. In this article, we focus on the effect of an additional 2-methylthiazole group on the efficiency of the photoinduced birefringence generated in thiazole-azo dyes dispersed in a poly(methyl methacrylate) (PMMA) matrix. Furthermore, we introduce a benzene ring into the thiazole fragment at position 4, which can improve the solubility of the compounds. The key structural feature of the investigated materials is the presence of a heterocyclic thiazole fragment in the azo molecule, which changes the distribution of the electron density of the conjugated system in comparison with azobenzenes without a heterocyclic fragment. We show that upon irradiation with polarized violet, blue or green light, the studied azo dye systems can exhibit photoinduced birefringence. To the best of our knowledge, the photoinduced birefringence generation in these heterocyclic thiazole-azo dyes dispersed in a PMMA matrix at 405 nm, 445 nm and 532 nm is presented for the first time.

UV-Vis Spectra

Figure 1 shows the UV-Vis spectra of the studied thiazole-azo dyes dispersed in PMMA matrix thin films (T-azo-OCH3, T-azo2-OCH3, T-azo-H). One can see that the π-π* and n-π* bands overlap completely in this region, and the absorption bands of the T-azo-OCH3 and T-azo2-OCH3 samples are redshifted relative to the thin film of T-azo-H without any substitution in the para-position [8]. We also found that the absorption band of the T-azo2-OCH3 thin film with the 2-methyl-5-benzothiazolyl moiety is redshifted compared to the T-azo-OCH3 film with a phenyl ring.

It should also be noted that the excitation wavelength of 445 nm used in the photoinduced birefringence measurements was the most strongly absorbed by the examined samples compared to the excitation wavelengths of 405 nm and 532 nm. The samples were transparent at the probing wavelengths (690 nm and 783 nm, respectively).
Photoinduced Birefringence
Figure 3 presents the birefringence growth and relaxation curves for the thiazole-azo dyes dispersed in PMMA matrix thin films, where λexc. = 405 nm and λprobe = 690 nm were used. We found that an irradiation time of a few hundred seconds was already sufficient to observe saturation of the birefringence in the studied compounds (see Figure 3a). We also found that the thiazole-azo-PMMA samples T-azo-OCH3 and T-azo2-OCH3 have a higher saturation level of birefringence compared to T-azo-H without a substituent in the para-position. Moreover, T-azo2-OCH3 with a heterocyclic fragment (R1) has the highest final birefringence, which is almost twice that of T-azo-OCH3 with a phenyl fragment (R1). At the same time, T-azo2-OCH3 exhibits the most stable birefringence after irradiation among the series (Figure 3b). One can also see that the T-azo-OCH3 sample with a phenyl ring (R1) and an electron donating group (R2) has a birefringence value after relaxation similar to that of T-azo-H, and almost two times smaller than that of T-azo2-OCH3.

The curves of birefringence growth under 445 nm excitation and birefringence relaxation for the studied films are shown in Figure 4. It is interesting that despite a strong film absorbance at this wavelength, the values of final birefringence observed for all the samples were lower than the values obtained in the case of 405 nm excitation. The result may be explained on the basis of the recorded changes in the absorption spectra under irradiation. Both trans- and cis-isomers are involved in the process of optical birefringence generation, and thus, light absorption by the cis form is essential for obtaining a significant degree of molecular order. Cis-isomers absorb 405 nm light more effectively than 445 nm light, which compensates for the effect of the lower absorption of 405 nm light by trans-isomers.

As in the case of 405 nm excitation, we found that the highest birefringence under 445 nm was also induced in the T-azo2-OCH3 film (thiazole-azo dye with a 2-methyl-5-benzothiazolyl substituent R1 and a methoxy group R2), which again correlates with the slowest birefringence relaxation rate (see Figure 4b). Its final birefringence is almost twice as large as that of T-azo-OCH3, owing to the additional 2-methylthiazole group. We also found that the host-guest film of the thiazole-azo compound T-azo-OCH3 with a phenyl ring (R1) and an electron donating group (R2) demonstrates a higher birefringence saturation level compared to T-azo-H without a substituent in the para-position (R2). Figure 4b shows the normalized birefringence relaxation curves after turning off the beam at 445 nm. One can see that the type of substituent strongly affects the relaxation of birefringence. We found that T-azo2-OCH3 exhibits the lowest relaxation, which may be associated with the different geometry of the T-azo2-OCH3 chromophore compared with T-azo-OCH3 and T-azo-H, which have a more compact structure. It is difficult for the molecules to relax to the isotropic state by thermal movement if the chromophores have a large volume. The 2-methyl-5-benzothiazolyl group in T-azo2-OCH3 increases the steric effect and slows down the relaxation of birefringence. However, the thiazole-azo compound with a phenyl ring (R1) and an electron donating group (R2) (T-azo-OCH3) has birefringence similar to T-azo-H. Therefore, the role of the various substituents in thiazole-azo dyes in the photoinduced birefringence measurements is evident.

Figure 5 presents the birefringence growth and relaxation curves for the thiazole-azo dyes dispersed in PMMA matrix thin films, where λexc. = 532 nm and λprobe = 783 nm were used.
Apart from T-azo2-OCH3, the irradiation time of about 300 s was sufficient to observe birefringence saturation for all samples. We also found that the highest birefringence saturation level was induced in T-azo2-OCH3, which has the highest absorption value at 532 nm. This can be explained by its significantly red-shifted absorption band and the resulting strongest absorption of 532 nm light among the series. Nevertheless, the values of photoinduced birefringence generated under 532 nm irradiation were very low. The result can be attributed to low sample absorbance, i.e., the excitation wavelength falls on the tails of the trans absorption bands for all the samples. From Figure 5a, it can be seen that the birefringence saturation level decreases as follows: T-azo2-OCH3 > T-azo-OCH3 > T-azo-H. Thus, the thiazole-azo compound with the 2-methyl-5-benzothiazolyl substituent (R1) (T-azo2-OCH3) has a higher saturation level of birefringence compared to the thiazole-azo dyes with a phenyl ring (R1). Similar behavior was visible in the absorbance. For 532 nm excitation (see Figure 5b), the relaxation of birefringence was similar for all studied compounds. Figure 6 presents examples of birefringence growth and relaxation curves for the thiazole-azo dyes dispersed in PMMA matrix thin films for the three excitation wavelengths, i.e., 405 nm, 445 nm and 532 nm. In all cases, we observed a rapid increase of birefringence at the beginning of pumping (see Figure 6), which was due to the orientation of the thiazole-azo dye molecules, which gradually tended to become perpendicular to the polarization direction of the pumping light; thus the detected light intensity began to increase gradually. Then we can see a slow increase to the saturation level with different speeds depending on the type of substituent. When the pumping light was turned off, the curves decreased sharply due to molecular relaxation. The anisotropic state relaxes back toward the originally mixed and disordered distribution. However, this type of recovery is not complete, because some azo molecules achieve equilibrium, and some still remain in an oriented distribution state.
The decay of birefringence after turning off the excitation light was caused by the thermal cis-trans isomerization of thiazole-azo chromophores and a thermal randomization of the molecular orientation.
We found that the final birefringence generated after irradiation with 405 nm light was the highest for all studied thiazole-azo dyes. The difference between the increase in the birefringence for the studied wavelengths strongly depends on the type of substituents in the thiazole-azo compounds.
The birefringence growth and birefringence relaxation with time are often described by the following biexponential equations [25]:

Δn(t) = A[1 − exp(−t/τ1)] + B[1 − exp(−t/τ2)],  (1)

Δn(t) = C exp(−t/τ3) + D exp(−t/τ4) + E,  (2)

where τ1, τ2 are time constants for the writing processes, τ3, τ4 are time constants for the relaxation processes, A, B, C and D are amplitudes associated with different physical processes appearing upon illumination, and E is the residual birefringence. Using Equations (1) and (2), one can perform curve fitting, which allows us to quantitatively compare the obtained birefringence signals. It should be noted that the biexponential growth and biexponential relaxation reproduced the results of the experiment well. The values of the fitted parameters for T-azo-H, T-azo-OCH3 and T-azo2-OCH3 are presented in Tables 1 and 2. The contributions of various processes to birefringence growth and relaxation were calculated using Equation (3):

xi = Xi / Σi Xi,  (3)

where Xi = A, B and C, D, E for birefringence growth and relaxation, respectively.
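As an illustration of how Equations (1)-(3) are typically applied, the sketch below fits a synthetic growth curve with SciPy and computes the fast/slow contributions. The data, initial guesses and function names are illustrative assumptions, not the authors' actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def growth(t, A, B, tau1, tau2):
    """Biexponential birefringence growth, Eq. (1)."""
    return A * (1 - np.exp(-t / tau1)) + B * (1 - np.exp(-t / tau2))

def relaxation(t, C, D, E, tau3, tau4):
    """Biexponential birefringence relaxation with residual term E, Eq. (2)."""
    return C * np.exp(-t / tau3) + D * np.exp(-t / tau4) + E

# Synthetic data standing in for a measured growth curve (arbitrary units).
t = np.linspace(0, 600, 300)
noisy = growth(t, 0.015, 0.008, 20, 150) + np.random.normal(0, 3e-4, t.size)

popt, _ = curve_fit(growth, t, noisy, p0=[0.01, 0.01, 10, 100])
A, B, tau1, tau2 = popt

# Relative contributions of the fast and slow processes, Eq. (3):
# x_i = X_i / sum_j X_j, with X_i in {A, B} for the growth curve.
contrib_fast = A / (A + B)
contrib_slow = B / (A + B)
print(f"tau1 = {tau1:.1f} s, tau2 = {tau2:.1f} s, "
      f"contributions: fast {contrib_fast:.2f}, slow {contrib_slow:.2f}")
```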
When excited with 405 nm light, for the samples T-azo-H and T-azo-OCH3, the fast and slow process contributions to the birefringence growth are the same for both materials, 0.65 and 0.35, respectively. The time factors for both components are of the same order. For the sample T-azo2-OCH3, the slow component has more impact on birefringence growth than for the other materials, and the fast and slow process contributions are 0.51 and 0.49, respectively. The time factors are noticeably longer than for the other two samples, especially for the slow component, whose time factor is one order of magnitude higher.

Upon excitation with 445 nm light, the fast process contribution to birefringence growth is higher than the slow process contribution. The slow process time factor is similar for the samples T-azo-H and T-azo2-OCH3 and is the lowest for the T-azo-OCH3 sample, while for the fast process, the time factors are similar for the T-azo-H and T-azo-OCH3 samples and slightly higher for the T-azo2-OCH3 sample. Table 2 shows the fitted parameters for the birefringence relaxation. After excitation with 405 nm light, the fast process contribution to birefringence relaxation is slightly higher than the slow process contribution. The sample T-azo2-OCH3 exhibits the highest residual birefringence, while the sample T-azo-H shows the lowest. Both time factors for the fast and slow processes are the lowest for the T-azo-H sample and the highest for the T-azo2-OCH3 sample.
The fast process contribution to birefringence relaxation after 445 nm excitation is slightly higher than the slow process contribution. Again, the sample T-azo2-OCH3 exhibits the highest residual birefringence, and the sample T-azo-H has the lowest. The time factors are of the same order of magnitude, and both time factors are the lowest for the T-azo-OCH3 sample and the highest for the T-azo2-OCH3 sample.
The fast process contribution to the birefringence relaxation is considerably higher than the slow process contribution after excitation with 532 nm light, for all the samples. The residual birefringence is similar and is around 0.1 of the maximum birefringence value. The time factors, separately, are of the same order of magnitude. Both time factors are the lowest for the T-azo-OCH3 sample. The fast process time factor is the highest for the T-azo2-OCH3 sample, while the slow process time factor is the highest for the T-azo-H sample. Table 3 summarizes the ratios between the maximum birefringence and the absorbance for the given excitation wavelengths. As can be seen, there is no clear influence of the amount of absorbed light on the maximum birefringence value. Even though the absorbance at 405 nm and 445 nm is the lowest for the T-azo2-OCH3 sample, its birefringence values are the highest. For 532 nm light, the absorbance of the T-azo2-OCH3 sample is the highest among the three studied samples, and the birefringence value is the highest as well; however, the ratio between the two parameters is the lowest.

Table 3. Maximum birefringence values, absorbance at the excitation wavelengths and their ratio for T-azo-H, T-azo-OCH3 and T-azo2-OCH3 samples.

Figure 7 shows the chemical structure of the studied thiazole-azo dyes. The synthesis procedure for T-azo-H, T-azo-OCH3 and T-azo2-OCH3 is described elsewhere [8,9,20,34].
Preparation of Thin Films
The standard procedure was used to prepare thin films of the studied thiazole-azo dyes dispersed in the PMMA (poly(methyl methacrylate)) matrix using a spin-coating method [8,9]. THF solutions containing PMMA and the thiazole-azo dyes were prepared first. PMMA was purchased from Sigma-Aldrich and used as received. Films were formed on glass substrates by spin coating with a spinning time of 60 s. After that, the films were baked at 60 °C for 3 h in a vacuum chamber. The thickness of the samples was in the range of 900-1300 nm.
UV-Vis Absorption
The absorption spectra of all studied thin layers of heterocyclic thiazole-azo compounds dispersed in the PMMA matrix were measured with a spectrometer (Shimadzu UV-1800) in the range 350-600 nm.
Photoinduced Birefringence Measurements
Photoinduced birefringence measurements were performed for 405 nm, 445 nm and 532 nm excitation wavelengths. The experimental configuration used in the studies with violet and blue irradiation was presented elsewhere [35]. The intensity of each beam (from diode lasers) was 100 mW/cm2. The time evolution of the birefringence build-up and its decrease after switching the excitation light on and off was probed with a 690 nm beam. The excitation and probe beams were linearly polarized in directions forming an angle of 45°. The measurement technique is based on detecting the intensity of the probe beam after it passes through the thin film placed between two crossed polarizers [35]. The details of the experimental configuration were described elsewhere [36][37][38], whereas Figure 8 shows the experimental configuration for photoinduced birefringence under excitation with a CW laser (λexc. = 532 nm, 0.365 mW, I ≈ 29 mW/cm2). The details of this setup were described elsewhere [24].
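For context, when the film's optic axis is oriented at 45° between crossed polarizers, the probe transmission is commonly related to the birefringence by T = sin²(πΔnd/λ). The short sketch below inverts that standard relation to estimate Δn; this is a generic assumption about crossed-polarizer setups, not a formula quoted from the paper.

```python
import numpy as np

def birefringence_from_transmission(T, thickness_nm, wavelength_nm):
    """Invert T = sin^2(pi * d * dn / lambda) for a probe between crossed
    polarizers (film axis at 45 degrees); valid below the first transmission
    maximum, which holds for the small dn typical of host-guest films."""
    T = np.clip(T, 0.0, 1.0)
    return wavelength_nm * np.arcsin(np.sqrt(T)) / (np.pi * thickness_nm)

# Example: 1% relative transmission, 1000 nm thick film, 690 nm probe.
dn = birefringence_from_transmission(0.01, 1000.0, 690.0)
print(f"photoinduced birefringence ~ {dn:.4f}")
```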
Conclusions
Optical birefringence was induced in three heterocyclic thiazole-azo dyes with different substituents dispersed in a PMMA matrix upon polarized violet, blue and green irradiation. We found that both the substituents of the thiazole-azo dyes and the irradiation wavelength clearly influence the birefringence generation.
We noticed that the photoinduced birefringence response at 405 nm, 445 nm or 532 nm of most of the studied host-guest thin films of PMMA-thiazole-azo dyes with different substituents is similar, except for the thiazole-azo dye with the 2-methyl-5-benzothiazolyl substituent (i.e., T-azo2-OCH3). This molecule had the highest saturation level of birefringence compared to the other studied thiazole-azo dyes for all three irradiation wavelengths (i.e., 405 nm, 445 nm and 532 nm). This dye also exhibited the lowest relaxation after ceasing irradiation at the 405 nm and 445 nm wavelengths. We suppose that the high Δn value obtained for T-azo2-OCH3, despite its lower absorption at these wavelengths, can be attributed to the free space in the polymer created by the bulky T-azo2-OCH3 chromophores, giving them the opportunity to reorient.
The chemical structure is the main factor influencing the photoinduced behavior of the studied thiazole-azo dyes. The introduction of the thiazole-azobenzene unit into the PMMA matrix restricts the chromophore motions during the writing process. In host-guest polymers, the chromophores are typically more mobile, which can allow a faster inscription of Δn. Therefore, the appropriate design of thiazole-azo dyes can enhance the photoinduced birefringence properties, which contributes to their use in new photonic devices such as optical data storage.

Data Availability Statement: Data supporting the results of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
Sample Availability: Samples of the compounds T-azo-H, T-azo2-OCH3 and T-azo-OCH3 are available from the corresponding authors upon reasonable request.
|
2022-10-12T15:17:18.325Z
|
2022-10-01T00:00:00.000
|
{
"year": 2022,
"sha1": "1fea12848e2ac2051e3280b40484c324a23f195a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/19/6655/pdf?version=1665198648",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5484ae5dc4bbc9698d27c7c2639d45e71c2b6cea",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
259012731
|
pes2o/s2orc
|
v3-fos-license
|
Assessing Consumer Gratification in financial services
Establishing and managing relationships with clients is crucial in the new, aggressive marketplace where businesses must compete for survival. Client satisfaction is the cornerstone of any lasting connection. The evaluation is given even more weight in the service sector, since the construct of satisfaction is linked to the relationship one has with the service provider. This paper provides a methodology for measuring how satisfied consumers are with banking services, with a particular emphasis on an Indian bank.
Introduction
Companies need to develop more distinctive interactions with customers in the globalized, fiercely competitive environment (Mehta et al., 2012). According to research, organizations believe that implementing customer relationship management (CRM) can help them thrive in new market conditions by enhancing their interaction with their clients (Mendoza et al., 2007).
Successful CRM implementation projects that have already been completed offer evidence for the theory as well as significant competitive advantages. (Kotorov, 2003). As a result, other businesses are pressured to follow suit.
Client satisfaction is the cornerstone of any relationship-building effort. It can be used to establish consumer loyalty and create a "stable, mutually beneficial, and long-term connection." In the previous two decades, measuring consumer satisfaction has grown in popularity, and as a result, market research firms have seen significant revenue growth. This appeal results from the realization that customer satisfaction is a reliable indicator of future buying intent, referrals, and loyalty. Consumer satisfaction, rather than service quality, is a stronger predictor of intentions to repurchase, as noted by Ravald and Grönroos. Consumers who are satisfied are more likely to stay in touch with the business, purchase more goods or services, and do so more frequently. Peyton et al. also note the possibility that other items in the product line will be accepted (A. Bansal & Bansal, 2012).
The confirmation model is a popular theory for how customers build a sense of contentment (Dash, 2015). The decision to purchase a good or service is made by the customer at a certain point in time. The perceived effectiveness of the product triggers a comparison process in which it is measured against one or more standards, such as expectations. Confirmation, positive disconfirmation, and negative disconfirmation are the three possible outcomes. When the performance is judged to be merely up to par, a neutral feeling of satisfaction follows. If the performance surpasses the customer's expectations, this is known as positive disconfirmation, and the customer is satisfied. Poor performance results in negative disconfirmation and, thus, discontent (Khan, 2013).
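A minimal sketch of the comparison logic behind the confirmation/disconfirmation model follows; the function name, rating scale and tolerance are illustrative assumptions, not part of the cited studies.

```python
def disconfirmation_outcome(perceived: float, expected: float, tol: float = 0.25) -> str:
    """Classify a purchase experience under the confirmation/disconfirmation
    model: performance roughly equal to expectations -> confirmation (neutral
    satisfaction); above -> positive disconfirmation (satisfaction); below ->
    negative disconfirmation (dissatisfaction). Scores are e.g. 1-5 ratings."""
    if perceived > expected + tol:
        return "positive disconfirmation (satisfied)"
    if perceived < expected - tol:
        return "negative disconfirmation (dissatisfied)"
    return "confirmation (neutral satisfaction)"

print(disconfirmation_outcome(perceived=4.5, expected=3.8))
```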
The confirmation/disconfirmation model is widely accepted in the literature, although the exact definition of satisfaction remains up for debate. Research has mostly concentrated on cognition, that is, the process of comparing something to a standard. Contrarily, the feeling of satisfaction is associated with an affective state of mind. A third view holds that satisfaction results from a process that is both emotional and cognitive (Clerfeuille et al., 2008).
Satisfaction is defined by Garbarino and Johnson as "either an immediate afterwards evaluative judgement or a personal response regarding the firm utilized in the most current transaction" (Garbarino & Johnson, 1999). This paper adopts the same viewpoint.
In order to identify the most important factor among others, such as various service qualities, strategically significant service parameters, and general preference for banks or financial products, Oppewal and Vriens (2000) employed the SERVQUAL model. Chinwuba (2013) used the SERVQUAL methodology with 117 respondents to measure the level of customer satisfaction and their perception of service quality. They discovered a positive but not significant association between the confidence, compassion, and responsiveness characteristics and client satisfaction, while reliability showed an adverse and likewise non-significant connection with customer satisfaction.

Siddiq (2011) made an effort to pinpoint the connections and crucial elements between the level of service, customer satisfaction, and customer loyalty in Bangladesh's retail banking market. In the retail banking industry, he discovered that all service quality characteristics are closely related to client loyalty and satisfaction. The strongest positive association is seen between customer satisfaction and empathy, while the weakest positive correlation is between customer satisfaction and tangibility. In terms of the various technologies offered to consumers and the potential growth of electronic channels in retail banking, Jani (2012) identified the relative critical aspects influencing the areas of advantage and disadvantage of banks in the public and private sectors.
II. Contentment in the context of financial services
Unlike goods, services cannot actually be evaluated by customers prior to the service encounter.
The key to assessing service quality is the relationship between the service supplier and the client, the so-called service encounter (Gil, 2008). The client gains a sense of how the business delivers its services throughout these interactions. His or her interactions with the business, its procedures, and its staff shape the way the service is perceived. As a result, service interactions form the foundation of client satisfaction.
Providers of services have numerous possibilities to control the interactions that make up the service encounter (Wirtz, 1994). They may recruit, train, and manage service staff; create and maintain the service atmosphere; and carefully target, engage, and educate consumers. They may also design and manage the service production process.
In an effort to determine how the type of service affected the relative relevance of the different service quality dimensions, Sheetal et al. (2004) found that in the marketing of banking services, tangibility and empathy rank last in importance. According to Agarwal (2009), factors such as the type of account a customer has, their age, their occupation, and other factors affect how they use e-banking services. The research unequivocally highlights the necessity for banks to understand that the financial services and products provided over the Internet must not only be tailored to fulfil customers' wants, tastes, security expectations, and quality standards at the present time, but must also induce consumers to request and use e-banking on a larger scale in the years to come.
Typically, satisfaction in financial services is viewed as a multidimensional construct. Because the level of service and consumer satisfaction are tied to one another, banks must increase their service quality in order to guarantee client satisfaction. This means that clients will be satisfied only if banks deliver service in line with their expectations. In order to adapt to this shifting business environment, banks must both keep their existing clientele and draw in new ones by offering higher-quality services.
III. Measuring client satisfaction
The following section of the paper gives the results of a survey that was carried out in the early months of 2009 to gauge client happiness for an Indian bank.
The goal was to ascertain the degree of client satisfaction with a particular Indian bank. The study, based on a survey, is an example of qualitative research; as a result, primary data was gathered (Ashima, 2016). Clients were also requested to assess their overall satisfaction with the bank in order to have a basis for comparison. Additionally, they disclosed their typical wait time and frequency of usage of financial services.
IV. The outcomes
According to the demographic information, 25% of respondents are between the ages of 16 and 25; 22% are between the ages of 26 and 40; 27% are between the ages of 36 and 45; and 26% are older than 55 (Figure 1). The bank's current attempts to attract and keep clients from the young segment are consistent with the significant number of young customers. Figure 3 shows the frequency of use of banking services: 32% of the respondents use services monthly or even less often, 20% of respondents use the financial services offered by banks fortnightly, and 18% of clients use banking services very rarely. These findings are insufficient to suggest an appropriate course of action for the bank branch.
Frequency of using banking services
As a result, it is crucial to attempt to check "under the hood." It is well known that the majority of clients merely state their satisfaction with the service they receive. Additionally, these "satisfied" consumers aren't the ones promoting the bank. They are content and quiet.
Customers that express extreme emotions (extremely pleased or very dissatisfied) are therefore more interesting. It is important to consider the extreme responses, since unsatisfied consumers are more likely to spread negative word of mouth than satisfied ones (S. Bansal & Malik, 2015).
Only positive extreme responses were recorded for attention, competence, and comprehension of demands; courtesy registers the highest scores but also attracts some adverse extreme responses.
Only one out of seven clients said they were extremely dissatisfied, and the ratio of negative to positive extreme responses is still in the bank employees' favor. A chi-square test was used to determine whether there is a relationship between satisfaction with the competence of the employees and the wait time. The findings (Pearson chi-square p = 0.049) show an inverse dependence between the two variables: the duration of the wait decreases as client satisfaction with competence increases.
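For readers unfamiliar with the procedure, the sketch below shows how such a chi-square test of independence is typically computed on a satisfaction-by-wait-time cross-tabulation; the counts are hypothetical, since the underlying survey data are not reproduced here.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation (counts): rows = satisfaction with staff
# competence (low / medium / high), columns = reported wait time
# (short / long). The real survey responses are not published with the paper.
observed = np.array([
    [ 5, 15],   # low satisfaction
    [20, 25],   # medium satisfaction
    [35, 10],   # high satisfaction
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A p-value below 0.05 (the paper reports 0.049) would indicate that
# satisfaction with competence and wait time are not independent.
```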
Average levels of satisfaction were also established for the other seven aspects of how satisfied customers are with the banking institution's offerings (execution time, accessibility of the offices, price/quality relationship, the bank's responsiveness to complaints, promotion of offerings, interaction with the bank, and operating hours). These numbers are quite concerning. Only two characteristics, execution time and office accessibility, register average levels above 3.5, which indicates overall satisfaction. The remaining five characteristics reflect dissatisfaction or hesitation.
V. Conclusion
According to the study that was completed, there are some issues that come up when trying to gauge client happiness (I. Bansal & Sharma, 2008).
The first step is to define the dimensions of satisfaction in accordance with the nature of the business and the particulars of the organisation. There are disparities even within the banking framework, such as in the range of services or the contact process. Second, customers frequently click the "I'm not sure" box or express satisfaction. Thus, the scale for any future surveys that the bank conducts should remove replies that fall somewhere in the middle, forcing the consumer to take a side (Nerkar, 2016).
In the years to come, direct attention should be paid to the particulars of banking activity, such as credit and deposit; private, retail, and corporate activity, in order to highlight the differences between the different categories of clients and properly solve their problems. The survey's expansion to the bank's other locations may reveal variations in the clients' levels of satisfaction.
This raises the question of whether some issues need to be handled top-down by the bank's central office, or whether customer satisfaction depends on branch-level activity and procedure management.
The advantages of such surveys include a summary of the areas where the branch needs to improve as well as an improved understanding of the clients. In this way, the bank has the potential to establish a solid rapport with its customers and attain a better level of satisfaction among clients.
This might not stop clients from making frantic withdrawals during a major financial crisis, but it might help keep them from switching banks at a time when, for the first time ever, the bank depends on them (Mishra & Gauba, 2014).
|
2023-06-02T15:05:29.252Z
|
2018-01-01T00:00:00.000
|
{
"year": 2018,
"sha1": "23dab9aad3516c4ac2ab2eb2516f5a43fcdb97d4",
"oa_license": "CCBY",
"oa_url": "https://turcomat.org/index.php/turkbilmat/article/download/13793/9916",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "030fd13d9081517ab48369b156b6b69c8cc1dfff",
"s2fieldsofstudy": [
"Business",
"Computer Science",
"Economics"
],
"extfieldsofstudy": []
}
|
3724165
|
pes2o/s2orc
|
v3-fos-license
|
Effects of Conjugated Linoleic Acid and Metformin on Insulin Sensitivity in Obese Children: Randomized Clinical Trial
Context
Insulin resistance precedes metabolic syndrome abnormalities and may promote cardiovascular disease and type 2 diabetes in children with obesity. Results of lifestyle modification programs have been discouraging, and the use of adjuvant strategies has been necessary.
Objective
This study aimed to evaluate the effects of metformin and conjugated linoleic acid (CLA) on insulin sensitivity, measured via euglycemic-hyperinsulinemic clamp technique and insulin pathway expression molecules in muscle biopsies of children with obesity.
Design
A randomized, double-blinded, placebo-controlled clinical trial was conducted.
Setting
Children with obesity were randomly assigned to receive metformin, CLA, or placebo.
Results
Intervention had a positive effect in all groups. For insulin sensitivity Rd value (mg/kg/min), there was a statistically significant difference between the CLA vs placebo (6.53 ± 2.54 vs 5.05 ± 1.46, P = 0.035). Insulinemia and homeostatic model assessment of insulin resistance significantly improved in the CLA group (P = 0.045). After analysis of covariance was performed and the influence of body mass index, age, Tanner stage, prescribed diet, and fitness achievement was controlled, a clinically relevant effect size on insulin sensitivity remained evident in the CLA group (37%) and exceeded lifestyle program benefits. Moreover, upregulated expression of the insulin receptor substrate 2 was evident in muscle biopsies of the CLA group.
Conclusions
Improvement of insulin sensitivity, measured via euglycemic-hyperinsulinemic clamp and IRS2 upregulation, favored patients treated with CLA.
Introduction

Insulin resistance precedes metabolic syndrome abnormalities and may promote cardiovascular disease and type 2 diabetes in individuals with obesity (2). Lifestyle modification through healthy food selection and consumption, a regular physical activity program, and optimal sleep hygiene have been proposed as the gold standard of care in these individuals. Unfortunately, the compliance and success of these strategies are usually disappointing (3,4), making pharmacological approaches somewhat necessary. Metformin (MET) is a biguanide used for the treatment of type 2 diabetes in children and adolescents due to its ability to decrease hepatic glucose production and increase peripheral insulin sensitivity. MET has been proposed as an adjuvant treatment in pediatric obesity efforts, especially in the presence of insulin resistance and its comorbidities. MET has beneficial effects on weight reduction and insulin resistance in obese nondiabetic individuals (5,6).
Conjugated linoleic acid (CLA) is a group of isomers of linoleic acid, which are synthesized in the cud of ruminant animals by fermentative bacteria (7). CLA is present in dairy products, meat, and fat from beef and lamb. The most common CLA isomer contained in these products is cis-9,trans-11, which can be commercially synthesized from linoleic acid-rich oils and prepared as a 50% mixture with the trans-10,cis-12 isomer (8). Several studies have acknowledged the beneficial effects of CLA isomers on body composition (9,10), immune response (11), bacterial-induced colonic inflammation (12,13), as well as improvements in insulin sensitivity and lipid metabolism in experimental animals and humans (9). Additionally, CLA purportedly reduces fatty acid synthesis in adipocytes, suggesting that this supplement decreases fat deposition, directly contributing to an improvement in body composition in adults and children (14). Nonetheless, the impact of CLA on human health and disease is still controversial and research on this matter continues.
Based on the current obesity frequency in Mexico, and considering the limited and discouraging outcomes of intervention programs, adjuvant strategies must be installed. The objective of the present study was to evaluate the effects of MET and CLA on insulin sensitivity, measured via the euglycemic-hyperinsulinemic clamp technique (EHCT), in children with obesity.
Subjects and Methods
We performed a randomized, double-blinded, 16-week placebo (PLB)-controlled trial in the Pediatric Obesity Clinic at the Pediatrics Department of Hospital General de México (Mexico City, Mexico).
Patients with obesity aged 8 to 18 years who had not previously received an intervention and had optimal psychological health were included in the study. Obesity was defined using Centers for Disease Control and Prevention criteria [body mass index (BMI) ≥ 95th percentile]. Exclusion criteria included BMI ≥ 35 kg/m2, genetic or endocrine obesity, a systemic illness, diabetes or prediabetes (according to American Diabetes Association criteria) (15), and the use of weight loss medications that could modify lipid and glucose concentrations. The study (no. DI/11/311/04/108) was approved by the hospital's institutional review board; additionally, it was registered in ClinicalTrials.gov (no. NCT02063802).
All participants were included in the standardized healthy lifestyle program addressed to children and their parents. This 4-month program consisted of a monthly visit that included a 1-hour structured physical activity session (coordinated by a physical trainer), followed by a psychoeducational group session. The following information was presented to all participants: (a) description of a balanced and healthy nutrition, (b) emotion-related eating behavior and family support, (c) the benefits of physical activity, and (d) obesity-related comorbidities. These sessions were coordinated by nutritionists, psychologists, pediatricians, pediatric endocrinologists, and a physical trainer. Afterward, all patients held a medical consultation to evaluate their anthropometry and medical condition, as well as their progression, acquisition of skills, and compliance with the program. At the beginning of the intervention, a complete nutritional evaluation was performed and a diet based on age, pubertal stage, and physical activity requirements, according to the World Health Organization and Food and Agricultural Organization guidelines, was prescribed (16). The recommended diet composition was 55% carbohydrates, 20% proteins, 25% lipids (<7% saturated fat, <300 mg/d cholesterol, and <1% trans fat), and <3 g of salt per day. Participants filled out a 24-hour nutritional recall questionnaire during the 3 days prior to their follow-up appointment to assess diet compliance. All patients were encouraged to participate in sports activities at least 5 days a week and for a minimum of 60 minutes.
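As a worked example of the prescribed energy split, the snippet below converts the 55/20/25% distribution into grams per day using the usual 4 kcal/g and 9 kcal/g conversion factors; the 1800 kcal figure is a hypothetical prescription, not a value reported in the study.

```python
def macronutrient_grams(total_kcal: float) -> dict:
    """Convert the prescribed 55/20/25 % energy split into grams per day,
    using the usual 4 kcal/g for carbohydrate and protein and 9 kcal/g for fat."""
    return {
        "carbohydrate_g": round(total_kcal * 0.55 / 4, 1),
        "protein_g": round(total_kcal * 0.20 / 4, 1),
        "fat_g": round(total_kcal * 0.25 / 9, 1),
    }

# Example for a hypothetical 1800 kcal/day prescription.
print(macronutrient_grams(1800))
# {'carbohydrate_g': 247.5, 'protein_g': 90.0, 'fat_g': 50.0}
```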
To evaluate physical activity compliance, we tested fitness achievement using the Harvard step test modified for the pediatric population and a physical fitness score was calculated (17); evaluations were applied at baseline and at the postintervention state. The overall intervention compliance was evaluated through anthropometric, metabolic, and fitness parameter modifications, as well as through the acquisition of healthy behavior knowledge.
Clinical trial design
This trial was conducted in accordance with the Declaration of Helsinki and adhered to Good Clinical Practice Guidelines issued by the International Conference on Harmonization. The children and their parents provided written informed assent/consent. Eligible patients were included in the lifestyle intervention program (LIP) and randomized to receive either MET (1 g/d), CLA containing 50:50 isomers c9,t11 and t10,c12 (3 g/d), or PLB (1 g/d) 3 times a day for 16 weeks. Visits were scheduled monthly. Diet, exercise, and medication compliance, as well as anthropometric variables, were recorded during each visit. The final evaluation was similar to baseline; EHCT and skeletal muscle biopsies were performed at the postintervention state. Patients were eliminated when they showed poor compliance to medication (<80% or >100%) or intolerance, or when ≥1 workshop sessions were missed.
Anthropometric and metabolic evaluation
Baseline evaluation consisted of complete anthropometric and body composition analysis. Height and weight were obtained with participants in light clothes and without shoes, using a standardized stadiometer and mechanical scale. A 12-hour fasting blood sample was drawn. Laboratory measurements included glucose, lipid profile, and aminotransferases, which were analyzed enzymatically with the use of commercially available reagents. Insulin was measured using the Bio-Plex Pro human diabetes insulin immunoassay by Bio-Rad (Hercules, CA). Fasting insulin resistance and sensitivity surrogate indexes were calculated as follows: homeostatic model assessment of insulin resistance (HOMA-IR) = [fasting plasma insulin (µU/mL) × fasting plasma glucose (mmol/L)]/22.5, and quantitative insulin sensitivity check index (QUICKI) = 1/[log fasting plasma insulin (µU/mL) + log fasting plasma glucose (mg/dL)].
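The two fasting indexes can be computed directly from the formulas given above; a small sketch follows (the example values are hypothetical, not patient data).

```python
import math

def homa_ir(insulin_uU_ml: float, glucose_mmol_l: float) -> float:
    """HOMA-IR = [fasting insulin (uU/mL) * fasting glucose (mmol/L)] / 22.5."""
    return insulin_uU_ml * glucose_mmol_l / 22.5

def quicki(insulin_uU_ml: float, glucose_mg_dl: float) -> float:
    """QUICKI = 1 / [log10(fasting insulin, uU/mL) + log10(fasting glucose, mg/dL)]."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Example: fasting insulin 18 uU/mL, fasting glucose 90 mg/dL (= 5.0 mmol/L).
print(f"HOMA-IR = {homa_ir(18, 5.0):.2f}")   # 4.00
print(f"QUICKI  = {quicki(18, 90):.3f}")     # 0.312
```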
Clamp procedure
A 2-hour euglycemic-hyperinsulinemic clamp was performed (18) and executed during a 12-hour fasting condition. Intravenous catheters were inserted in the right and left forearm vein, one in a retrograde direction, and warmed in a box that was designed for this purpose (Kepis Keipis One Device, unpublished data). This device allowed the introduction of the complete forearm and maintenance of adequate high temperature and humidity that provided an arteriovenous shunt for blood sample supply while avoiding burns. The additional vein was used to infuse insulin and 20% dextrose solution at variable rates. Intravenous crystalline insulin (Humulin; Eli Lilly & Co., Indianapolis, IN) was used. A priming insulin dose of 120 mIU/m 2 of body surface (bs) per minute at time 0 was administered after 1 hour of baseline and during the first 5 minutes. Thereafter, the infusion was gradually reduced to 60 mIU/m 2 bs/min up to minute 10 and maintained through the end of the clamp. Glucose infusion started at minute 5 (5 mg/kg/min in all the patients according to information obtained during the standardization procedure). The samples were obtained every 5 minutes, and glucose infusion was dynamically modified to clamp plasma glucose at 85 to 95 mg/dL.
The rate of glucose disposal (Rd) was calculated and adjusted during the last 30 minutes of the clamp when plasma glucose stabilized at a fixed range.
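In the steady state of a euglycemic clamp, the glucose disposal rate is commonly approximated by the mean exogenous glucose infusion rate over the stable window, normalized to body weight. The sketch below illustrates that generic calculation; the function, pump readings and weight are assumptions for illustration, not the authors' exact computation.

```python
def rd_mg_per_kg_min(infusion_rates_ml_h, dextrose_pct, weight_kg):
    """Approximate the glucose disposal rate (Rd, mg/kg/min) as the mean
    glucose infusion rate during the steady-state window of the clamp.
    infusion_rates_ml_h: pump readings (mL/h) sampled over the last 30 min.
    dextrose_pct: infusate concentration, e.g. 20 for 20% dextrose (20 g/100 mL).
    """
    mean_ml_per_min = sum(infusion_rates_ml_h) / len(infusion_rates_ml_h) / 60.0
    mg_glucose_per_ml = dextrose_pct * 10.0       # 20% -> 200 mg/mL
    return mean_ml_per_min * mg_glucose_per_ml / weight_kg

# Hypothetical example: pump readings every 5 min over the final 30 minutes.
print(f"Rd ~ {rd_mg_per_kg_min([95, 100, 105, 100, 98, 102], 20, 55):.2f} mg/kg/min")
```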
Primary endpoints included the postintervention insulin resistance state defined as the Rd value (mg/kg/min) measured via EHCT, as well as the evaluation of surrogate indexes of insulin resistance and sensitivity (insulinemia, HOMA-IR, and QUICKI). The expression of insulin receptor substrates IRS1, IRS2, and IRS4 in muscle biopsies complemented the insulin resistance study. Secondary objectives were modifications of anthropometric and metabolic parameters. Moreover, medication safety and tolerability were important outcomes.
Muscle biopsies
The participation of MET and CLA on the insulin signaling pathway was explored with muscles biopsies from the vastus lateralis performed under local anesthesia after 16 weeks of intervention. An incision with a no. 11 surgical blade was made to insert an 8 swg (4.0 mm) Bergstrom needle (Ultramed, Milton, ON, CA).
RNA isolation
Total RNA was isolated from biopsies samples using an RNeasy fibrous tissue minikit for muscle and an RNeasy lipid tissue minikit for adipose tissue (Qiagen, Valencia, CA) following the manufacturer's protocol. RNA concentration was determined using a NanoDrop 1000 spectrophotometer (Thermo Scientific, Waltham, MA). Integrity was evaluated by agarose gel electrophoresis using a vertical chamber Enduro (Labnet International, Edison, NJ) and the UltraSlim LED Illuminator SLB-01 (Maestrogen, Las Vegas, NV).
Genetic expression of insulin receptors
The genetic expression patterns of IRS1, IRS2, and IRS4 were studied in 14 and 17 muscular tissue biopsies obtained from the MET and CLA groups, respectively. A quantitative reverse transcription polymerase chain reaction array (human insulin signaling pathway, RT² Profiler, PAHS-030Z, Qiagen) was performed. Complementary DNA was prepared using an RT² polymerase chain reaction array first-strand kit (Qiagen) according to the manufacturer's instructions. Normalization was computed with ACTB, B2M, GAPDH, HPRT1, and RPLP0. The expression patterns observed in the MET and CLA groups were compared with muscular tissue samples from the PLB group (n = 17) used as calibrator. The differential gene expression was calculated using the Qiagen polymerase chain reaction analysis software through the 2^−ΔΔCt analysis, and a 2.5-fold change cut-off (P < 0.05) was considered.
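The 2^−ΔΔCt (Livak) calculation named above can be expressed compactly as follows; the Ct values are hypothetical and only illustrate the arithmetic, not measured data.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_calibrator, ct_ref_calibrator):
    """Relative expression by the 2^-ddCt (Livak) method.
    dCt  = Ct(target) - Ct(reference gene), computed per group;
    ddCt = dCt(sample) - dCt(calibrator); fold change = 2 ** (-ddCt)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    return 2.0 ** (-(d_ct_sample - d_ct_calibrator))

# Hypothetical Ct values: IRS2 in a CLA-group biopsy vs the placebo calibrator,
# normalized against a housekeeping gene (e.g. GAPDH).
print(f"IRS2 fold change ~ {fold_change_ddct(24.0, 18.0, 25.6, 18.2):.2f}")  # ~2.64
```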
Statistical analysis
Descriptive statistics for all numerical variables are reported as the mean and standard deviation, or as the standard error of the mean (SEM) for contrasts, as indicated in the text or figures. Contrasts among treatment groups were assessed by analysis of variance and by analysis of covariance (ANCOVA) for adjustment by confounding variables. Post hoc analyses, with correction for multiple contrasts by Fisher's least significant difference, were performed. The η² effect sizes obtained from the ANCOVAs were transformed to Cohen's d. χ² analyses were also performed to evaluate differences in proportions among groups. SPSS software version 22 (IBM, Armonk, NY) was used to conduct the statistical analyses. A probability of α error of <5% was considered statistically significant.
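The η²-to-Cohen's-d transformation mentioned above can be sketched as follows; the conversion shown is the common textbook formula for a two-group contrast, d = 2·sqrt(η²/(1 − η²)), which may differ slightly from SPSS defaults, and the example value is illustrative rather than a study estimate.

```python
# Hedged sketch: converting an eta-squared effect size (from ANCOVA)
# to Cohen's d for a two-group contrast. The formula is the common
# textbook conversion; the exact SPSS pipeline may differ.
import math

def eta_squared_to_cohens_d(eta_sq):
    if not 0 <= eta_sq < 1:
        raise ValueError("eta squared must lie in [0, 1)")
    return 2 * math.sqrt(eta_sq / (1 - eta_sq))

# Illustrative value only (not a study estimate):
print(round(eta_squared_to_cohens_d(0.12), 2))
```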
Participants and demographics
Enrollment occurred from August 2012 to July 2014. One hundred ninety-eight individuals were potentially eligible; 83 met the inclusion criteria, signed consent and assent forms, and were randomized to receive MET (n = 24), PLB (n = 30), or CLA (n = 29). Fifty patients completed the 16-week intervention; one external outlier (PLB group) was identified and excluded during the analysis. In one case (CLA group), the EHCT could not be performed for technical reasons. For this reason, we report the results of 48 completed clamps (Fig. 1). During the study, one patient (PLB group) was eliminated when a preexisting lipoma was surgically removed without the research team being notified; a second patient (PLB group) with psychosocial anomalies and a suspected pregnancy was also eliminated. Twenty-nine patients were eliminated because of poor medication compliance or lack of interest (MET, n = 10; PLB, n = 10; CLA, n = 9). Pubertal development (Tanner stage 1, defined as prepubertal; Tanner stages 2 to 3, defined as early puberty; and Tanner stages 4 to 5, defined as late puberty) was assessed by clinical inspection of the mammary glands, testicular volume, and pubic hair. Demographic and baseline characteristics were similar among the groups (Table 1).
Anthropometric and metabolic results
No significant differences were observed in baseline anthropometric and metabolic parameters, or in insulin resistance as measured by surrogate indexes (fasting insulinemia, HOMA-IR, and QUICKI). The distribution of Tanner stage did not differ among the groups (χ² test, P = 0.415).
Overall, the intervention had a positive effect on weight, height, BMI, and waist circumference, as well as on surrogate indexes of insulin resistance and on the physical fitness score, in all of the groups (Table 2). No statistically significant differences in these parameters were observed between the treatment groups, and no differences were evident when comparing surrogate indexes of insulin resistance among the groups in the postintervention phase.
Insulin sensitivity measured by EHCT
The primary outcome, insulin sensitivity calculated as the Rd value, showed a significant difference between the CLA and PLB groups (6.53 ± 2.54 vs 5.05 ± 1.46, P = 0.035; Cohen's d effect size of 74%) (Table 3). Moreover, fasting insulinemia (Fig. 2) and HOMA-IR (Fig. 3) significantly decreased in the CLA group (P = 0.04). The adjusted analysis controlling for modifying or confounding variables (BMI, change in BMI, age, Tanner stage, and dietary and physical program compliance) showed that Tanner stage had an independent effect on the Rd value (P < 0.001). When ANCOVA was performed with these variables controlled, no statistically significant differences in Rd value were found among the three groups. Nonetheless, a clinically relevant effect size remained evident when comparing the CLA and PLB groups (Cohen's d effect size of 37%), suggesting a decrease in insulin resistance in patients receiving CLA. The effect size of MET vs PLB was 10% (5.72 ± 3.1 vs 5.38 ± 3) and that of MET vs CLA was 20% (5.72 ± 3.1 vs 6.34 ± 2.8), favoring the CLA-treated group; these effect sizes were not clinically relevant. We also analyzed the changes between initial and final serum triglycerides and high-density lipoprotein (HDL) cholesterol by ANCOVA. For these variables, neither Tanner stage nor change in BMI modified the postintervention levels, whereas baseline HDL cholesterol and triglyceride levels did influence the final levels.
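As a worked example of the between-group effect sizes quoted above, Cohen's d can be computed from the reported group means and standard deviations using a pooled SD; this is a generic sketch of that arithmetic, not the authors' exact covariate-adjusted computation, and it ignores unequal group sizes.

```python
# Hedged sketch: Cohen's d from group means and SDs with a pooled SD.
# This ignores covariate adjustment and unequal group sizes, so it only
# approximates the effect sizes reported from the ANCOVA models.
import math

def cohens_d(mean1, sd1, mean2, sd2):
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

# Rd values reported for CLA vs PLB (mg/kg/min):
print(round(cohens_d(6.53, 2.54, 5.05, 1.46), 2))  # roughly 0.7, near the quoted 74%
```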
Lipid profile and adverse effects
Patients in the CLA group had a statistically significant increase in serum triglycerides compared with MET (169.8 ± 69 vs 113.1 ± 27, P = 0.027), but not compared with PLB (P = 0.13). Moreover, HDL levels were lower in the CLA group than in the MET group (36.8 ± 5.4 vs 44.86 ± 8.7, P = 0.009), whereas there were no differences compared with PLB (P = 0.26). The main differences favoring MET treatment with respect to the lipid profile were evident only in comparison with CLA.
The most commonly reported nonserious adverse events were abdominal pain, diarrhea, dizziness, headache, nausea, and gastritis. The frequency and severity of symptoms were similar in the three groups (analysis of variance, P = 0.314; χ² test, P = 0.28). Rates of noncompliance and/or dropout did not differ between groups. Additionally, Little's missing completely at random test (P > 0.13) was conducted to verify that patient elimination was random and homogeneous across all of the groups.
Analyses of muscle biopsies
The analyses of IRS1, IRS2, and IRS4 revealed that only IRS2 was modulated in the CLA group, showing a 3.56-fold increase compared with the control group (P = 0.043). The remaining genes did not show statistically significant differences. These data support the notion that CLA acts on the molecular insulin pathway through the upregulation of IRS2, a mechanism that might be related to the greater glucose uptake observed in CLA-treated patients.
Discussion
This study supports that CLA improves insulin sensitivity, as measured by EHCT, in a group of obese children, exceeding the benefit of the LIP alone. Because the prevalence of metabolic syndrome in our pediatric clinic averages 35% and confers an 11-fold risk of diabetes during early adult life (19), exploring conventional and pharmacological strategies that improve insulin sensitivity is imperative. Recent studies have revealed that MET has important effects on insulin sensitivity when compared with PLB, and its use in nondiabetic obese individuals has become widespread (6,20,21). A systematic review conducted by Brufani et al. (6) revealed a significant but moderate benefit of MET on weight reduction and fasting insulin sensitivity compared with PLB or lifestyle interventions alone. Nonetheless, when these outcomes were evaluated by the frequently sampled intravenous glucose tolerance test (22) or the hyperglycemic clamp technique (23), no significant differences were reported. Wiegand et al. (5) demonstrated in a randomized PLB-controlled trial a beneficial effect of MET on the insulin sensitivity index in obese, insulin-resistant adolescents; however, no differences in body composition, weight, or BMI were found. In our study, we found a significant improvement in all anthropometric parameters, including weight, BMI, waist circumference, and body composition (fat mass and fat-free mass, data not shown); nonetheless, these results were not significantly different among the treatment groups. To our knowledge, no randomized PLB-controlled trial using EHCT had previously evaluated the benefits of MET on insulin sensitivity in children. In the present study, we found no difference in Rd value (mg/kg/min) between MET and PLB in the postintervention period. These data are consistent with the final results of the Diabetes Prevention Program Research Group (24), which showed that diabetes incidence was reduced more by lifestyle intervention than by PLB. Nonetheless, improvements in BMI, waist circumference, HDL cholesterol, and triglycerides, with considerable effect sizes (72%, 65%, 37%, and 55%, respectively), favored patients treated with MET in our study.
Several studies have proposed beneficial effects of CLA isomers on body composition, inflammation, and insulin sensitivity, promoting differentiation, lipid metabolism regulation, and apoptotic mechanisms in adipocytes (10)(11)(12). Interestingly, evidence suggests that the trans-10,cis-12 isomer of CLA might induce insulin resistance, whereas the CLA mixture has beneficial effects on body composition and insulin sensitivity. Risérus et al. (25) demonstrated that subjects treated with the trans-10,cis-12 isomer showed increases in insulin and glucose and decreases in HDL and in insulin sensitivity measured by 2-hour EHCT, compared with PLB- or CLA mixture-treated groups; no differences were observed between PLB- and CLA mixture-treated individuals. In our study, the CLA mixture was associated with a clinically relevant effect size (37%) on the Rd value of insulin sensitivity. Adjusting for the confounding variables included in the ANCOVA model attenuated the group differences. Among these adjustments, Tanner stage was the main variable that modified insulin sensitivity, and despite our small sample size, the effect size of CLA on the Rd value remained.
Although several CLA isomers might have deleterious effects on insulin sensitivity and resistance, certain mixtures may neutralize negative effects and even induce a synergistic positive response in these parameters, as well as in metabolic and anthropometric values. Some effects of the trans-10,cis-12 isomer promote blunted glucose uptake, which depends on decreased expression of glucose transporter 4 (GLUT-4) (26). Moreover, decreased incorporation of free fatty acids into cells may be induced by CLA, a mechanism that could be related to diminished expression of peroxisome proliferator-activated receptor-γ in adipocytes (27). CLA has been proposed as an accelerator of adipocyte apoptosis in mammals that liberates fatty acids and increases fatty acid oxidation elsewhere in the body (28). Evidence of deleterious effects has mainly been reported in animal models, in which the administered doses of CLA are higher than those used in humans (0.2 to 3 g/kg vs 0.015 to 0.1 g/kg, respectively) (29). These effects, if present in humans, could include hyperglycemia and hyperlipidemia, which may predispose an individual to diabetes and nonalcoholic fatty liver disease (30,31). However, few studies have been published on the molecular mechanisms of CLA in skeletal muscle that could explain the increased glucose uptake in our treated patients. On this matter, Vaughan et al. (32), using a rhabdomyosarcoma cell line, reported that omega-3 fatty acids and CLA activate mitochondrial proliferation and glycolytic activation pathways, probably through apoptosis induction and subsequent upregulation of GLUT-4. Furthermore, animal models have shown beneficial effects of CLA on insulin sensitivity and overexpression of peroxisome proliferator-activated receptor-γ and GLUT-4 in the muscle of supplemented rats (27). In the present study, we were able to demonstrate that postintervention IRS2 expression in skeletal muscle was significantly upregulated in CLA-treated patients. To our knowledge, no studies have been published on the effects of CLA or CLA-isomer mixtures on insulin receptor substrate molecules. Xu et al. (33) reported that MET upregulates insulin receptor β expression and downstream IRS2/phosphatidylinositol 3-kinase/Akt signaling transduction in an insulin-resistant rat model of nonalcoholic steatohepatitis and cirrhosis. Our results showed a marginal, nonsignificant (P = 0.055) IRS2 upregulation in MET-treated children. The insulin-sensitizing effects of MET have mainly been described in liver tissue. Although studies of CLA have mainly focused on its effects on adipose tissue, the present study demonstrates that molecular mechanisms, particularly IRS2 upregulation, might mediate insulin-sensitizing effects in skeletal muscle. This phenomenon could explain the increased tolerability of the glucose infusion rate in our patients treated with CLA throughout the EHCT. Moreover, the significant HOMA-IR improvement observed only in CLA-treated patients denotes a significant effect in skeletal muscle that promotes lower pancreatic insulin secretion. A recently published meta-analysis demonstrated that the deleterious effects of CLA consumption might be negligible, whereas its benefits, although subtle, seem to be clinically relevant regarding weight and fat mass loss (34). In our study, BMI improvement was significant in all groups, although not significantly different among them.
Nonetheless, MET displayed the largest effects on BMI (72%, compared with 43% in the PLB group and 41% in CLA-treated patients) and waist circumference (70%, compared with 60% in PLB and 30% in the CLA group). Total body fat did not improve in any group, but leptin levels significantly decreased in all patients (P < 0.014, data not shown).
Racine et al. (10) reported a clinical trial in a pediatric population randomly assigned to CLA (3 g/d, c9,t11 and t10,c12, 50:50) or PLB for 6 months, which showed a decrease in total body fat and a significant decrease in HDL cholesterol levels in CLA-treated patients. Our trial demonstrated a significant improvement in HDL cholesterol levels in PLB-treated patients (baseline vs postintervention, P = 0.045). In the CLA group, we noticed a decline in HDL cholesterol concentration that was not statistically significant when compared with PLB.
One limitation of this study was the high rate of participant withdrawal, together with the technical difficulties of the EHCT, both of which contributed to the small final number of participants; as a result, we did not have enough power to detect small effect sizes associated with the treatment.
The strength of this study lies in its design. Inclusion and, particularly, elimination criteria were strictly applied. Baseline characteristics of participants were similar in anthropometric and metabolic terms, particularly for the surrogate indexes of insulin resistance. Additionally, the main outcome was evaluated by the gold-standard EHCT, and the benefits of the overall LIP were evident and similar regardless of treatment allocation. Although participant withdrawal was high in our study, elimination was random and homogeneous across all groups.
Conclusions
The current study demonstrates the benefits of an LIP and an additional effect of CLA on insulin sensitivity measured by the gold-standard EHCT. Lifestyle intervention, independent of treatment allocation, improved the main outcome variables, specifically weight, height, BMI, waist circumference, surrogate indexes of insulin resistance, and fitness condition, in all of the groups. IRS2 upregulation was evident in CLA-treated patients; this mechanism might be involved in insulin-sensitizing effects on skeletal muscle.
Finally, the incidence of hypertriglyceridemia and hypo-α-lipoproteinemia in CLA-treated patients might be a concern and may be related to the types of CLA isomers used in this study. Further research to evaluate the benefits of different mixtures of CLA isomers may be warranted.
The Timing of Water and Beverage Consumption During the Day Among Children and Adults in the United States: Analyses of NHANES 2011–2016 Data
Dietary Guidelines for Americans 2015–20 recommend replacing sugar sweetened beverages (SSBs) with plain water in order to promote adequate hydration while reducing added sugar intake. This study explored how water intakes from water, beverages, and foods are distributed across the day. The dietary intake data for 7453 children (4–18 y) and 15,263 adults (>19 y) came from the National Health and Nutrition Examination Survey (NHANES 2011–2016). Water was categorized as tap or bottled. Beverages were assigned to 15 categories. Water intakes (in mL/d) from water, beverages, and food moisture showed significant differences by age group, meal occasion, and time of day. Plain water was consumed in the morning, mostly in the course of a morning snack and between 06:00 and 12:00. Milk and juices were consumed at breakfast whereas SSBs were mostly consumed at lunch, dinner, and in the afternoon. Children consumed milk and juices, mostly in the morning. Adults consumed coffee and tea in the morning, SSBs in the afternoon, and alcohol in the evening. Relatively little drinking water was consumed with lunch or after 21:00. Dietary strategies to replace caloric beverages with plain water need to build on existing drinking habits by age group and meal type.
Introduction
Dietary Guidelines for Americans 2015-2020 have recommended replacing sugar-sweetened beverages (SSBs) with plain drinking water [1]. The 2006 proposed guidance system for beverage consumption in the US also recommended choosing water over other beverages [2]. Analyses of 24 h dietary intakes from the most recent National Health and Nutrition Examination Survey (NHANES 2011-2016) suggest that these recommendations may have been effective. In recent years, plain drinking water, bottled and tap, has been replacing SSBs in the US diet [3]. The main sources of drinking water in the US have been tap water at home (288 mL/d), tap water away from home (301 mL/d), and bottled water from supermarkets and grocery stores (339 mL/d) [3]. Most SSBs have also come from stores. Stores contributed far more SSBs to the US diet than fast food restaurants, full service restaurants, and schools combined [4,5].
The 2015 Dietary Guidelines for Americans recommended a shift to reduce added sugar consumption to less than 10 percent of calories per day [1]. Among the suggested strategies were drinking SSBs less often, reducing SSB volume, or replacing SSBs with plain water on specific eating or drinking occasions [1]. Successful implementation of those strategies may require a better understanding of water and SSB consumption patterns during the day. The timing of and the frequency of drinking bouts and the amounts of fluids consumed can vary across population subgroups. Replacing SSBs with drinking water can also be challenging if the established SSB and water consumption patterns differ by age, race/ethnicity, or socioeconomic status (SES).
For example, past analyses of NHANES data have shown a significant effect of age. Teenagers and young adults consumed the most fruit juices, SSBs, and water. Adults and older adults consumed far fewer SSBs but drank more coffee, tea, and alcohol [3]. Education and income also played a role. In past studies, lower-income groups consumed more regular soda, whereas higher-income groups tended to drink more diet soda [4,5]. Similarly, consumption of whole milk was associated with lower SES; higher-income groups consumed more skim and reduced-fat milk [4,5].
A socio-economic gradient was recently observed for the consumption of tap water [3]. Analyses of NHANES 2011-2016 data showed, for the first time, that most tap water was consumed by groups of higher education and incomes [3]. This may be the result of powerful new marketing campaigns that hope to change the way that Americans think about water, bottled and tap [6]. The newly observed social gradient may also be a direct result of the "Flint effect" and the growing distrust of municipal water systems in low-income areas and among communities of color [7,8].
Aligning daily beverage choices with healthy eating patterns is a key component of many dietary intervention programs [1, 9,10]. However, such dietary strategies may need to build on existing beverage consumption patterns and the timing of water and beverage consumption during the day. Here, the available data are limited. Only a few studies on children in the UK and in France have examined water and beverage consumption patterns by meal and time of day [11,12]. Earlier US based studies have examined sourcing locations but not by meal type or time of day [4,5].
The timing of beverage consumption in the course of the day may have additional implications for adequate hydration. There is an emerging mythology about the correct time to drink water during the day. One strategy is to drink water 30 minutes before a meal, during a meal, and after a meal, but no more [13]. Another is to drink water early in the morning, soon after waking up [14]. Additional recommendations are to drink water before, during, and after a workout, before a bath, and just before going to bed at night. Drinking water at the correct time is alleged to help prevent stomach pain, irritable bowel syndrome, fatigue, overeating, high blood pressure, and even heart attack and stroke [15]. However, evidence in support of those strategies is limited.
One recent suggestion was that mild dehydration may occur in a transient manner when water and fluids are not consumed, either because of poor access to water or beverages or because of poor drinking and eating habits [16]. The present study explored daily fluctuations in water intakes from water, beverages, and foods in a large and nationally representative sample of children and adults in the US.
Dietary Intake Databases
Consumption data for drinking water, beverages, and foods came from 3 cycles of the nationally representative National Health and Nutrition Examination Surveys (NHANES), corresponding to years 2011-2012, 2013-2014, and 2015-2016 [17]. The three NHANES cycles provided a nationally representative sample of 7453 children (aged 4-18 y) and 15,263 adults (aged ≥19 y).
The NHANES 24-hour recall uses a multi-pass method, conducted by a trained interviewer using a computerized interface. Respondents report the types and amounts of all food and beverages consumed in the preceding 24 hours, from midnight to midnight [18,19]. Respondents first identify a quick list of foods and beverages, reporting both meal occasion and time of day. A more detailed cycle then records the amounts consumed, followed by a final probe for any often-forgotten foods. Day 1 interviews are conducted by trained dietary interviewers in a mobile examination center. Day 2 interviews are conducted by telephone some days later [19]. For children 4-5 y, dietary recall is completed entirely by a proxy respondent (i.e., a parent or guardian with knowledge of the child's diet) [19]. Children 6-11 y are primary respondents, but a proxy respondent is present and able to assist. Children 12-19 y are primary respondents but can be assisted by an adult who has knowledge of their diet [19]. We used a combination of the 1-day value and the 2-day mean to make use of all available dietary data. This method included all NHANES participants, even those without a second recall.
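A minimal sketch of the day-1/day-2 combination rule described above is given below; the variable names and toy records are illustrative and are not NHANES code.

```python
# Hedged sketch: combine NHANES recall days per participant.
# Use the mean of day 1 and day 2 when both recalls exist,
# otherwise fall back to the day-1 value, so no participant is dropped.

def usual_intake(day1, day2=None):
    """Return the 2-day mean if day 2 is available, else the day-1 value."""
    return day1 if day2 is None else (day1 + day2) / 2

# Illustrative records of total water intake (mL/d):
participants = [
    {"id": 1, "day1": 2400, "day2": 2800},
    {"id": 2, "day1": 1900, "day2": None},   # no second recall
]
for p in participants:
    print(p["id"], usual_intake(p["day1"], p["day2"]))
```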
Water and Beverage Categories
Plain drinking water was split into tap and bottled. Beverages were classified into 15 categories: milk and milk beverages, milk substitutes (soy milk), citrus juices, non-citrus juices, diet soda, regular soda, ready-to-drink tea, ready-to-drink (RTD) coffee, fruit drinks, sports drinks, energy drinks, hot tea/coffee, alcoholic beverages, flavored, carbonated or enhanced water, and supplemental beverages. The present analyses of water intakes from beverages were for beverages only; for example, milk consumed with cereal (i.e., not as a beverage) was counted in the food category. The USDA Food and Nutrient Database for Dietary Studies (FNDDS), used to establish energy and nutrient content of individual diets, has been revised in parallel to each NHANES cycle [20].
The NHANES 24-hour recall for each participant provides information on the amount in grams of each food and beverage consumed. The present results were for mL of water content from selected beverages, and not for the volume of the beverages themselves (which may not be 100% water). Moisture from foods was calculated as well.
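To make the distinction between beverage volume and water content concrete, here is a hedged sketch of how grams consumed could be converted to mL of water using a moisture fraction; the moisture values shown are illustrative placeholders, not FNDDS entries.

```python
# Hedged sketch: water content (mL) from grams consumed and a moisture
# fraction, approximating 1 g of water as 1 mL. Moisture fractions below
# are illustrative placeholders, not actual FNDDS values.

MOISTURE_FRACTION = {
    "tap water": 1.00,
    "regular soda": 0.89,   # placeholder
    "whole milk": 0.88,     # placeholder
}

def water_ml(item, grams):
    return grams * MOISTURE_FRACTION[item]

print(water_ml("regular soda", 355))  # a 355-g serving contributes ~316 mL of water
```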
Data Availability and Ethical Approval
The necessary IRB approval for NHANES was obtained by the National Center for Health Statistics (NCHS) [21]. Adult participants provided written informed consent; parental/guardian written informed consent was obtained for children, and children/adolescents ≥12 y provided additional written consent. All NHANES data are publicly available on the NCHS and USDA websites [17]. Per University of Washington (UW) policy, these are public data that do not involve "human subjects," and their use requires neither IRB review nor an exempt determination, nor any involvement of the Human Subjects Division or the UW Institutional Review Board.
Statistical Analyses
The survey-weighted mean intakes of total water were evaluated overall and by age group, sex, race/ethnicity, and family income-to-poverty ratio. All analyses accounted for the complex survey design of NHANES and reflected the dietary behaviors of the US population from 2011 to 2016. The consumption of water and beverages was evaluated for the entire population and for population sub-groups. Survey-weighted means and corresponding standard errors were obtained. All analyses were conducted using SAS software, version 9.4 (SAS Institute Inc., Cary, NC, USA), with the SURVEYREG, SURVEYMEANS, and SURVEYFREQ procedures.
Total Water Intakes from Water and Beverages
Table 1 shows total water intakes from water, beverages, and foods in mL/d by sex, eating occasion, and time of day. Total water intake was 2718 mL/d, of which 2100 mL/d (77%) came from water and beverages and 618 mL/d (23%) came from food moisture. Drinking water provided 1066 mL/d, and caloric and non-caloric beverages provided 1034 mL/d of water. Most drinking water came from the tap (tap: 661 mL/d; bottled: 404 mL/d). The dietary sources of water were beverages (38%), tap water (24%), bottled water (14%), and food moisture (23%). Men consumed more total water and more beverages than did women; there was no sex effect for water consumption. Non-Hispanic Whites consumed the most water, the most beverages, and the most total water; the lowest water consumers were Non-Hispanic Blacks. Mexican Americans drank the most bottled water. Water and beverage consumption also increased with the income-to-poverty ratio (IPR). Income effects were observed for water and beverages; the effect for tap water was particularly strong (496 vs. 821 mL/d).
Figure 1A (top panel) provides a visual representation of these consumption patterns. Dinner and lunch, followed by breakfast, were the peak occasions for water intake from food moisture. Beverages were consumed throughout the day, with peaks at dinner, breakfast, lunch, and the morning snack. Most water from tap and bottled water was consumed in the course of the morning snack; the least was consumed at breakfast and lunch. Figure 1B (bottom panel) shows the corresponding water consumption patterns by time of day. Tap and bottled water were mostly consumed between 06:00 and 12:00, and water consumption dropped by half in the afternoon and evening. Beverages were mostly consumed between 06:00 and 12:00 and again between 18:00 and 21:00. Peaks for water from food moisture were at 18:00 to 21:00 (dinner) and 12:00 to 15:00 (lunch). Breakfast and snacks did not provide substantial food moisture.
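As a rough illustration of the survey-weighted estimation described in the statistical analysis section (the actual analyses used SAS SURVEYMEANS), a weighted mean could be sketched as follows; the weights and intake values are invented for the example, and this toy version does not reproduce NHANES variance estimation, which also uses strata and primary sampling units.

```python
# Hedged sketch: survey-weighted mean of total water intake (mL/d).
# Real NHANES analyses (e.g., SAS SURVEYMEANS) also use strata and PSUs
# for variance estimation; this toy version only applies the weights.
import numpy as np

def weighted_mean(values, weights):
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    return float(np.sum(values * weights) / np.sum(weights))

intakes = [2400, 3100, 1800, 2700, 2200]        # illustrative mL/d values
weights = [12000, 35000, 8000, 20000, 15000]    # illustrative sample weights
print(round(weighted_mean(intakes, weights)))
```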
Water and Beverage Consumption by Time of Day.
Figure 2A (top panel) shows water intakes from water and beverages for all ages (>4 y) by meal occasion. Most water, tap and bottled, was consumed during the eating occasion identified as the morning snack. Smaller amounts were consumed during the afternoon snack. Most SSBs were consumed with lunch and dinner; consumption of SSBs during breakfast and the morning snack was low. Relatively little water was consumed at lunch. Figure 2B (bottom panel) shows water intakes from water and beverages for all ages (>4 y) by time of day. Tap and bottled water were mostly consumed between 06:00 and 12:00. Consistent with expectations, brewed tea and coffee were consumed between 06:00 and 12:00, times corresponding to breakfast and the morning snack. Regular and diet sodas were mostly consumed between 12:00 and 21:00. Alcohol was consumed in the afternoon and in the evening. Coffee and tea were less likely to be consumed at dinner and in the evening than in the morning. Beverage consumption dropped sharply after 21:00.
Figures 3-9 show the time of consumption of water from water and beverages, separately for each age group. The data showed that different age groups have very specific patterns of water and beverage consumption depending on the time of day. The dataset is in Table S1.
Discussion
The present results are among the first to document the timing of water and beverage intakes around the clock in a large and representative NHANES 2011-2016 sample of US children and adults. The present results have important implications for the promotion of healthy beverage choices, notably the ongoing attempts to replace SSBs with plain drinking water.
There is very little science on population water consumption patterns during the day. One recent study, conducted in Greece, explored the fluctuation in water intakes and hydration indices during the day, looking for signs of transient dehydration in a sample of healthy adults [16]. While water intakes did go up and down during the day, as they did here, the term fluctuation generally refers to an unpredictable and irregular rising and falling. As the present results show, the timing of water and beverage consumption followed predictable patterns. SSBs were consumed with lunch and dinner and in the afternoon but rarely at breakfast or the mid-morning snack. Water was consumed largely in the morning and rarely at night. Adults drank coffee in the morning and alcohol in the evening [22]. Beverage choices and consumption patterns varied with age. Whereas children aged 4-18 y consumed water between 18:00 and 21:00, adults were more likely to consume less water and more alcohol in the same time slot. Furthermore, while children consumed milk in the morning, adults tended to drink tea or coffee during this time.
The present data add to past work on the impact of caloric beverages consumed separately or with a meal on total energy intakes. In experimental studies, when caloric beverages were presented shortly before or with a pizza meal, no energy compensation was observed. Caloric beverages consumed freely at meal times added calories to the meal [23,24]. By contrast, other studies showed that the presentation of a stand-alone liquid preload reduced energy intakes at the test meal; however, the effects were sometimes inconsistent and full energy compensation was rarely observed [25].
Excessive SSB consumption is thought to contribute to childhood obesity [26]. Dietary Guidelines for Americans have stressed the importance of healthier beverage choices to be made throughout the day [1]. Ensuring access to safe, free drinking water in schools is an important CDC initiative that is intended to increase water consumption, help maintain hydration and reduce energy intake when substituted for SSBs [27]. School-based strategies to replace SSBs with plain drinking water have ranged from limiting sales in cafeterias, vending machines, and competitive food outlets to featuring teachers as competitive role models [9].
The present analyses support the CDC initiative but for a different reason. The CDC report notes that more than 95% of children are enrolled in schools and typically spend 6 h at school each day. We note that those hours fall mostly in the morning, which is the peak time for water consumption, and that there seems to be less competition from other beverages during the morning snack. Another productive strategy would be to promote water consumption with the school lunch meal.
We were surprised to see that plain water did not figure prominently in the afternoon snack. Though the total amount of water consumed was comparable to that during the morning snack, it came mostly from beverages such as SSBs, tea, and (for adults) alcohol rather than from plain water. Promoting water consumption by children in the afternoon may be a potential intervention strategy.
Promoting more water consumption in the morning might, by contrast, be a viable strategy for adults, and there are opportunities to increase water consumption at lunch. Water is, however, unlikely to displace morning coffee, especially among older adults, or the afternoon tea [22].
The social gradient in water consumption has been addressed before [28]. Plain drinking water, bottled and tap, accounts for 38% of daily water intake from all sources including food moisture. The intake is slightly lower for lower income and minority populations. Hispanic Americans drink more bottled water and tap water but other minorities do not [3]. One issue is public trust in the municipal water system; the provision of safe water in schools and community settings is critical to the adoption of healthier beverage choices [27,28].
This study had limitations. First, the NHANES 2011-2016 data are based on dietary self-reports, still the default practice in large population-based studies. Second, the within-day variations by meal and by time interval were close but not identical. For example, water consumption was high between 06:00 and 12:00, but it was associated not with breakfast but with a morning snack; people who reported consuming a morning snack did not necessarily have breakfast. Finally, the NHANES 2011-2016 data are cross-sectional and do not allow causal inferences to be made. The potential impact of water consumption patterns on health outcomes of interest cannot be determined.
Conclusions
Present analyses of diurnal fluctuations in water intakes can inform dietary strategies for maintaining adequate hydration while reducing the consumption of added sugars. The most effective strategies for behavioral change are those that build on existing habits and consumption patterns [1].
Cosmological Breaking of Supersymmetry?
It is conjectured that M-theory in asymptotically flat spacetime must be supersymmetric, and that the observed SUSY breaking in the low energy world must be attributed to the existence of a nonzero cosmological constant. This would be consistent with experiment, if the {\it critical exponent} $\alpha$ in the relation $M_{SUSY} \sim M_P (\Lambda /M_P^4)^{\alpha}$ took on the value 1/8, rather than its classical value 1/4. We attribute this large renormalization to the effect of large virtual black holes via the UV/IR correspondence.
Introduction
This paper is an expanded version of a short talk I gave at Lenny Susskind's 60th birthday celebration at Stanford University. It is dedicated to Lenny, who taught me how to think about physics, and whose own recent ideas have profoundly influenced those I am reporting on here. The central message of the paper can be summarized in a few sentences: The Bekenstein-Hawking Entropy of (Asymptotically) DeSitter (AsDS) spaces represents the logarithm of the total number of quantum states necessary to describe such a universe. This implies that the cosmological constant is an input to the theory, rather than a quantity to be calculated. The structure of an AsDS universe automatically breaks supersymmetry (SUSY). From this point of view, the "cosmological constant problem" is the problem of explaining why the SUSY breaking scale is so much larger than that associated in classical supergravity (SUGRA) with the observed value of (bound on?) the cosmological constant. I suggest that large renormalizations of the classical formula are to be expected on the basis of the UV/IR correspondence in M theory. These may be viewed as contributions from virtual black holes. The phenomenologically correct formula $M_{SUSY} \sim (\Lambda M_P^4)^{1/8}$ may be derivable from such considerations.
The implication of these ideas is that SUSY breaking vanishes in the flat space limit, which is consistent with the fact that we have not succeeded in finding a string vacuum with broken SUSY and asymptotically flat spacetime. We will begin the paper with a brief review of the evidence for this.
We then turn to a defense of the contention that an asymptotically DeSitter (AsDS) universe can be described by a finite number of states, given by the Bekenstein-Hawking formula. We discuss the difference between such an AsDS universe and the temporary DeSitter phase of an inflationary universe. We caution the reader that once he accepts these arguments, he will be forced to conclude that the cosmological constant is an input, or boundary condition, rather than a parameter to be calculated. The conventional cosmological constant problem can then be rephrased as: why isn't the scale of SUSY breaking related to the cosmological constant by the standard classical SUGRA formula, which (without fine tuning) predicts $M_{SUSY} \sim \Lambda^{1/4}$.
We argue that this formula may receive large renormalizations. Indeed, standard field theory calculations predict logarithmically divergent renormalizations (at finite orders in the loop expansion) of the masses of particles in softly broken SUSY theories. Conventionally, these divergences are absorbed into the parameters in the low energy effective field theory. Wilsonian renormalization group arguments suggest that these divergent terms have no dependence on the cosmological constant if this parameter is much smaller than the cutoff scale. We argue that the UV/IR correspondence of M theory suggests a possible source for such a dependence. The highest energy states of the theory are huge black holes with a size of order the DS horizon size. Their spectrum is very sensitive to the value of the cosmological constant. We speculate that it may change the value of the "critical exponent" relating the SUSY breaking scale to the cosmological constant, to $M_{SUSY} \sim (\Lambda M_P^4)^{1/8}$. This formula fits the observational data.
Vacuum selection
One of the unfortunate features of M-theory as a theory of the real world is its plethora of unphysical, exactly SUSic vacua. On the one hand, this aspect of the theory is precisely what has enabled us to get so much mathematical control over its properties. On the other hand, even if one succeeded in finding a SUSY breaking vacuum which precisely describes the real world (we should be so lucky!) one would still have the uncomfortable task of explaining why the universe does not resemble one of the beautifully SUSic vacua.
From this point of view, it is interesting and exciting that it appears very difficult to break SUSY in a way which leaves us with an approximately flat spacetime. Many candidate SUSY violating vacuum states of M-theory have tachyonic instabilities. Almost all 1 classical candidate vacua generate potentials for their moduli in the quantum theory. The effect of these potentials is either to drive the system into a region of moduli space where we are unable to analyze it (except to conclude that, since it must have a large negative vacuum energy, it cannot describe an asymptotically flat spacetime), or to drive it deep into the weak coupling regime where gravity becomes a free field theory. In neither case do we get an acceptable description of the real world. The weakly coupled system has massless moduli and low energy effective parameters which vary too rapidly with time.
The above analysis is based on weakly coupled string theory and semiclassical SUGRA. Similar conclusions follow from an analysis of SUSY breaking in Matrix Theory [2] [3] . In this nonperturbative formulation of M-theory in a variety of asymptotically flat, SUSY, spacetimes, asymptotic spacetime (more precisely the configuration space of multiparticle asymptotic states propagating on the spacetime) arises as the moduli space of a SUSY quantum system. Breaking SUSY collapses spacetime.
By analogy with the AdS/CFT correspondence for asymptotically AdS SUSY vacua, one might try to find a nonsupersymmetric version of this correspondence (with an AdS space with curvature much less than the Planck scale) by searching for conformal field theories with certain properties. In particular, they should have a large gap in dimensions between the stress tensor and all but a small number of other operators in the theory. The stress tensor is the primary field corresponding to gravitons in AdS, while other operators correspond to states with mass of order the string or Planck scale. The gap in dimensions indicates a large ratio between the AdS curvature scale and the Planck scale. In SUSY examples this gap is "guaranteed" by SUSY nonrenormalization theorems combined with a hypothetical scaling law for the dimensions of nonchiral operators in the large $g_s N$ limit. The dimension of the stress tensor is always protected, but in the absence of SUSY we do not expect to have lines of fixed points and there is no obvious parameter which could tune the gap to be asymptotically large.
Another feature of a large AdS space which would have to be reproduced by our hypothetical conformal field theory, would be the existence of multigraviton excitations.
Even in supersymmetric examples this property is not well understood. That is, it is understood only in the regime of the $AdS_5 \times S^5$ moduli space where the 't Hooft expansion is applicable. And in this regime, multiparticle excitations exist even when the curvature of AdS space is large. There should be a purely field theoretical argument which would prove the existence of multiparticle excitations in regimes where the AdS curvature is large, independently of the dimension of the space or the existence of a weakly coupled string regime. Only when we understand this could we hope to check whether a SUSY violating conformal field theory really represented a large AdS space.
Finally, we would have to show that the theory contained metastable excitations corresponding to black holes with size much bigger than the Planck scale but much smaller than the AdS radius. So far, there is no evidence for nonsupersymmetric CFTs with these properties. Indeed, we have little understanding of either cluster decomposition and multiparticle structure, or metastable flat space black holes, in the supersymmetric versions of AdS/CFT.
In summary, all the extant evidence indicates the absence of asymptotically flat M-theory vacua with broken SUSY. There are no solid examples, though the models of [1] may yet turn out to fulfill their design criteria.
The entropy of DeSitter space
The results reviewed in the preceding section suggest (but certainly not very strongly) that SUSY breaking in asymptotically flat spacetime may be impossible in M-theory. There is certainly a well known relation between the breaking of SUSY and the Ricci scalar of spacetime. Namely a generic nonsupersymmetric quantum field theory generates a cosmological constant of order at least the SUSY breaking scale. Conversely, a positive cosmological constant is incompatible with SUSY.
The well known problem with this relation is the relative scale of the two effects. The cosmological constant is bounded from above by a number of order eighty percent of the critical density, while the scale of SUSY breaking is bounded from below by several hundred GeV. Without fine tuning of parameters, and using the methods of effective field theory, this leads to a cosmological constant about 60 orders of magnitude larger than the observational bound. We normally think about this problem by doing quantum field theory in flat spacetime and then calculating the corrections to the spacetime background. SUSY breaking "causes" a large cosmological constant which then makes the flat spacetime a bad approximation. I would like to suggest that we have been thinking about this problem the wrong way around. The flat space computation counts the zero point energy of the degrees of freedom in spacetime. We have been learning that the number and properties of the degrees of freedom in M-theory depends crucially on our specification of the boundary conditions on spacetime [4]. Asymptotically Anti-DeSitter spaces of various dimensions have very different kinds of high energy degrees of freedom and further they all differ drastically from asymptotically flat spaces. Remarkably, the semiclassical Bekenstein-Hawking formula consistently gives the right answer for the extreme high energy entropy. This is an example of the UV/IR connection. High energy states are associated with large, low curvature (outside the horizon) geometries, whose gross properties are encoded in general relativity.
For DS space, the Bekenstein-Hawking formula predicts a finite entropy. More precisely, any observer in an AsDS space only sees a finite portion of the universe, bounded by a cosmological event horizon. One quarter of the area of this event horizon (in Planck units) is the finite Bekenstein-Hawking entropy. I would like to interpret this number as the logarithm of the total number of quantum states necessary to describe the universe as seen by this observer.
There are three arguments for this. The first is simply an analogy with black hole physics (according to the holographic principle): event horizons may be viewed as holographic screens on which all information about "what is going on on the other side of the horizon" is encoded for the benefit of observers "on this side". All of the arguments in favor of this holographic view of black hole horizons apply equally well to DS space.
The second argument is by far the most convincing. Imagine an observer inside DS space trying to contradict our contention by collecting as much entropy as she can. As long as she works on scales smaller than the DS radius of curvature, she can do this most efficiently by forming flat space black holes, whose entropy is bounded by their area. The black hole size is bounded by something of order the horizon size so there is no way to violate our bound. Put another way, a system with an entropy larger than the DS horizon size would simply not evolve into an AsDS spacetime with the assumed value of the cosmological constant.
The third argument is more technical. While few people believe any longer that quantum gravity is described by an Euclidean functional integral over metrics, this paradigm does seem to provide helpful and correct hints about the quantum physics of black holes and AdS spaces [5]. Euclidean DS space is a sphere, a compact geometry. The rules for Euclidean quantum gravity (c.f. perturbative world sheet physics in string theory) tell us that all diffeomorphisms, including the DS group of isometries are gauge transformations and should be integrated over. All physical information is invariant. This is in marked contrast to asymptotically flat or AdS universes, where the isometries act nontrivially on the nonfluctuating boundary geometry. In these cases, the isometries are large gauge transformations and physical states need not be invariant under them. Now consider quantum field theory in DS spacetime, defined by analytic continuation of Euclidean Green's functions on the sphere. Long ago, constructive field theorists showed [6] for a large class of superenormalizable theories, that these Green's functions have a Hilbert space interpretation in terms of the Hilbert space of an observer living in the static patch of Lorentzian DS space. The state defined by these Green's functions is the thermal state of the static patch Hamiltonian, at the Hawking temperature. These rigorous results are the generalization of the observations of [7] for free field theory and the perturbation expansion around it. To obtain the field theory in the full DS space one uses DS isometries to copy the Green's functions from one static patch to another. According to the argument of the previous paragraph, this procedure just produces gauge copies of the original system. Thus, from this point of view it would be wrong to introduce independent physical degrees of freedom for each static patch.
It is important to examine several situations which appear to contradict the idea that AsDS spaces have a finite number of degrees of freedom. One such argument is based on considering a spacetime which is DS in the remote past. At early times, the volume of space is very large, and one can easily impose initial conditions which have a larger entropy than the DS maximum. However, most of these initial conditions will not lead to an AsDS spacetime (with the same value of the cosmological constant). Einstein's equations (with appropriate conditions on the stress tensor) will not allow a violation of the Bekenstein-Fischler-Susskind-Bousso (BFSB) bound [8].
The "approximately DeSitter" spacetimes of inflationary cosmology are confusing only so long as we forget the nature of the holographic principle. There is no cosmological event horizon in these spacetimes (unless things settle down into a DS phase much later in the history of the universe) , so the horizon size of the inflationary DS phase is at best a temporary measure of the maximal entropy in the experience of local observers. When the inflationary phase ends, the horizons of these observers expand. The proper holographic screen on which all the information in these universes can be encoded depends on their evolution after the end of inflation.
By taking a limit in which the number of e-folds of inflation becomes infinite, we can generate a paradoxical situation. If we admit the possibility of independent information in different static patches of DS space (as we have for any finite number of e-foldings) then we obtain AsDS spacetimes with entropy larger than the horizon area. These are essentially the time reverse of the spacetimes we encountered two paragraphs ago. Of course, if we extrapolate these expanding geometries back into the past, we inevitably encounter a spacelike singularity. Thus, the proper description of these spacetimes is a Big Bang singularity which evolve to DS space in the future. Note that no local observer in such a universe will ever encounter more entropy than is allowed by the bound. The confusion lies in the fact that there are many ways of cutting the space up into regions observed by independent local observers. I believe that the confusion engendered by this example is connected to initial conditions at the singularity, and propose that a proper quantum treatment of cosmology will never lead to spacetimes of this type. In particular, I suspect that general initial conditions at the singularity for a number of degrees of freedom larger than the DS entropy will not evolve into the postulated DS space. The particular solutions described above will involve extreme fine tuning of initial conditions at the singularity, and might not exist at all in a quantum mechanical treatment.
The claim that the cosmological constant determines the number of degrees of freedom in an AsDS universe is extremely important if true. Traditionally, we think of the cosmological constant as an effective field theory parameter with no direct connection to the microscopic physics of the world. It is to be calculated in terms of more fundamental quantities. If however it is a direct count of the number of degrees of freedom, then its value is part of the fundamental set up of the quantum theory. The dimension of Hilbert space (if it is finite dimensional) or the number of fundamental canonical degrees of freedom (if the Hilbert space is infinite dimensional) is part of the definition of the theory. We will see below that the possibility of such a direct connection between an apparent low energy parameter and the fundamental dynamics is an expression of the UV/IR relation of M-theory.
We must not attempt to calculate the cosmological constant but rather to postulate its value and derive other observable quantities from it. From this point of view the "cosmological constant problem" is turned on its head. It is not "why is the cosmological constant so small", but "given the value of the cosmological constant, why is SUSY breaking so large". Indeed, although I cannot derive this logically from what I have already said, in this context it seems inevitable that one should attribute all breaking of SUSY to the fact that we live in an AsDS universe. This is consistent with the impossibility of defining SUSY in DS space, and also with our failure so far to find SUSY violating asymptotically flat states of M-theory, but it flies in the face of all previous wisdom about SUSY breaking.
The classical formula relating SUSY breaking to the cosmological constant is (without fine tuning) $M_{SUSY} \sim M_P (\Lambda/M_P^4)^{1/4}$. A formula that fits the data is $M_{SUSY} \sim M_P (\Lambda/M_P^4)^{\alpha}$ with $\alpha = 1/8$. I would propose that we describe these formulae with the following slogan: The $\Lambda/M_P^4 \rightarrow 0$ limit of M-theory is a critical limit in which the number of degrees of freedom of the system goes to infinity. In this limit, the SUSY breaking scale goes to zero, and we are trying to calculate the critical exponent for its vanishing. The classical mean field value is 1/4. Experiment indicates that the correct value is 1/8.
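To see why $\alpha = 1/8$ rather than $1/4$ is the phenomenologically interesting value, a quick numerical check can be sketched; it assumes the observed ratio $\Lambda/M_P^4$ is of order $10^{-122}$ and takes the reduced Planck mass to be about $2.4 \times 10^{18}$ GeV, both round numbers chosen only for illustration.

```python
# Hedged sketch: M_SUSY ~ M_P * (Lambda / M_P^4)**alpha for two exponents.
# Lambda/M_P^4 ~ 1e-122 and M_P ~ 2.4e18 GeV are rough illustrative numbers.

M_P_GEV = 2.4e18          # reduced Planck mass, GeV (approximate)
LAMBDA_RATIO = 1e-122     # Lambda / M_P^4, rough observed order of magnitude

for alpha in (0.25, 0.125):
    m_susy = M_P_GEV * LAMBDA_RATIO ** alpha
    print(f"alpha = {alpha}: M_SUSY ~ {m_susy:.1e} GeV")

# alpha = 1/4 gives ~1e-12 GeV (a milli-eV scale), far below experimental bounds;
# alpha = 1/8 lands near the TeV scale, consistent with the text.
```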
How can this be?
If the scale of SUSY breaking is smaller than the Planck scale, then low energy physics is described by a locally SUSY effective Lagrangian. The breaking of SUSY in this Lagrangian is spontaneous. If the relevant SUSY is N = 1 , d = 4, then the Lagrangian can have DeSitter solutions with spontaneously broken SUSY. The cosmological constant and the scale of SUSY breaking are independent parameters in this Lagrangian.
The scalar potential has the form V = e^K ( |F|^2 − 3|W|^2 ), where |F|^2 = g^{ij*} F_i F*_{j*} is built from F_i = ∂_i W + (∂_i K) W and the inverse Kähler metric g^{ij*}. Everything has been expressed in Planck units. We will be working near the flat space limit, where the cosmological constant is very small. In that limit, the F_i terms are the order parameters for SUSY breaking in the sense that mass splittings in supermultiplets are proportional to the values of the F terms at the minimum of the potential. Note that both supermultiplet and mass are approximate concepts if the cosmological constant is nonzero. Mathematically, there are no global symmetry generators with which to define these words precisely. Physically, particles cannot be separated from each other by more than a horizon size, and we cannot define scattering amplitudes.
By choosing parameters in the superpotential and Kähler potential, we can arrange a minimum with nonvanishing F terms and arbitrary value for the cosmological constant. However, this is generally considered to be fine tuning, according to the following Wilsonian argument. When we calculate radiative corrections to the effective Lagrangian below the SUSY breaking scale, we find a contribution to the renormalized cosmological constant of order M_SUSY^4, where M_SUSY is the largest splitting in supermultiplets, and is also chosen to be the cutoff in the calculation. This can be cancelled by adroit choice of the parameters in the Lagrangian, but the latter are thought to represent the effect of integrating out fluctuations at very short spacetime scales. In local field theory, degrees of freedom can be classified by their spacetime extent in an underlying classical metric. Degrees of freedom at short scales see long wavelength degrees of freedom as essentially constant parameters. According to this philosophy, the calculation of the effects of short wavelength degrees of freedom is essentially independent of the value of the cosmological constant, as long as the latter is much smaller than the inverse fourth power of the wavelength. Thus, one argues, it is unnatural to imagine a cancellation of the bare cosmological constant against the low energy contribution. Furthermore, in a field theory with spontaneously broken SUSY, in flat spacetime, the very high energy contributions to Λ cancel. Similar exact cancellations in string theory with exact SUSY suggest that this is not just a fluke of the field theoretic approximation.
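For orientation, the size of the cancellation this argument worries about can be made explicit (standard numbers, my own illustration rather than figures taken from the text):

$$
M_{SUSY}^4 \sim (10^{3}\ \mathrm{GeV})^4 = 10^{12}\ \mathrm{GeV}^4,
\qquad
\Lambda_{\mathrm{obs}} \sim (10^{-3}\ \mathrm{eV})^4 = 10^{-48}\ \mathrm{GeV}^4,
$$

so a TeV-scale splitting would have to cancel against the bare term to roughly sixty decimal places if the Wilsonian logic were taken at face value.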
There are obvious problems with applying this argument to M-theory. The spacetime metric, which is used to characterize what constitutes long and short wavelength fluctuations, is, in M-theory, an approximate description of fluctuating quantum variables. More importantly, the association of large mass scales with short distances is incorrect in M-theory. This correspondence is valid down to the string scale in weakly coupled string theory. However, the high mass states of string theory are predominantly of large spacetime extent. More generally, above the Planck scale, the high mass excitations are black holes, whose Schwarzschild radius grows with their mass. It is incorrect to say that the dynamics of these objects is unaffected by the cosmological constant. Indeed, black holes with radius larger than the cosmological horizon do not exist in DS space. Thus it is no longer implausible that the low and high energy contributions to Λ cancel each other.
Our identification of the cosmological constant as the (inverse logarithm of) the number of quantum states of an AsDS universe suggests a slightly different point of view. The value of the cosmological constant is now a fundamental parameter (actually a boundary condition -see below) and we should set parameters in our effective Lagrangian to match it. In the low energy effective Lagrangian, this requires us to find a vacuum with spontaneously broken SUSY, but the natural scale of SUSY breaking is set by the cosmological constant. Field theoretic renormalizations will not upset this relation. There are, in Feynman diagrams, logarithmic renormalizations of mass splittings in supermultiplets, but as long as the field theoretic couplings are small, these are not substantial when the cutoff is of order the Planck mass. Furthermore, they do not depend strongly on the cosmological constant. Now however, consider quantum gravity corrections to the mass splittings, first as loops of gravitons in Feynman diagrams. These contribute to logarithmic divergences as well, but there is no longer any small parameter controlling the series in powers of logs. However, there is still no apparent dependence on the cosmological constant. The crucial question now is what cuts off the divergences when we reach the Planck scale. Much has been made of the softness of perturbative string amplitudes at large momentum transfer [9] . Many people have viewed this as the ultimate cutoff promised by a true theory of quantum gravity. But there is plenty of evidence, both internal to the perturbative analysis [10][9] [14] and using D-brane techniques [11] that this is not correct. In [14] it was suggested instead that the ultimate cutoff comes from black hole physics. That is, all high energy high momentum transfer scattering amplitudes, and even the Regge regime, are eventually dominated by black hole production with subsequent decay by Hawking radiation. This is again an invocation of the UV/IR connection. The gross features of the highest energy processes in M-theory are ultimately encoded in General Relativity, because they involve low curvature geometries. We need the microscopic theory to calculate the detailed quantum properties of the states near a black hole horizon, but the level density of the high energy spectrum and many properties of inclusive cross sections can be calculated from semiclassical general relativity.
Thus, I would claim that there is no evidence for suppression of "diagrams" in which virtual black holes of mass much larger than the Planck scale renormalize the splittings in low energy supermultiplets. The size of these contributions must be estimated from the physics of black holes. In such a calculation it is clear that the DS horizon radius will provide a cutoff on black hole contributions. It is entirely possible that a proper calculation involves the detailed microphysics of black hole states. We will explore a more optimistic scenario below.
It is important to realize that there is no claim being made that the theory with Λ → 0 is divergent. We are merely trying to show that various quantities which vanish with the cosmological constant do so more slowly than is indicated by formulae which only take gravity into account classically. What we are claiming is that the theory with vanishing cosmological constant must be supersymmetric. It is reasonable to suppose that the restoration of SUSY will cancel otherwise divergent contributions from virtual black holes.
Proposal for a thermodynamic calculation
Our proposal implies that a full understanding of the relation between the cosmological constant and the scale of SUSY breaking is possible only if we know something about M-theory at very high energies. Rather than giving up and saying that this puts the problem beyond our powers at the present, I would like to suggest that the UV/IR correspondence may be used to get at least a rough estimate of the size of the effect. According to this principle, high energy physics in M-theory is black hole physics, and some aspects of black hole physics are computable in the semiclassical approximation to SUGRA. We may hope that an estimate of the relation between SUSY breaking and Λ may be obtained in the semiclassical approximation.
The first aspect of semiclassical physics in DS space that will be important to us is that the state of the system is a thermal ensemble with respect to the static Hamiltonian of DS space. We consider this relevant, despite our previous remarks that the DS group is a group of gauge transformations. We are contemplating a limit of very small cosmological constant, and trying to describe physics as seen by observers who are unable to discern that space is not asymptotically flat (because they are making observations that refer to low energy , approximately local, physics). The phrase "mass splittings in supermultiplets" refers precisely to properties of the (approximate) SuperPoincare generators defined by such observers. The DS Hamiltonian goes over in the limit to the Poincare Hamiltonian of the asymptotically flat observer. We use it, because our considerations will depend on the curvature of DS space.
Our second assumption is that the parameters in the local effective Lagrangian actually get contributions from "Feynman diagrams with virtual black holes in them". There is not even a semi-rigorous justification for this assumption, and the following hand waving will have to suffice: Consider Feynman diagrams contributing to the masses of some of the particles in the theory. As we allow the momenta in internal loops to grow larger than the Planck scale, we encounter subgraphs which look like super-Planckian scattering amplitudes, amplitudes in which all kinematical invariants are larger than the Planck scale. According to classical general relativity, we expect such collisions to result in black hole production. I claim that the quantum mechanical interpretation of this is that there is no suppression of the probability of producing virtual black holes.
The reader may be disturbed by the feeling that such large energy and momentum transfer processes should be cut off in M-theory. My response is that the black holes themselves provide the cutoff. For example, probability one black hole production followed by Hawking evaporation gives exponentially suppressed inclusive cross sections for finite numbers of particles with energy and momentum transfer much larger than the Planck scale.
Given our two assumptions we expect the SUSY breaking mass terms to be given by a thermodynamic average of the schematic form ∆m ∼ ∫ dM e^{S(M) − βM} ∆m(M). Here S(M) is the black hole entropy and β is the inverse Hawking temperature of DS space. ∆m(M) is the contribution to SUSY breaking from virtual black holes of mass M. We will restrict attention to four dimensions, since this is the only place where low energy SUGRA can have DS solutions. In that case S(M) = 4πM^2 = πR_S^2, while β = 2πR_D. The integral is actually cut off when the Schwarzschild radius R_S = R_D. It is easy to see that the integral is dominated by its upper endpoint (unless ∆m falls extremely rapidly with black hole mass).
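A quick way to see the endpoint domination (my own arithmetic, using only the quantities just defined): the exponent in the average is

$$
S(M) - \beta M = 4\pi M^2 - 2\pi R_D M ,
$$

whose derivative 8πM − 2πR_D is positive for M > R_D/4, so the integrand grows toward the cutoff; unless ∆m(M) decays faster than e^{-4πM^2}, the integral is controlled by the largest allowed black holes, those with R_S ≃ R_D.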
Our claim then is that the SUSY breaking induced by DS space can be approximated by that due to virtual black holes of a size near the upper cutoff for Schwarzschild-DeSitter black holes. I hope to report on an estimate of this effect in the near future.
The fate of observers in an AsDS universe
There is a line in an old country and Western song that goes "DeSitter space is a lonely place . . ." . Indeed, once the cosmological constant takes over the expansion rate, everything that is not gravitationally bound to us soon passes outside our horizon. Worse, after baryons decay, gravitationally bound systems will cease to exist if they have not collapsed into black holes. And when quantum mechanics is taken into account even this ultimate refuge is lost to us, since the black holes decay. Eventually, the universe becomes full of elementary systems, each in its ground state in its own horizon volume (we are for the moment ignoring the Hawking radiation of DeSitter space).
Physics as we know it, which describes local interactions between systems which can communicate with each other, becomes increasingly irrelevant in such a universe, though the time scale for this to happen is enormously long. Thus, the usual apparatus of physics describes an epiphenomenon in an AsDS universe. One of the technical problems related to this observation is how one describes the physical answers that are relevant to us as exact, gauge invariant, mathematical quantities in such a theory.
In asymptotically flat space, the holographic principle tells us that we can calculate the S-matrix. So far we have found no other sensible physical quantities in Asymptotically Flat M-theory. But there is no S-matrix in AsDS spaces. One must really search for more local quantities, but it seems that any such search may have only an approximate nature. For example, one might imagine showing that the low energy effective Lagrangian description had the status of the first term in an asymptotic expansion of something. But what might that something be? If we extrapolate to high enough energy we are always required to ask questions about all of the degrees of freedom and their dependence on the global geometry of AsDS space. There is no exact quantum number that takes the place of energy. If we are willing to take the attitude that at sufficiently high energy we can neglect SUSY breaking, we can use the flat space SUSY vacuum which best approximates our AsDS universe to calculate scattering amplitudes above the Planck scale. But we must recognize that at sufficiently high energies these amplitudes describe processes involving black holes larger than the DS radius. These have nothing to do with anything in the real world, if the universe is AsDS. It is also far from obvious to me that one could find a systematic incorporation of the SUSY violating corrections to these amplitudes into a more exact description of the world. In our view, SUSY violation is a consequence of the AsDS geometry of the universe, and might be incompatible with a description of the world in terms of scattering amplitudes. The phrase "SUSY violating scattering amplitude" might be an oxymoron that made sense only at energies below the Planck scale.
All of this suggests that there is a somewhat more local description of holographic physics than any which exists at present. I presented a preliminary sketch of what such a formalism might look like at the Millennium conference in January. It involves a collection of Hilbert spaces H_i, each of which is supposed to represent those states observable in the causal past of a finite number of points, in a cosmological spacetime which begins at a Big Bang singularity. More precisely, using the Bekenstein-Hawking-Bousso relation between areas and entropy, and a causal structure which is defined by mappings of the algebra of operators in one space into a subalgebra with (in general) nontrivial commutant in another, I proposed to reconstruct a spacetime directly from quantum mechanics. In this formalism, the experience of a more or less localized observer is encoded in a sequence of Hilbert spaces of (exponentially) increasing dimension. Each space in the sequence is mapped into a tensor factor of the one succeeding it. In order to have unitary evolution, the full state in the successor Hilbert space must be determined by partial mappings from many different predecessor states. In general it is not required that the entire process, including an infinite sequence of steps, can be incorporated in a single Hilbert space of finite dimension. One consistent rule which allows this is that the Hilbert spaces in any sequence converge after a finite number of steps to a space of some fixed dimension, the same for every sequence. The inclusion maps become unitary mappings of this space into itself.
I would like to identify such a situation with an AsDS space, in the limit that the number of dimensions of the asymptotic Hilbert space is very large. Appropriately smooth unitary mappings between different sequences would represent the different ways in which the spacetime could be represented as the static patch of a given observer, each of whom perceives all of the things outside her horizon as a thermal gas.
In this view of the universe, the local degrees of freedom whose investigation is the province of experimental physics should be viewed as being "on temporary loan" from the "thermal DeSitter library". As the DeSitter era unfolds, more and more of the observer's degrees of freedom are "returned to the shelf": they get swept outside his horizon, and become part of the thermal background. It is interesting that the total number of borrowed degrees of freedom that we need to describe what we see is, even if we include the entropy in hypothetical black holes in the center of each galaxy, smaller by a factor of 10^30 than the Bekenstein-Hawking entropy corresponding to the cosmological constant. Thus, from a sufficiently cosmic viewpoint, the entire organized part of the universe may be just a small coherent fluctuation in a random system with an enormous number (nearly a googolplex) of degrees of freedom. It may be that in the far future, after the universe has degenerated into a collection of frozen elementary systems, each in its own horizon volume, a new fluctuation in the Hawking radiation can form, and the whole process will begin again.
Let me conclude this section by repeating that the most important technical problem posed by this view of the AsDS universe is to realize the physical measurements we make in terms of exact mathematical statements about the finite dimensional Hilbert space associated with the spacetime.
Metaphysics
One of the most disturbing aspects of the proposal in this paper is that the theory of the universe involves a fixed integer N, the total number of quantum states in the universe. I believe that a discussion of the meaning of this number will depend on the distinction between equations of motion and boundary conditions in physics. It has long been apparent that even if we find the ultimate physical laws encoded in a set of equations of motion, we will still have to deal with the question of what determines the boundary conditions. In cosmology, this question has traditionally been split into two parts: "Do the spatial sections of a Friedmann-Robertson-Walker cosmology have a boundary (and what are the boundary conditions there)?", and "What are the initial conditions?". Einstein preferred closed cosmologies because he believed this eliminated the first of these questions. Various authors [13] have tried to address the second.
There is a well known problem associated with Einstein's suggestion, if one believes that quantum theory is the ultimate description of nature, and also believes in an ultraviolet cutoff. A closed universe with a UV cutoff must have a finite number of states. If we try to associate the cutoff with a cutoff of short distances, we immediately run into a problem. The volume of the universe changes with time, so the number of states allowed by a short distance cutoff would appear to change as well. This violates unitarity.
The advent of holographic cosmology [15], [8] has resolved this conundrum. The obvious conjecture that follows from this work is that the number of states in a cosmology is the exponential of one fourth of the area in Planck units of a maximal set of holographic screens. I believe that ultimately this prescription will be turned around. Cosmology will be derived from quantum mechanics, with spacetime geometry being computed from the number of quantum states.
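For the DS case this counting can be made concrete (a standard estimate, with my own numbers rather than figures from the text): in four dimensions the horizon radius is R_D = (3/Λ)^{1/2}, so a single cosmological horizon gives

$$
\log N = \frac{A}{4} = \pi R_D^2 = \frac{3\pi}{\Lambda} \sim 10^{122}
$$

in Planck units for the observed Λ ∼ 10^{-122}, which is the origin of the enormous numbers quoted elsewhere in this paper.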
From this point of view, the natural distinction between cosmological boundary conditions will be in terms of the number of quantum states that they admit. We first have the possibility of a finite number, and then infinity. We expect that systems with a finite number of states can describe either AsDS universes or recollapsing universes. It is likely that the distinction between the two is simply whether we require an infinite or a finite number of steps in our choice of time evolution.
With an infinite number of states it is natural to look for some operator on the Hilbert space whose eigenspaces with finite eigenvalue are finite dimensional and then to make a finer classification in terms of the behavior of the density of states at large eigenvalue. Geometrically we would expect this to map into the problem of black hole entropy in cosmologies with no finite area cosmological horizon.
I think that, apart from the apparent observational evidence for a cosmological constant, our reaction to the choice between finite and infinite cosmologies (in the present sense) can at best be an emotional one. On the one hand, it is reasonable to think that nothing is actually infinite - that infinity or infinitesimal always refers to an idealization that makes problems more easy to treat mathematically (in the practical, rather than the rigorous sense of mathematics). Then one will be saddled with the annoying question of why a particular finite number is chosen. This may lead one to prefer to accept infinity as a reality, though I would claim that the various choices among behaviors of the asymptotic spectrum of black holes will be equally annoying. One may find that insisting on a large asymptotic symmetry group somewhat restricts the possibilities, but the plethora of exactly stable Poincare and AdS vacua of M-theory makes this seem unlikely. The only theoretical basis for resolving this problem would seem to be to prove a theorem that every system with an infinite number of states which is asymptotically describable by a large smooth geometry becomes supersymmetric in the asymptotic limit. As I have noted, there is some meager evidence for this conjecture.
Given that N is finite, the question of how it is chosen might have two generic kinds of answer:
• In the fullness of time we might show that N had to satisfy some number theoretic property that is satisfied by [0, 1, 2, 216, 2^(10^120) + 23, 2^(10^250) + 13365, . . .]. Or perhaps it is the unique solution to some number theory problem.
• There is some meta-dynamics which gives rise to quantum systems with different values of N [16]. Perhaps it is even some kind of deterministic dynamics and could alleviate our unease with the application of probabilistic ideas to the whole universe. In such a system we might find either a true dynamical explanation of the value of N, or the framework for an anthropic determination of this single parameter.
The point about these possible answers is that they have very little to do with physics in the universe we observe (hence the title of this section). Our best strategy is probably to ignore the question. The most useful attitude would appear to be to assume N is a boundary condition and hope that many features of the dynamics have universal properties for large but finite N. Thus the characterization of the formula M_SUSY ∼ Λ^{1/8} as a formula for a critical exponent.
Some remarks on phenomenology
One of the most interesting features of the proposal in this paper is that it solves what I consider one of the primary phenomenological problems of M-theory, namely why we do not live in one of the many stable supersymmetric ground states of the theory. The answer is simply that we do not have enough states. Poincare invariant ground states have an infinite number of excitations, at least all of the scattering states of gravitons.
Our suggestion about the origin of SUSY breaking probably has more practical implications for SUSY phenomenology as well. For example, suppose that the generation structure of the standard model is related to a discrete gauge symmetry that is spontaneously broken at an energy scale well below the Planck scale. We have attributed the dominant contribution to SUSY breaking to very high energy black hole states. These states will be insensitive to the low energy breaking of generation symmetry and might well produce flavor singlet squark mass matrices. Alternatively, the mere fact that SUSY breaking comes from a thermal average over a large number of states might produce flavor singlet mass matrices without appeal to symmetries (of course, we probably want to have flavor symmetries to explain the quark mass matrix). One might imagine the possibility of deriving the minimal SUGRA spectrum, or some other simple pattern of SUSY breaking, from this scenario.
Another general conclusion would appear to be that the gravitino mass, as well as the masses of any moduli which originate from SUSY breaking, will be of order Λ^{1/4}. This causes well known cosmological difficulties, which must be solved.
Finally one may hope that the current approach to cosmology will eventually solve the vacuum selection problem of string/M-theory. In the limit of vanishing cosmological constant, our approach implies that the finite dimensional Hilbert space of an AsDS M-theoretic cosmology, approaches that of an asymptotically flat SUSY vacuum of M-theory. This is presumably the state which describes scattering of particles inside gravitationally bound clusters during the pre-asymptotic stage of the AsDS universe.
The question of which flat SUSY background we approach in the limit might depend on initial conditions -that is, in the small Λ limit, the Hilbert space might break up into superselection sectors and different cosmological evolutions might end up in different sectors. On the other hand, one might hope for a more unique and universal answer. At any rate, the question is certainly tied up with that of initial conditions for cosmology.
Certain features of the desired background can be understood from general considerations. It must be supersymmetric, and its low energy effective Lagrangian must have a small deformation corresponding to a SUSY violating DS space. This makes it virtually certain that the SUSY background cannot have any moduli. Small deformations of a SUSY Lagrangian with moduli will generally give rise to cosmologies with varying moduli, rather than a DS space. In [17] I discussed a general analysis of inflationary cosmologies deriving from M-theory. Approximate moduli were argued to be good inflaton candidates, and the discrepancy between the inflation and SUSY breaking scales was attributed to the existence of a submanifold of approximate moduli space where SUSY and a discrete R symmetry were restored. Much of the postinflationary dynamics of the universe depended on the dimension of this submanifold. The present considerations suggest that one wants it to be a point, as has long been advocated by Dine [18]. This suggests that, in order to find the vacuum state of M-theory that describes the universe approximately, one must search for an isolated point in the approximate moduli space of an N = 1 compactification, which preserves SUSY and a discrete R-symmetry.
Conclusions
It should be obvious that the claims made here are somewhat tentative and unformed. One aspect of the subject that I find rather confusing is the relation of the fundamental theory to the low energy effective Lagrangian. Despite the UV/IR correspondence, I believe it is correct that physics below the Planck scale is governed by a locally supersymmetric effective Lagrangian. In [12] I have suggested that local SUSY is in fact connected to the arbitrary choice of holographic screen, and should therefore be a fundamental symmetry, not to be broken. Since we expect the scale of SUSY breaking to be much smaller than the Planck scale there should be an effective Lagrangian description of low energy physics which is locally supersymmetric, which means that SUSY breaking appears spontaneously. The SUSY breaking scale and cosmological constant should simply be set by tuning parameters in this Lagrangian.
The confusing point is that in this description there appears to be a low energy origin for SUSY breaking. Some chiral field's F term gets a nonzero expectation value. I suspect that the correct description will simply introduce SUSY breaking through a Volkov-Akulov [19] goldstino multiplet. The SUSY breaking scale and cosmological constant will be put in by hand. They are related by a formula of the form M_SUSY = K M_P (Λ/M_P^4)^{1/8}. This formula can only be understood, and the constant K calculated, within the framework of the full theory. Similarly, the couplings of the Goldstino to other low energy fields, which determine the phenomenology of SUSY breaking, will depend on high energy physics. Only if the conjectures about relating high energy physics to black hole physics, which were adumbrated in section 4, are correct, will we be able to extract any details of the SUSY spectrum without a full understanding of the quantum mechanics of M-theory.
Dimerization of GPCRs: Novel insight into the role of FLNA and SSAs regulating SST2 and SST5 homo- and hetero-dimer formation
The process of GPCR dimerization can have profound effects on GPCR activation, signaling, and intracellular trafficking. Somatostatin receptors (SSTs) are class A GPCRs abundantly expressed in pituitary tumors where they represent the main pharmacological targets of somatostatin analogs (SSAs), thanks to their antisecretory and antiproliferative actions. The cytoskeletal protein filamin A (FLNA) directly interacts with both somatostatin receptor type 2 (SST2) and 5 (SST5) and regulates their expression and signaling in pituitary tumoral cells. So far, the existence and physiological relevance of SSTs homo- and hetero-dimerization in the pituitary have not been explored. Moreover, whether octreotide or pasireotide may play modulatory effects and whether FLNA may participate to this level of receptor organization have remained elusive. Here, we used a proximity ligation assay (PLA)–based approach for the in situ visualization and quantification of SST2/SST5 dimerization in rat GH3 as well as in human melanoma cells either expressing (A7) or lacking (M2) FLNA. First, we observed the formation of endogenous SST5 homo-dimers in GH3, A7, and M2 cells. Using the PLA approach combined with epitope tagging, we detected homo-dimers of human SST2 in GH3, A7, and M2 cells transiently co-expressing HA- and SNAP-tagged SST2. SST2 and SST5 can also form endogenous hetero-dimers in these cells. Interestingly, FLNA absence reduced the basal number of hetero-dimers (-36.8 ± 6.3% reduction of PLA events in M2, P < 0.05 vs. A7), and octreotide but not pasireotide promoted hetero-dimerization in both A7 and M2 (+20.0 ± 11.8% and +44.1 ± 16.3% increase of PLA events in A7 and M2, respectively, P < 0.05 vs. basal). Finally, immunofluorescence data showed that SST2 and SST5 recruitment at the plasma membrane and internalization are similarly induced by octreotide and pasireotide in GH3 and A7 cells. On the contrary, in M2 cells, octreotide failed to internalize both receptors whereas pasireotide promoted robust receptor internalization at shorter times than in A7 cells. In conclusion, we demonstrated that in GH3 cells SST2 and SST5 can form both homo- and hetero-dimers and that FLNA plays a role in the formation of SST2/SST5 hetero-dimers. Moreover, we showed that FLNA regulates SST2 and SST5 intracellular trafficking induced by octreotide and pasireotide.
Introduction
G protein-coupled receptors (GPCRs) are the largest family of membrane receptors. Because of their heavy involvement in human physiology and disease, they are major drug targets (1,2). The biochemical and pharmacological properties of GPCRs have been extensively characterized. One emerging aspect of GPCR signaling is their ability to form homo- and hetero-dimers, which has been implicated in the modulation of ligand binding affinity, signal transduction, and intracellular trafficking (3,4). Whereas constitutive dimerization is essential for the correct functioning of class C GPCRs (5, 6), its consequences for the function of other GPCRs can vary considerably (7,8). Of note, hetero-dimerization between two GPCRs of the same or different groups may endow the resulting hetero-dimer with unique signaling and pharmacological properties (9-15).
Somatostatin receptors (SSTs) comprise five family A GPCRs, named SST 1 -SST 5 , which are abundantly expressed in the endocrine system where they exert inhibitory actions on hormone secretion and cell proliferation (16)(17)(18)(19). Of these, SST 2 and SST 5 are the prevalent subtypes in the pituitary (20,21). Specifically, SST 2 and SST 5 are considered the main pharmacological targets of somatostatin analogs (SSAs) octreotide and pasireotide in the treatment of acromegaly caused by GH-secreting pituitary tumors (22). Octreotide displays a higher binding affinity for SST 2 (IC50 = 0.38 nM), and lower for SST 5 (6.3 nM), SST 3 (7.1 nM), and SST 1 (280 nM). On the contrary, pasireotide preferentially binds SST 5 (0.16 nM) and shows lower affinity for SST 2 (1 nM), SST 3 (1.5 nM), and SST 1 (9.3 nM) (23).
To date, whether SSTs are capable of forming homo- and hetero-dimers in the pituitary, and whether octreotide or pasireotide may exert any modulatory effects on these events, has not yet been investigated. Indeed, the field of SST dimerization has been only partially explored in simple cell systems transfected with different SSTs from diverse species (12, 13, 24-27, reviewed in 28). Moreover, the exact mechanisms governing GPCR dimerization are, with a few exceptions, largely unknown.
An involvement of molecular chaperones such as 14-3-3, HSP70, or receptor activity-modifying proteins (RAMPs) has been proposed (29). Scaffolding proteins and cytoskeletal elements, which are already implicated in the formation of specialized signaling subdomains at the plasma membrane, receptor mobility, cluster assembly, and internalization, may also be involved (30, 31). Among these, the actin-binding protein filamin A (FLNA) might be a good candidate. Thanks to its flexible V shape, its actin-binding domain, and its scaffolding domains, FLNA is involved in crosslinking of actin filaments and anchoring transmembrane receptors to the subcortical cytoskeleton, thus providing a scaffold platform for receptor spatial organization and signaling (32). By means of single-molecule imaging and in situ proximity ligation assay (PLA), we have previously demonstrated that SST 2 and SST 5 directly interact with FLNA (33, 34). In pituitary tumoral cell lines, the association of FLNA with both SST 2 and SST 5 is crucial for an efficient SSA-induced signal transduction (34,35). In addition, FLNA expression is required for SST 2 internalization, recycling to the plasma membrane and protein stability after prolonged agonist stimulation (35), and to maintain a stable amount of SST 5 in basal conditions by preventing both lysosomal and proteasomal degradation (34). However, whether FLNA may participate in SST dimeric assembly remains to be assessed.
In the present work, we used a PLA-based methodology for the in situ visualization and quantification of SST 2 /SST 5 dimerization in rat GH-secreting pituitary cells (GH3). Moreover, in order to investigate the role of FLNA, we used the human melanoma cell models either expressing (A7) or lacking (M2) FLNA.
In situ PLA represents a powerful strategy to study receptor dimerization also in its natural context at physiological expression levels (36,37). Moreover, we performed immunofluorescence experiments to demonstrate the specific FLNA-dependent modulation of SST 2 and SST 5 intracellular trafficking induced by octreotide and pasireotide.
M2 is a spontaneously FLNA-deficient cell line, established from the tumor of a patient with malignant melanoma (38). A7 is a stably transfected cell line derived from the M2 cell line. A7 and M2 cells were kindly provided by Dr. Fumihiko Nakamura (School of Pharmaceutical Science and Technology, Tianjin University, Tianjin, China) and were grown in Eagle's Minimum Essential Medium (EMEM) (ATCC, Manassas, VA, USA) supplemented with 8% Newborn Calf serum (NBCS), 2% fetal bovine serum (FBS), and antibiotics (Life Technologies, Carlsbad, CA, USA). For A7 cells, 200 µg/ml G418 was added (Merck KGaA, Darmstadt, DE). Cells were kept at 37°C in a humidified atmosphere with 5% CO 2 . Octreotide and pasireotide were provided by Novartis Pharma AG (Basel, CH) and used at 100-nM concentration.
Plasmid transfection
Expression vectors coding for human HA-SSTR2 (influenza hemagglutinin-tagged SSTR2) and SNAP-SSTR2 (SSTR2 fused to SNAP tag, a 20-kDa protein derived from the enzyme O6-alkylguanine-DNA alkyltransferase) were previously described (33). These vectors were transiently co-transfected in GH3 cells for 6 h in order to achieve low expression levels, resembling those of endogenous SSTR2. Lipofectamine 2000 was used as transfection reagent (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instruction.
In situ proximity ligation assay
GH3, A7, and M2 cells were seeded on 13-mm poly-L-lysine-coated coverslips at a density of 1.25 × 10^5 cells/well in 24-well plates and grown at 37°C for 18 h. The following day, cells were exposed or not to pasireotide 100 nM or octreotide 100 nM for 5 min. In the case of SSTR2/SSTR2 homo-dimer evaluation, cells were transiently transfected with HA-tagged SSTR2 and SNAP-tagged SSTR2, as described above. Cells were fixed with 4% paraformaldehyde (Merck KGaA, Darmstadt, DE) for 10 min at room temperature, washed three times with PBS, and incubated for 1 h at room temperature with blocking buffer (5% FBS, 0.3% Triton™ X-100, in PBS). To test the presence of SST 2 /SST 5 hetero-dimers, coverslips were incubated overnight at 4°C with primary rabbit anti-SST 2 UMB1 #ab134152 (1:50, Abcam, Cambridge, UK) and primary mouse anti-SST 5 #6675-1-Ig (1:200, Proteintech, Rosemont, IL, USA) antibodies. To test the presence of SST 5 /SST 5 homo-dimers, two different primary antibodies against SST 5 were used: a rabbit anti-SST 5 #PA3-209 (Thermo Fisher Scientific, CA, USA) and a mouse anti-SST 5 #6675-1-Ig (Proteintech, Rosemont, IL, USA), both diluted 1:200. To test the presence of SST 2 /SST 2 homo-dimers, primary mouse anti-HA #26183 (1:250, Thermo Fisher Scientific, CA, USA) and primary rabbit anti-SNAP #CAB4255 (1:800, Thermo Fisher Scientific, CA, USA) antibodies were used. All antibodies were diluted in antibody Diluent Reagent Solution (Life Technologies, Thermo Fisher, CA). As negative controls to detect potential unspecific signal, one of the primary antibodies was omitted. We used the Duolink In Situ PLA kit from Sigma-Aldrich (Merck KGaA, Darmstadt, DE). Briefly, Duolink Anti-Rabbit PLUS Probe (DUO92002, Sigma-Aldrich) and Duolink Anti-Mouse MINUS Probe (DUO92040, Sigma-Aldrich) were added and incubated for 1 h at 37°C. Then, Duolink In Situ Detection Reagents Green (Duolink, DUO92014) was used. Ligation-Ligase solution was added and incubated for 30 min at 37°C. Amplification-polymerase solution was subsequently added and incubated for 2 h (for SST 2 /SST 2 homo-dimer detection) or 18 h (for SST 2 /SST 5 hetero-dimer and SST 5 /SST 5 homo-dimer detection) at 37°C. Coverslips were mounted on glass slides with EverBrite™ Hardset Mounting Medium with DAPI (Biotium, Fremont, CA, USA) for subsequent observation under an epifluorescence microscope. Proximity ligation events were quantified with NIH ImageJ software after image deconvolution. The average number of PLA puncta per cell was determined from images acquired from randomly chosen fields per condition in three independent experiments and quantified as previously described (34).
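As an illustration of this kind of quantification, the following minimal sketch (not the authors' ImageJ workflow; the intensity threshold and minimum spot size are arbitrary assumptions) counts bright puncta in one deconvolved green-channel image:

import numpy as np
from scipy import ndimage

def count_pla_puncta(green_channel, intensity_threshold, min_pixels=2):
    """Count discrete PLA spots in a deconvolved 2D fluorescence image."""
    mask = green_channel > intensity_threshold           # keep pixels brighter than the cutoff
    labeled, n_spots = ndimage.label(mask)               # connected components = candidate puncta
    sizes = ndimage.sum(mask, labeled, index=range(1, n_spots + 1))
    return int(np.sum(np.asarray(sizes) >= min_pixels))  # discard single-pixel noise

# Average puncta per cell for one field, with nuclei counted on the DAPI channel:
# puncta_per_cell = count_pla_puncta(green_img, intensity_threshold=120) / n_nuclei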
Statistical analysis
The results are expressed as the mean ± SD. A paired two-tailed Student's t-test was applied to assess the significance of differences between two series of data. Statistical analysis was performed by GraphPad Prism 7.0 software, and P < 0.05 was accepted as statistically significant.
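For instance, the comparison of PLA counts between conditions can be reproduced in a few lines (the values below are placeholders, not data from this study):

from scipy import stats

# Mean PLA puncta per cell in three independent experiments (hypothetical numbers)
basal      = [35.2, 41.0, 38.5]
octreotide = [42.1, 49.8, 46.3]

t_stat, p_value = stats.ttest_rel(basal, octreotide)  # paired Student's t-test, two-tailed by default
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")         # P < 0.05 taken as significant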
Results
Detection of SST 2 /SST 2 and SST 5 /SST 5 homo-dimers in somatotroph and melanoma cells
In order to examine the occurrence of SST 2 /SST 2 homo-dimers in rat pituitary GH-secreting cells (GH3) and human melanoma cells A7 (FLNA-expressing cells) and M2 (FLNA-lacking cells), we combined the PLA approach with epitope tagging. Cells were transiently co-transfected with human HA-tagged SST 2 and SNAP-tagged SST 2 , and antibodies against HA and SNAP tags were then used for PLA experiments. Indeed, in preliminary experiments, we did not find a pair of anti-SST 2 antibodies raised in two different species, suitable for this application. To detect endogenous SST 5 /SST 5 homo-dimers in GH3, A7, and M2 cells, two separate anti-SST 5 antibodies raised in mouse and rabbit were used.
FIGURE 1. In situ detection of SST2/SST2 and SST5/SST5 homo-dimers. Representative in situ PLA experiment performed in GH3 (A), A7 cells (B), and M2 cells (C) showing SST 2 /SST 2 (upper panels) and SST 5 /SST 5 (lower panels) homo-dimers. For SST 2 /SST 2 homo-dimers, mouse anti-HA and rabbit anti-SNAP antibodies were used. For SST 5 /SST 5 homo-dimers, mouse and rabbit anti-SST 5 antibodies were used. PLA puncta representing homo-dimers are shown as green dots and nuclei are stained with DAPI in blue. A deconvoluted image of PLA events merged with DAPI is shown. White arrows indicate the localization of PLA puncta. Scale bars: 10 µm.
Effect of SSAs and FLNA on SST 2 /SST 5 hetero-dimerization in somatotroph and melanoma cells
Then, we investigated the presence of endogenous SST 2 / SST 5 hetero-dimers in GH3 cells by means of PLA analysis and tested the possible effects exerted by octreotide and pasireotide on this receptor dimeric state. As shown in Figure 2A, SST 2 /SST 5 hetero-dimers were detected under basal conditions and after treatment with 100 nM octreotide or pasireotide for 5 min. No effect on the number of PLA puncta corresponding to SST 2 /SST 5 hetero-dimers was observed.
To decipher the role of FLNA in the formation of SST 2 / SST 5 hetero-dimers and in the modulation of SSA-dependent effects on receptor assembly, PLA experiments were repeated in A7 and M2 cells. Our results showed that SST 2 and SST 5 were able to form hetero-dimers in A7 and M2 cells under basal physiological conditions. However, the absence of FLNA significantly impaired the amount of SST 2 /SST 5 heterodimers (-36.8 ± 6.3% reduction of PLA events in M2 cells, P < 0.05 vs. A7 cells). In both cell lines, a significant increase in the PLA events was observed after 5 min of incubation with octreotide (+20.0 ± 11.8% increase of PLA events in A7 cells, P < 0.05 vs. basal, and +44.1 ± 16.3% increase of PLA events in M2 cells, P < 0.05 vs. basal) but not pasireotide ( Figure 2B). No SST 2 /SST 5 dimer signals were detected in negative controls (Supplementary Figure 2).
FLNA-dependent modulation of SSA-induced intracellular trafficking of SST 2 and SST 5
Next, we used immunofluorescence to follow SSA-induced intracellular trafficking of endogenous SST 2 and SST 5 in GH3 cells. Our imaging data showed that SST 2 and SST 5 colocalize throughout the cell body in the absence of stimuli. Exposure of cells to octreotide or pasireotide for 5 min resulted in rapid receptor translocation to the plasma membrane. SST 2 and SST 5 were then similarly internalized upon longer stimulation with octreotide and pasireotide, as shown by the intracellular colocalization signal (Figure 3A).
FIGURE 2. In situ detection of SST2/SST5 hetero-dimers. Representative in situ PLA experiment showing SST 2 /SST 5 hetero-dimers in GH3 (A) and melanoma cells (B) before and after treatment with 100 nM octreotide or pasireotide for 5 min. Rabbit anti-SST 2 and mouse anti-SST 5 antibodies were used. Green dots represent PLA events and indicate close proximity between SST 2 and SST 5 . Graphs resulting from the quantification of total SST 2 /SST 5 puncta representing PLA events are shown for each cell line. For (B), a reduction in basal SST 2 /SST 5 hetero-dimers in M2 cells compared to A7 cells and an increase in SST 2 /SST 5 hetero-dimers in A7 and M2 cells treated with octreotide compared to basal are shown (n = 3; the number of PLA puncta per cell was quantified for 150 cells randomly chosen from different fields per condition; *P < 0.05 vs. basal A7 cells; §P < 0.05 vs. corresponding basal). Scale bars: 10 µm.
FIGURE 3. SST2 and SST5 intracellular trafficking. Representative immunofluorescence experiment showing subcellular localization of SST 2 (green) and SST 5 (red) in GH3 cells (A), A7 cells (B), and M2 cells (C) stimulated or not with 100 nM octreotide or pasireotide for the indicated times. Nuclei are stained with DAPI in blue. Overlay of green and red channels is shown, and white arrows indicate SST 2 and SST 5 subcellular colocalization. A deconvolution algorithm was applied to further show SST 2 and SST 5 colocalization areas (yellow signals, right columns). Scale bars: 10 µm.
To investigate the role of FLNA in the colocalization of human SST 2 and SST 5 during internalization experiments, A7 and M2 cells were used. Under basal conditions, SST 2 and SST 5 share the same localization at intracellular sites and on the plasma membrane in both cell lines. In A7 cells, stimulation with 100 nM octreotide and pasireotide induced a rapid cell membrane recruitment of both receptors (at 5 min) and receptor accumulation in a perinuclear region (at 30 min) ( Figure 3B). In M2 cells, whereas a 5-min exposure to octreotide resulted in complete SST 2 and SST 5 translocation to the plasma membrane, similarly to what is observed in A7 cells, pasireotide also promoted a rapid internalization of a subset of both receptors. After 30 min of stimulation, only pasireotide led to SST 2 and SST 5 accumulation in a perinuclear region, while octreotide failed to internalize both receptors, as shown by the persistence of membrane signals ( Figure 3C).
Discussion
SST 2 and SST 5 are the most abundantly expressed SSTs in the pituitary, where they mediate octreotide and pasireotide inhibitory effects on hormone secretion and cell proliferation, representing the main pharmacological targets for GH-secreting pituitary tumors in acromegaly (22). FLNA is known to directly bind SST 2 and SST 5 influencing their expression, signaling, and intracellular trafficking (34)(35)(36). To date, the co-expression of SST 2 and SST 5 has been reported in pituitary tumor cells (17,39). However, the occurrence of SSTs homo-and heterodimerization and the modulation by SSAs as well as a possible involvement of FLNA in this process have been only postulated (40, 41). In the present study, we provided evidence of SST 2 and SST 5 homo and hetero-dimer assembly in pituitary somatotroph GH3 cells and melanoma cells and the impact of FLNA and octreotide, but not pasireotide, in the modulation of SST 2 /SST 5 hetero-dimer formation. Moreover, we uncovered a novel role of FLNA in SSA-dependent SST 2 and SST 5 internalization.
First, by in situ PLA we detected the presence of homo-dimers of transiently transfected human SST 2 in GH3 cells. This finding is consistent with previous immunoprecipitation and fluorescence resonance energy transfer (FRET) results on human, rat, and porcine SST 2 transfected in CHO and HEK293 cells, which revealed interspecies variations in the response to somatostatin (13,24,26). Here, we showed that SST 2 /SST 2 homo-dimer assembly also occurs in melanoma cells. Moreover, our results suggest an involvement of FLNA-independent processes, since both A7 and M2 cell lines displayed PLA signals under basal conditions. As regards SST 5 , our PLA experiments showed that endogenous rat and human receptors assemble to form homo-dimers in GH3 and melanoma cells both under basal and stimulated conditions, respectively. This is in apparent contrast to previous studies which suggested that human SST 5 does not dimerize after its synthesis but rather following somatostatin treatment (25). However, there is evidence that the stringent protein solubilization conditions used to study dimerization with co-immunoprecipitation may induce dimer dissociation, whereas agonist binding may stabilize receptor dimers (42). Since PLA does not require protein solubilization and has a higher sensitivity, it is plausible that our approach was able to reveal constitutive SST 5 dimers that could not be detected in previous biochemical studies.
Since SST 5 /SST 5 homo-dimers were present in both A7 and M2 cells, our results rule out a role of FLNA in SST 5 homodimerization. Unfortunately, we could not perform a quantitative analysis of possible effects exerted by SSAs on SST 2 /SST 2 and SST 5 /SST 5 homo-dimers due to technical limitations. Specifically, in the case of SST 2 /SST 2 homodimers, the difference in SST 2 transfection efficiency among cells represented a bias, that, in turn, could have affected the PLA outcome; in the case of SST 5 /SST 5 homo-dimers, the PLA coalescent signal (a consequence of the use of two different antibodies directed toward the same protein) could have rendered the puncta counting step unreliable.
Regarding GPCR hetero-dimerization, this process may be critical for the correct functionality of the GPCR, as is the case of the γ-aminobutyric acid (GABA B ) receptor (5, 6), or, in some instances, may result in the alteration of the single receptor functioning (9-15). Here, the occurrence of endogenous SST 2 /SST 5 hetero-dimers was reported for GH3 cells, although they remained insensitive to octreotide and pasireotide stimulation. In melanoma cells, SST 2 /SST 5 hetero-dimers were also observed, but, interestingly, the absence of FLNA resulted in a significant reduction of their amount under basal conditions, indicating that FLNA facilitates, but is not essential for, bringing SST 2 and SST 5 in close proximity to dimerize. Hetero-dimer formation was enhanced by octreotide but not pasireotide in both A7 and M2 cells, suggesting that activation of SST 2 but not SST 5 is an FLNA-independent driving factor for receptors to assemble. It has to be noted that the effect of octreotide in M2 cells seemed even more pronounced. This observation raises the hypothesis of a less organized and more random occurrence of receptor interactions in the absence of FLNA. Indeed, in previous single-molecule studies, the disrupted FLNA-SST 2 interaction resulted in upregulation of freely diffusing SST 2 in CHO cells (33). However, in line with our finding of octreotide-promoted SST 2 /SST 5 hetero-dimerization, co-immunoprecipitation and FRET data published by Grant and colleagues already documented an upregulation of human SST 2 /SST 5 hetero-dimers in HEK293 cells following the selective activation of SST 2 but not SST 5 or their co-stimulation, with implications for receptor dynamics such as association of β-arrestin to SST 2 , receptor recycling, and signal transduction efficiency (27).
An issue that should be considered is that the experiments testing SST 2 /SST 2 homo-dimerization were performed by transient transfection of human SST2 in rat GH3 cells, since a pair of anti-SST 2 antibodies raised in two different species, suitable for this application, and a human tumoral pituitary cell line were unavailable. Admittedly, a limitation of the present study is the lack of primary cell cultures from GH-secreting pituitary tumors.
Moreover, further studies analyzing a possible differential function of dimers compared to receptor monomers in terms of enhanced response to SSA treatment or activation of different intracellular pathways will open the way for the development of new therapeutic strategies for pituitary tumors based on drugs targeting the formation of SST dimers.
Finally, we studied SST 2 and SST 5 trafficking in order to highlight a potential role of FLNA in the modulation of ligand-mediated receptor internalization. We first observed a similar efficiency of octreotide and pasireotide in the recruitment of SST 2 and SST 5 at the plasma membrane and subsequent internalization in GH3 cells and A7 cells, supporting the idea of an efficient activation of these receptors exerted by both compounds in these specific cell models. These findings were only partially in line with data present in the literature reporting different effects on SST dynamics promoted by octreotide and pasireotide (43,44). As already reported for SST 2 in experiments of FLNA silencing in somatotroph cells (45), here we observed a lesser extent of octreotide-induced internalization not only of SST 2 but also of SST 5 in M2 cells compared to A7 cells. In addition, we documented a faster mobilization and internalization of both SST 2 and SST 5 in M2 cells compared to A7 cells promoted by pasireotide. These data point to a specific FLNA-dependent modulation of SST 2 and SST 5 intracellular trafficking induced by octreotide and pasireotide.
In conclusion, this work provides novel insights into the molecular mechanisms modulating SST assembly and functioning. Although the SST dimer interfaces remain to be established, such studies may open the way for the design of drugs targeting specific interactions between receptors of the SSTs family as well as interaction between scaffold proteins and SSTs.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
Incidence of positive peritoneal cytology in patients with endometrial carcinoma after hysteroscopy vs. dilatation and curettage
Abstract Background The aim of the study was to compare the frequency of positive peritoneal washings in endometrial cancer patients after either hysteroscopy (HSC) or dilatation and curettage (D&C). Patients and methods We performed a retrospective analysis of 227 patients who underwent either HSC (N = 144) or D&C (N = 83) and were diagnosed with endometrial carcinoma at the University Medical Centre Maribor between January 2008 and December 2014. The incidence of positive peritoneal cytology was evaluated in each group. Results There was no overall difference in the incidence of positive peritoneal washings after HSC or D&C (HSC = 13.2%; D&C = 12.0%; p = 0.803). However, a detailed analysis of stage I disease revealed significantly higher rates of positive peritoneal washings in the HSC group (HSC = 12.8%; D&C = 3.4%; p = 0.046). Among these patients, there was no difference between both groups considering histologic type (chi-square = 0.059; p = 0.807), tumour differentiation (chi-square = 3.709; p = 0.156), the time between diagnosis and operation (t = 0.930; p = 0.357), and myometrial invasion (chi-square = 5.073; p = 0.079). Conclusions Although the diagnostic procedure did not influence the overall incidence of positive peritoneal washings, HSC was associated with a significantly higher rate of positive peritoneal cytology in stage I endometrial carcinoma compared to D&C.
Introduction
The diagnosis of endometrial cancer can be made preoperatively by obtaining a sample of endometrial tissue either with office endometrial biopsy, most commonly done with a Pipelle aspiration catheter, hysteroscopy (HSC), or dilatation and curettage (D&C). 1 The latter two procedures are most commonly used in Slovenia. HSC has been shown to be highly accurate in diagnosing endometrial cancer 2,3 and is considered a gold standard. 1 Conflicting evidence has been published in the past regarding the risk of intraperitoneal spread of malignant cells after HSC with the use of distension media. [4][5][6][7] In 2007, a retrospective study from our institution reported a significantly higher incidence of positive peritoneal washings after HSC compared to D&C. 8 However, only 24 patients in this study had undergone HSC compared to 122 who were diagnosed with D&C. 8 In recent years, HSC has become an established diagnostic tool at our institution and is now performed more frequently than D&C. The aim of our present study was to find out whether the difference in the incidence of positive peritoneal washings between HSC and D&C persists after including a higher number of patients with hysteroscopy.
Patients and methods
This retrospective study included all consecutive patients who had endometrial carcinoma diagnosed preoperatively with either D&C or HSC between January 2008 and December 2014 at the University Medical Centre Maribor, Slovenia. The study included patients who had more than one D&C or more than one HSC. Patients who had undergone both D&C and HSC were excluded from the study. The study was approved by our institution's ethics committee (Approval No. 13-03/15, November 26, 2015). All patients signed a written informed consent that their medical records can be used for research matters retrospectively.
HSC was performed in the office setting or under general anesthesia. Saline solution warmed to body temperature was used as the distension medium. In the office setting, the distension medium was installed into the pressure cuff and the intrauterine pressure was set between 80-150 mmHg. Intrauterine pressure was controlled with the Vario Flow device (Pelta, Slovenia) in operative HSC. 9 D&C was performed under general anesthesia. Curettage of the cervical canal and the uterine cavity was performed separately. Tissue samples for histologic examination were obtained during both procedures.
During the final surgery for endometrial carcinoma, samples of peritoneal washings from the pouch of Douglas were obtained for cytologic examination. Irrigation of the peritoneal cavity with saline solution was performed to obtain samples in cases with no free fluid. The samples were inspected by an expert cytopathologist. In cases of suspicious peritoneal cytology, additional calretinin, MOC 31, HBME 1 and Ber-EP4 immunostaining was performed during the clarification process. In cases with small numbers of positive cells after immunostaining, peritoneal cytology was described as suspicious. We therefore included suspicious results in the analysis of positive peritoneal cytology.
The primary statistical outcome was the incidence of positive peritoneal washings after HSC and after D&C. A detailed analysis of tumour histopathologic characteristics was performed, including histopathologic type, tumour differentiation, depth of myometrial invasion, lymphovascular invasion and FIGO stage. Different types of endometrial carcinomas were identified in the study population, with endometrioid carcinoma representing the majority of cases (N = 211; 93.0%). Other carcinomas (serous adenocarcinoma: N = 8; clear cell adenocarcinoma: N = 8) were assigned to the non-endometrioid group for the purpose of the study. Tumour differentiation was reported as good, moderate or poor. The depth of myometrial invasion was reported as no invasion, less than half of the myometrium or more than half of the myometrium. Patients treated before 2009 who had been staged according to the 1988 FIGO classification were restaged according to the new 2009 FIGO classification for statistical analysis. The time interval from diagnosis to final surgery was also analyzed.
Statistical analysis was performed with SPSS software version 22.0 (IBM, Armonk, NY, USA). Descriptive analysis, chi-square test and t-test of independent samples were performed as applicable. A p value of less than 0.05 was considered statistically significant.
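As a hedged illustration of the primary comparison described above, the snippet below recomputes a chi-square test on a 2×2 table with SciPy. The counts are not taken from the study database; they are approximate values back-calculated from the reported group sizes and percentages (about 19/144 positive after HSC and 10/83 after D&C), so the result only roughly reproduces the reported overall statistic.

```python
# Illustrative chi-square test of positive peritoneal washings by procedure.
# Counts are approximations derived from the reported percentages, not raw data.
from scipy.stats import chi2_contingency

observed = [
    [19, 144 - 19],  # HSC: positive, negative washings (approximate)
    [10, 83 - 10],   # D&C: positive, negative washings (approximate)
]

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```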
Results
Between January 2008 and December 2014, 266 patients had uterine cancer diagnosed with D&C and/or HSC. A total of 227 patients who had either HSC (N = 144) or D&C (N = 83) as well as available information on peritoneal washings were included in the statistical analysis. Two hundred and eleven (93.0%) patients had endometrioid endometrial carcinoma and 16 (7.0%) had non-endometrioid endometrial carcinoma. The differences between both groups regarding the differentiation, myometrial invasion and tumour stage are shown in Table 1. A significantly higher rate of poorly differentiated tumours (chi-square = 29.114; p < 0.001) and higher stages (chi-square = 16.019; p = 0.025) were noted in the non-endometrioid group.
Overall, there was no significant difference in the incidence of positive or suspicious peritoneal washings regarding the procedure performed during the diagnostic evaluation (13.2% after HSC, 12.0% after D&C; chi-square = 0.062; p = 0.803).
The groups (HSC vs. D&C) did not differ in the prevalence of histologic types of the tumour, depth of myometrial invasion and lymphovascular invasion (Table 2). However, there were significant differences in tumour differentiation as well as FIGO stage, with more patients having FIGO stage I disease in the hysteroscopy group (Table 2). Due to this difference, we conducted the analysis only in the subgroup of patients with stage I disease. The HSC and D&C groups of stage I patients did not differ in tumour differentiation, the prevalence of histologic types of the tumour, the time from diagnosis to operation and myometrial invasion. A separate evaluation of patients with stage I tumours showed that 12.8% in the HSC group and only 3.4% in the D&C group had positive peritoneal washings. This difference was statistically significant (chi-square = 2.422; p = 0.046) (Table 3).
One out of 15 FIGO stage I patients with positive peritoneal washings after hysteroscopy had disease recurrence by the end of April 2015 (mean follow-up 40.2 months). Neither of the two FIGO stage I patients with positive peritoneal washings after D&C had disease recurrence in the same period (mean follow-up 39.5 months).
Discussion
The possibility of microscopic intraperitoneal spread of endometrial cancer cells after hysteroscopy has been a subject of debate for more than a decade. In our study, we did not find an increased incidence of positive peritoneal washings after hysteroscopy in comparison to D&C in the overall study population of patients with endometrial carcinoma. Several other studies similarly found no association between hysteroscopy and an increased rate of positive peritoneal cytology. 5,6 On the other hand, Bradley et al. 7 reported a higher frequency of positive or suspicious peritoneal cytology after hysteroscopy compared to blind endometrial sampling using logistic regression controlling for confounders of grade and stage. They also reported a higher rate of disease upstaging (according to the 1988 FIGO staging system) after hysteroscopy attributed solely to the positive cytology. Similar results have been reported by Zerbe et al. 10 and Obermair et al. 11 In a study conducted at our institution in 2007 8, positive peritoneal cytology was present in 12.5% of patients after hysteroscopy and only in 1.6% after D&C. The difference was statistically significant. In a meta-analysis of nine trials including 1015 patients with confirmed endometrial carcinoma, Polyzos et al. 12 evaluated the rate of positive peritoneal washings after hysteroscopy in comparison to other diagnostic procedures or no diagnostic procedures. They concluded that the frequency of positive peritoneal washings was significantly higher after hysteroscopy. The analysis also revealed a higher rate of disease upstaging based only on the positive peritoneal cytology. A detailed literature search performed by Guralp and Kushner 13 revealed 0-83% of positive peritoneal cytology after hysteroscopy and 0-13.6% after D&C. However, the authors emphasized a number of unanswered questions regarding the type and volume of distension medium, intrauterine pressure during the procedure, time interval between hysteroscopy and definitive surgery, stage, grade of the disease and duration of the procedure. 13 Another meta-analysis by Chang et al. 14 also reported higher rates of positive peritoneal cytology after hysteroscopy. Nevertheless, a detailed analysis of patients with stages I or II failed to show significantly higher rates of positive peritoneal cytology in patients who had hysteroscopy.
Interestingly, our results showed a significantly higher incidence of positive or suspicious peritoneal cytology in patients with stage I disease who were diagnosed with hysteroscopy compared to those diagnosed with D&C. This is an unexpected finding because the disease at this stage is confined to the uterus. For example, only 3.3% of patients with stage I and II endometrial cancer in a large retrospective analysis by Garg et al. 15 had positive peritoneal cytology. In our study, the rate of positive or suspicious peritoneal cytology in stage I disease was 3.3% in the D&C group but as much as 12.1% in the hysteroscopy group. Positive or suspicious peritoneal cytology was shown to be more frequent after hysteroscopy in endometrial carcinoma patients who would be staged as FIGO IA in the new staging system by Obermair et al. 11 On the other hand, Biewenga et al. 6 showed no association between hysteroscopy and the rate of positive peritoneal washings in stage I disease.
Saline solution was used as the distension medium in all our patients in the hysteroscopy group. Hysteroscopy with saline solution was specifically linked to a higher rate of positive peritoneal cytology in a meta-analysis by Polyzos et al. 12 In a metaanalysis by Chang et al. 14 , the distension medium was either saline solution or 5% glucose solution.
Neither of these two meta-analyses found a connection between intrauterine pressure during hysteroscopy higher than 100 mmHg and a higher incidence of positive peritoneal cytology. 12,14 In our study, the exact intrauterine pressure during hysteroscopy was not known for each patient individually due to the retrospective nature of the analysis.
The time interval between the diagnostic procedure and definitive surgery was similar in patients with positive and negative peritoneal cytology in our study. This is in line with evidence from another retrospective study in 196 patients with endometrial cancer diagnosed with hysteroscopy. 16 Based on the data from our retrospective study, we cannot give a definite reason for the significantly higher incidence of positive peritoneal washings after HSC compared to D&C in stage I disease. We used saline solution as the distension medium, which has been previously associated with higher rates of positive peritoneal cytology. 12 Unfortunately, we do not have the exact information on the intrauterine pressure during HSC and the duration of the diagnostic procedure for each patient and therefore we cannot draw conclusions about the influence of these factors on peritoneal cytology.
Another important limitation of our study is the inclusion of suspicious peritoneal cytology in the positive peritoneal cytology group. Even after immunostaining, most of the cases without evident malignant cells remained cytologically suspicious because a positive immune reaction was seen in only a small fraction of cells. However, it is not easy to differentiate positive from suspicious cytology because severe atypia of reactive mesothelial cells may be interpreted as suspicious. We are aware of this methodological limitation and should aim to lower the incidence of suspicious peritoneal cytology in the future, firstly by obtaining a sufficient amount of fluid for cytological analysis during final surgery and secondly with accurate cytological diagnosis. Some published research on this subject also included positive and suspicious cytology in the same group. 7,8,11 Our data show that among patients with positive or suspicious peritoneal washings after hysteroscopy, in FIGO stage I patients, one out of 15 had local disease recurrence during follow-up of approximately 40 months, whereas neither of the two with positive washings after D&C had a recurrence. As these numbers are small, further research is necessary to draw relevant conclusions. Conflicting results exist in the literature regarding the prognostic significance of positive peritoneal cytology. 17,18 The updated FIGO staging system from 2009 excluded positive peritoneal cytology as a stage-defining variable. Previously, all patients with positive peritoneal cytology were upstaged to stage IIIA. 19 In an analysis of 14,704 patients, Garg et al. 15 reported peritoneal cytology to be associated with survival in univariate analysis along with race, age, histology, grade and the number of removed lymph nodes. In multivariate analysis, positive peritoneal cytology remained an independent prognostic factor in stages I and II. Shiozaki et al. 20 studied the influence of positive peritoneal washings on the prognosis of 265 patients with stage I endometrial cancer. Progression-free survival was significantly lower in the group with positive peritoneal cytology. Other factors associated with progression-free survival in univariate analysis were lymph node dissection and vessel permeation, but positive peritoneal cytology was the most influential factor. 20 Disease-free survival has been shown to be 91% in FIGO stage I patients and 52.5% in those with FIGO stage II, III and IV. 21 In the study by Garg et al., survival in patients with stage I endometrioid adenocarcinoma with positive peritoneal washings was significantly poorer than in patients with negative peritoneal washings (88.2% vs. 98.6%). 15 In conclusion, the diagnostic procedure did not influence the overall incidence of positive peritoneal washings in our study. However, hysteroscopy was associated with a significantly higher rate of positive peritoneal cytology in stage I endometrial carcinoma. Although statistically significant, this finding must be interpreted with caution because of the small sample size of this subgroup. In addition, it is still not known whether iatrogenic dissemination of malignant cells bears the same influence on disease prognosis as spontaneous dissemination. Despite being excluded as a stage-defining variable, peritoneal cytology should still be reported separately as requested by FIGO. 15 We believe that additional trials are needed to further clarify the prognostic value of positive peritoneal cytology after hysteroscopy, particularly in the early stages of endometrial cancer.
Alterations in the Cellular Metabolic Footprint Induced by Mayaro Virus
: Mayaro virus is a neglected virus that causes a mild, dengue-like febrile syndrome characterized by fever, headache, rash, retro-orbital pain, vomiting, diarrhea, articular edemas, myalgia, and severe arthralgia, symptoms which may persist for months and become very debilitating. Though the virus is limited to forest areas and is most frequently transmitted by Haemagogus mosquitoes, Aedes mosquitoes can also transmit this virus and, therefore, it has the potential to spread to urban areas. This study focuses on the metabolic footprinting of Vero cells infected with the Mayaro virus. Nuclear magnetic resonance combined with multivariate analytical methods and pattern recognition tools found that metabolic changes can be attributed to the effects of Mayaro virus infection on cell culture. The results suggest that several metabolite levels vary in infection conditions at different time points. There were important differences between the metabolic profile of non-infected and Mayaro-infected cells. These organic compounds are metabolites involved in the glycolysis pathway, the tricarboxylic acid cycle, the pentose phosphate pathway, and the oxidation pathway of fatty acids (via β-oxidation). This exometabolomic study has generated a biochemical profile reflecting the progressive cytopathological metabolic alterations induced by Mayaro virus replication in the cells and can contribute to the knowledge of the molecular mechanisms involved in viral pathogenesis.
Introduction
Mayaro virus (MAYV) is a neglected arbovirus (ARthropod-BOrne virus) classified in the family Togaviridae, genus Alphavirus. This arthritogenic alphavirus causes a dengue-like febrile syndrome, sharing many symptoms with Chikungunya fever, including headache, rash, retro-orbital pain, vomiting, diarrhea, articular edemas usually associated with myalgia, and severe arthralgia/arthritis that can be very debilitating and can endure for months [1]. The viral genome is composed of a single-stranded, positive-sense RNA molecule. The virus has a cell-derived lipid enveloped around an icosahedral capsid that is 60-70 nm in diameter [2].
Since the first identification of the virus in Trinidad in 1954 [3], epidemiological and serological evidence of virus circulation has been found in Brazil (mainly in the Amazon region and the country's Central plateau) [4][5][6][7][8], Bolivia, Peru, Ecuador, Colombia, Venezuela, Trinidad, Guyana, French Guiana, Suriname, Panamá, Costa Rica, and Honduras. Three time points were chosen to assess metabolic alterations at different stages of the viral replicative cycle: early (2 h), after one complete cycle (6 h), and late (12 h). The results found provide information on which metabolic pathways the virus requires for replication and can shed light on these mechanisms, thus contributing to the development of new treatments or vaccines.
Materials and Methods
An outline of the experimental approach can be found in Figure 1.
Figure 1. An outline of the approach used in this work. First, Vero cell monolayers were infected with MAYV. At 2, 6, and 12 hpi, samples were collected and filtered in Vivaspin units. The filtrate was used to detect metabolites by NMR.
Cell Cultures and Virus
The viral strain BeAR-20.290 MAYV used in this study was isolated from Haemagogus mosquitoes in 1960 in Pará State, Brazil. It was initially propagated in suckling mice brains (Mus musculus), and then in C6/36 cells to produce the viral stock. This viral strain grows well in Vero cells, a mammalian cell line. It is expected that the metabolic alterations induced by the virus in this cell lineage are quite similar to those induced in humans [42].
Vero cells were seeded in six-well plates with MEM supplemented with 10% FBS for 24 h at 37 °C and 5% CO2 with 95% confluence. Next, the monolayer was infected with MAYV at a multiplicity of infection (MOI) of 5 to guarantee that all cells in the monolayer would be infected. The Vero cell culture supernatant was collected at 2, 6, and 12 h post-infection (hpi). Vero E6 and C6/36 cell lines were both purchased from the American Type Culture Collection, or ATCC (Manassas, VA, USA). All the monolayers were treated using MEM and FBS from the same lot, and the cells were at the same passage for each time point collected. Six samples of normal cells and of infected cells were collected simultaneously for each time point.
Preparations of Samples for NMR
The Vivaspin filtration membrane has residual quantities of glycerine and sodium azide, which interfere with NMR analyses. Before use, the Vivaspin membranes were pre-washed 20 times with 2 mL of purified (deionized) water, and the tubes were stored in a refrigerator (4 °C) with purified water covering the membrane surface until the NMR procedures.
All the infected and non-infected Vero cell supernatants collected at periods of 2, 6, and 12 hpi were thawed and added to the Vivaspin unit. They were then centrifuged at 4000 rpm for 10-15 min at 4 °C to filter extracellular metabolites. Finally, 550 µL of the filtrate was added to 50 µL of D2O and transferred to 5 mm tubes appropriate for NMR analyses.
Proton Nuclear Magnetic Resonance (NMR) Spectroscopy
NMR spectra were acquired on a Bruker Avance HD III spectrometer operating at 600 MHz and equipped with a triple-channel cryoprobe. The standard Bruker 1D pulse sequence NOESYPR1D was used with a mixing time of 100 ms. Sixteen scans were collected, as were four dummy scans. A spectral width of 14 ppm and 32k data points were used.
The relaxation delay was set to 5 s with 0.2 ms of gradient recovery. All measurements were performed at 293 K. Before Fourier transformation, a line broadening of 1 Hz was applied to each free induction decay (FID). Next, each spectrum was manually phase-corrected and referenced to the methyl doublet of lactate at 1.33 ppm.
To aid in metabolite identification, TOCSY spectra were acquired in selected samples using a DIPSI sequence for mixing and pre-saturation for water suppression. A mixing time of 80 ms was chosen, and spectra were collected using 64 scans with 4k data points in the direct dimension and 512 data points in the indirect dimension. The relaxation delay was maintained at 5 s. ¹³C-¹H HSQC spectra were also acquired to support metabolite identification. Each spectrum was collected using 1k data points in the direct dimension and 256 in the indirect dimension, and 150 scans were recorded. A relaxation delay of 1.5 s was used, and a gradient recovery of 0.2 ms was applied. All spectra were processed in the TopSpin software, version 3.2 (Bruker, Germany).
Chemometric and Statistical Analyses
Binning was manually performed to remove noise and water signal regions. The spectral ranges selected were uploaded onto the MetaboAnalyst web server [43,44] for further analysis. The dataset presented did not contain any missing data and underwent Pareto scaling in MetaboAnalyst. Principal component analysis (PCA) and partial least squares-discriminant analysis (PLS-DA) were used to first determine whether footprint-based metabolic differences were present in the control and infected cells and to then rank NMR signals for subsequent metabolite identification. PLS-DA is a supervised method and, as such, was validated using leave-one-out cross-validation. Both Welch's two-sample test (p-value ≤ 0.05) and a fold change analysis (fold change less than 0.5 or higher than 2.0) were used to filter signals that underwent metabolite identification. Welch's two-sample test is performed using the mean and standard error of each group, as shown in Equation (1). The larger the t value, the more likely it is that the two groups are distinct with respect to the variable used in the calculation.
t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{s^2_{\bar{X}_1} + s^2_{\bar{X}_2}}}    (1)
where \bar{X}_{1,2} are the group means and s^2_{\bar{X}_{1,2}} are the squared standard errors of the means. Fold change is defined as the logarithm of the ratio between two averages, for example, between the infected and control averages. The higher the fold change value, the more important the signal is for discriminating between groups. Likewise, a small fold change value indicates that the corresponding signal has decreased in the infected group and is thus also potentially important.
Both indicators are combined into the so-called Volcano Plot, and, thus, signals that are potentially important for discriminating between control and infected samples are selected. It is important to note that both indicators (p-value and fold change) were used to select signals for metabolite identification, reducing the number of signals from thousands to twenty. No further conclusions regarding metabolic differences between conditions can be deduced from the indicators.
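The signal-selection step described above can be sketched in a few lines of Python; this is only an illustration of the two filtering criteria (Welch's test and the fold-change threshold), not the MetaboAnalyst implementation, and the input arrays here are random placeholders standing in for binned spectra.

```python
# Minimal sketch of volcano-plot signal selection: Welch's two-sample test
# plus a log2 fold-change threshold, applied column-wise to binned spectra.
import numpy as np
from scipy.stats import ttest_ind

def select_signals(control, infected, p_cut=0.05, fc_thresh=2.0):
    # Welch's t-test per NMR signal (unequal variances, hence equal_var=False).
    _, p_val = ttest_ind(infected, control, axis=0, equal_var=False)
    # Fold change: log2 ratio of group means for each signal.
    log2_fc = np.log2(infected.mean(axis=0) / control.mean(axis=0))
    # Keep signals meeting both criteria (p <= 0.05 and FC >= 2 or <= 0.5).
    keep = (p_val <= p_cut) & (np.abs(log2_fc) >= np.log2(fc_thresh))
    return np.where(keep)[0]

# Placeholder data: 6 control and 6 infected samples, 500 binned signals.
rng = np.random.default_rng(0)
control = rng.lognormal(size=(6, 500))
infected = rng.lognormal(size=(6, 500))
print(f"{select_signals(control, infected).size} signals pass both criteria")
```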
NMR signals passing the criteria listed above and ranked according to their PCA loadings were identified using the Chenomx NMR Suite, version 8.1 (Chenomx; Edmonton, AB, Canada). Each spectrum was manually visualized in the TopSpin software, version 3.2 (Bruker; Billerica, MA, USA), and no considerable resonance shifts across different samples were observed for the signals selected. After metabolite identification, specific resonances were selected to perform signal integrations. The signals selected were not in crowded regions and could therefore be associated with each individual metabolite concentration in the cell culture medium.
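A rough sketch of the integration step is given below; the ppm window for the lactate methyl doublet (near 1.33 ppm, as referenced in the text) is illustrative, and the spectrum is a synthetic stand-in rather than real data.

```python
# Hypothetical example of integrating a selected, uncrowded resonance to
# obtain a relative metabolite level (rectangle-rule sum over a ppm window).
import numpy as np

def integrate_region(ppm, intensity, low, high):
    mask = (ppm >= low) & (ppm <= high)
    dx = abs(ppm[1] - ppm[0])          # uniform ppm spacing assumed
    return intensity[mask].sum() * dx

ppm = np.linspace(0.5, 9.5, 32768)                 # synthetic ppm axis
spectrum = np.exp(-((ppm - 1.33) / 0.005) ** 2)    # stand-in lactate signal
lactate_level = integrate_region(ppm, spectrum, 1.31, 1.35)
print(f"relative lactate signal: {lactate_level:.4f}")
```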
Furthermore, to assess the consistency of the metabolites identified in the metabolic changes in cell culture following MAYV infection over time, a Support Vector Machine (SVM) algorithm for the classification of infected and control cell culture medium was used. The SVM algorithm, available from MetaboAnalyst, was applied and tested using Receiver Operating Characteristic (ROC) curves [45]. The Biomarker Analysis module in MetaboAnalyst was applied independently to the 2, 6, and 12 hpi datasets. The same scaling was used as previously mentioned. Combinations of metabolites were tested using the SVM approach and the MetaboAnalyst built-in feature selection. The ROC curve and 95% confidence interval were generated using a Monte Carlo cross-validation based on 70% of samples for feature selection and training and the remaining 30% of samples for testing, a process that was repeated multiple times.
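As a hedged sketch of this classification workflow, the code below uses scikit-learn as a stand-in for the MetaboAnalyst Biomarker Analysis module: repeated 70/30 Monte Carlo splits, a linear SVM on a five-metabolite panel, and the distribution of test-set AUC values. The function names and placeholder data are illustrative only.

```python
# Monte Carlo cross-validated SVM classification with ROC/AUC evaluation,
# approximating the MetaboAnalyst workflow described in the text.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def monte_carlo_svm_auc(X, y, n_repeats=100, test_size=0.3, seed=1):
    aucs = []
    for i in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y, random_state=seed + i)
        model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        model.fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y_te, model.decision_function(X_te)))
    return np.mean(aucs), np.percentile(aucs, [2.5, 97.5])

# Placeholder data: 6 control and 6 infected samples, 5 selected metabolites.
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 5))
y = np.array([0] * 6 + [1] * 6)
mean_auc, interval = monte_carlo_svm_auc(X, y)
print(f"mean AUC = {mean_auc:.2f}, 95% interval = {interval}")
```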
Results
PCA and PLS-DA score plots with the first two principal components were successful in dividing the samples into two well-distinguished groups for all three time points (2, 6, and 12 hpi), indicating that the NMR signals correspond to the metabolic state of the infection (Figure 2). This result also suggests that resonances from the ¹H NMR spectrum (i.e., metabolite concentrations in the cell culture medium) differ between infected and control cells at the different time points.
Volcano plots of each time point contain both criteria used to select signals for metabolite identification, i.e., a Welch's p-value of 5% or less and a fold change at a 2.0 threshold (Figure S1). The number of unique NMR signals that met both criteria of the Volcano plot increased from 100 at 2 hpi to 126 at 6 hpi and 200 at 12 hpi.
Metabolite identification was performed using the Chenomx profiler software. The Chenomx library was searched to find signals that met the criteria of the volcano plot, and the findings were sorted by loadings. PLS-DA VIP scores were used to sort signals. To confirm whether the Chenomx results fit into the 1D NMR spectra, TOCSY spectra were used to check for cross-peaks. In addition, 13 C-1 H-HSQC was used to assign metabolites to each specific signal when only one signal was present, as well as in crowded regions. The metabolites identified are summarized in Table 1. Figure 3 outlines reference spectra from infected cells 2, 6, and 12 hpi with identified metabolites (For a similar figure with control samples, please, refer to Figure S2 in Supplementary Material). Table 1 shows the relative concentration of the identified metabolites at each time point under study, and the corresponding boxplots are presented ( Figures S3-S5). Following the assignment of NMR signals, the extra signals at 6 and 12 hpi that passed the criteria of the Volcano plot corresponded to very similar metabolites, as observed 2 hpi. Lactate was present only on the 6 hpi list. Isoleucine and leucine were found to be present at both 6 and 12 hpi, but not in samples after 2 hpi. In order to have relative levels of identified metabolites, the integration of the specific resonance signals for each metabolite was performed, and this result was used to compare metabolite levels between controls and infected cells (Figure 4). This second round of analyses was performed using only identified metabolites, reducing the number of signals to only twenty metabolites. Most metabolites were found to have similar relative levels between the controls and the infected cells. The levels of aspartate were not found to be affected by MAYV infection after 2 hpi, but they were affected at 12 hpi. Lactate levels presented changes only at 6 hpi, but not at the beginning or at the end of the infection process. In addition, PCA and PLS-DA loadings indicating important metabolites are shown in Figure S6. PLS-DA cross-validation results indicate that the metabolites identified were successful in correctly classifying controls and infected samples. that the metabolites identified were successful in correctly classifying controls and infected samples. The SVM algorithm was used to show that controls and infected cells at each time point have a different metabolic profile and was therefore successfully differentiated. This classification method is useful for highlighting how well the data used herein are suitable for predicting new samples, as well as how robust the reported metabolites are for infected cells (label 2, green) are observed to have higher levels of most metabolites, except for aspartate, which level is not affected by infection, and glucose, which is more consumed in infected cells. In 6 hpi (upper right) infected cells are seen to consume more glucose (similarly to 2 hpi), lactate, and aspartate. Other identified metabolites are at higher levels in infected cells. In 12 hpi (bottom) infected cells follow the same behavior as in the 6 hpi time point, but levels are different between control and infected cells. Also, lactate is not observed to have different levels in control and infected cells.
The SVM algorithm was used to show that controls and infected cells at each time point have a different metabolic profile and were therefore successfully differentiated. This classification method is useful for highlighting how well the data used herein are suitable for predicting new samples, as well as how robust the reported metabolites are for describing the infection process. Several individual metabolites were found to be able to correctly identify controls and infected samples at every post-infection time point studied (Figures S1-S5). To avoid overfitting issues, combinations of five metabolites were used to build the final SVM classifier in two distinct ways. In the first, the MetaboAnalyst built-in SVM method was used to select metabolites. In the second, metabolites were manually chosen using a K-nearest neighbor algorithm to split them into five different groups and then selecting the highest log2 FC for each group. Figure 5 shows predicted class probabilities for each infection time using the SVM classifier with manually selected metabolites following the above-mentioned criteria. Results of the SVM algorithm with built-in feature selection are also available in the supplementary material (Figures S7-S9). The ROC curves for each classifier in Figure 5 indicate that the metabolic changes observed are consistent and were able to differentiate between controls and infected cells at every time point.
The metabolomic approach used in this study revealed a metabolic profile with cytopathological alterations caused by MAYV replication in Vero cells (Table 1). The metabolic footprints obtained are attributed to variations in metabolite levels detected and identified in infected Vero cell supernatants. This variation in the level of these organic compounds can be attributed to the effects of MAYV infection on Vero cells.
Discussion
The profiling of metabolite secretion reflects the cellular metabolic activity, providing insights into intracellular metabolic processes and variations in metabolite levels in infected cells relative to the control. These processes and variations represent the effect of cellular biochemical reactions (anabolism and catabolism) that occur in response to the virus infection, suggesting affected metabolic pathways and possible mechanisms of action of the virus replication [39][40][41]. At this point, it is impossible to determine the exact metabolic pathways influenced by MAYV replication; however, the results can shed some light on them. In this report, we observed variations in the levels of 20 metabolites at the three time points of MAYV infection studied.
We observed alterations in amino acid metabolism, which is expected of any virus infection. The increases and decreases in amino acids are not specific to a single metabolic pathway; however, it is possible to infer some possibilities. For example, an increase in glutamine was observed. This amino acid is the main biological source of amino groups for a wide array of biosynthetic processes and plays a central role in the metabolism of amino acids in mammals [46]. Therefore, increased glutamine affects the metabolism of amino acids, and this finding may be associated with the increases in amino acids observed at the three time points.
Methionine, arginine, and lysine were detected at increased levels in infected samples at 2, 6, and 12 hpi. They participate in anabolic and catabolic TCA cycle pathways; methionine and arginine are glycogenic, and lysine is ketogenic [47]. The high levels of methionine and arginine found in the infected cell samples may be associated with the disruption of cell homeostasis and of energy production and consumption mechanisms (ATP) [48,49]. The increases in methionine, arginine, and lysine in the infected samples may be linked to glutamine levels, since glutamine is central in the metabolism of amino acids. The increase in arginine in the infected samples may be associated with acetate/acetic acid, which was detected at markedly different levels.
Valine was also found to increase in infected samples when measured at different time points. Higher levels of valine and pyruvate may be associated with changes in glucose metabolism. Studies have demonstrated that MAYV alters glucose metabolism through the enzyme 6-phosphofructo 1-kinase [50].
Furthermore, the pattern of metabolic alterations presented in this study allowed us to infer that MAYV replication in Vero cells interferes with the glycolysis and TCA pathways, both of which are important for cellular energy and biosynthesis. Other alphaviruses, such as the Semliki Forest virus and Sindbis virus, are also known to interfere with these pathways [51].
The data collected in this study are not sufficient to support the conclusion that MAYV interferes with lipogenesis. However, as alphaviruses are enveloped viruses, it is expected that MAYV can cause alterations in this metabolic pathway, since lipid and cholesterol metabolism are important for the entry and replication of enveloped viruses [52][53][54][55].
Acetate, or acetic acid, is a carboxylic acid that is involved in pyruvate metabolism and lipogenesis [56]. It was found to increase during the three infection periods. Acetyl-CoA hydrolysis produces acetate, and acetyl-CoA is a key intermediary in biochemical reactions in the glycolytic pathway, the tricarboxylic acid (TCA) cycle, the β-oxidation pathway, and lipogenesis [57][58][59]. This observation supports the hypothesis that the high level of acetate at the three time points may be associated with acetyl-CoA regulation and the virus replication process. However, more studies need to be carried out to confirm this hypothesis.
As also expected, enveloped viruses interfere with cytoskeleton organization and the integrity of the cytoplasmic membrane. Decreases in glucose and increases in galactose were identified and detected in infected samples relative to the control samples. The release of viral particles takes place at 6 hpi, and there was an increase in the permeability of the plasma membrane at the time of virus entry, perhaps due to the formation of pores by the viral proteins E1 or 6K. The formation of syncytia ("polykaryocytes") by the Semliki Forest virus, an alphavirus, has already been observed [60].
The presence of syncytia and the acidification of the extracellular media were observed at 6 hpi (data not shown). These observations, along with the increase in galactose, support the hypothesis that these metabolites may be regulating ATP production, which interferes with syncytia formation.
Furthermore, these results are in accordance with the enzyme analyses, which revealed changes in the glucose metabolism of Vero cells infected with MAYV [51]. It is important to note that, when galactose is transformed into an intermediate glycolytic metabolite, it acts upon the metabolism of nucleotide sugars. This observation suggests that the later-stage effects of the infection may be influencing the biochemical regulation of nucleotide sugar metabolism. Another metabolite, τ-methylhistidine, a component of the actin and myosin filaments, was altered in infected samples at all time points.
NMR spectroscopy combined with multivariate analytical methods revealed important differences between infected and non-infected cells. These data showed that the exometabolome generates a biochemical profile that reflects the metabolic status of the infected cells at different time points. These alterations are linked to the progression of cytopathic effects of the virus on the cells. Our study is preliminary, and future research to determine any associations between genomic and proteomic data is necessary to clarify the molecular mechanisms involved in MAYV pathogenesis.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biomed3010013/s1, Figure S1: Volcano plot and Venn diagram for highlighting important NMR signals. Time-specific volcano plot (left) for selecting NMR signals that meet the criteria of a p-value, from the Welch two-sample test, of less than 5% and a fold change threshold of 2. The Venn diagram (right) shows an increasing number of important signals observed as the infection process progresses. Nevertheless, only three new metabolites (lactate, isoleucine, and leucine) are seen at 6 h p.i. and 12 h p.i.; the lactate pathway was only affected in the 6 h p.i. cells; Figure S2: Reference NMR spectra from control subjects at 2 (brown), 6 (green), and 12 h (blue) post infection. Metabolite identification followed the protocol mentioned in the main text. A total of 20 metabolites were identified as being relevant for the infection process; Figure S3: Boxplots indicating metabolite concentrations at 2 hpi; Figure S4: Boxplots indicating metabolite concentrations at 6 hpi; Figure S5: Boxplots indicating metabolite concentrations at 12 hpi; Figure S6: Loadings from PCA and PLS-DA for each time point. Metabolites with high values are Acetate, Glucose, Pyruvate, Glutamine, Ethanol, and Lactate for all time points. The bottom shows the PLS-DA performance in leave-one-out cross-validation for each time post-infection; Figure S7: SVM classification of MAYV-infected cells at 2 hpi. Assessment of model quality through classification using SVM with its built-in variable selection approach and evaluation by means of the area under the ROC curves. Classification performance is related to the robustness of the metabolites identified in the chemometric analysis in predicting a metabolic profile in control or infected cells; Figure S8: SVM classification of MAYV-infected cells at 6 hpi. Assessment of model quality through classification using SVM with its built-in variable selection approach and evaluation by means of the area under the ROC curves. Classification performance is related to the robustness of the metabolites identified in the chemometric analysis in predicting a metabolic profile in control or infected cells; Figure S9: SVM classification of MAYV-infected cells at 12 hpi. Assessment of model quality through classification using SVM with its built-in variable selection approach and evaluation by means of the area under the ROC curves. Classification performance is related to the robustness of the metabolites identified in the chemometric analysis in predicting a metabolic profile in control or infected cells.
Gene Expression Profiling Reveals Fundamental Sex-Specific Differences in SIRT3-Mediated Redox and Metabolic Signaling in Mouse Embryonic Fibroblasts
Sirt-3 is an important regulator of mitochondrial function and cellular energy homeostasis, whose function is associated with aging and various pathologies such as Alzheimer’s disease, Parkinson’s disease, cardiovascular diseases, and cancers. Many of these conditions show differences in incidence, onset, and progression between the sexes. In search of hormone-independent, sex-specific roles of Sirt-3, we performed mRNA sequencing in male and female Sirt-3 WT and KO mouse embryonic fibroblasts (MEFs). The aim of this study was to investigate the sex-specific cellular responses to the loss of Sirt-3. By comparing WT and KO MEF of both sexes, the differences in global gene expression patterns as well as in metabolic and stress responses associated with the loss of Sirt-3 have been elucidated. Significant differences in the activities of basal metabolic pathways were found both between genotypes and between sexes. In-depth pathway analysis of metabolic pathways revealed several important sex-specific phenomena. Male cells mount an adaptive Hif-1a response, shifting their metabolism toward glycolysis and energy production from fatty acids. Furthermore, the loss of Sirt-3 in male MEFs leads to mitochondrial and endoplasmic reticulum stress. Since Sirt-3 knock-out is permanent, male cells are forced to function in a state of persistent oxidative and metabolic stress. Female MEFs are able to at least partially compensate for the loss of Sirt-3 by a higher expression of antioxidant enzymes. The activation of neither Hif-1a, mitochondrial stress response, nor oxidative stress response was observed in female cells lacking Sirt-3. These findings emphasize the sex-specific role of Sirt-3, which should be considered in future research.
Introduction
In mammals, including humans, sex differences go beyond purely anatomical differences. Males and females differ in their hormonal status, physiological responses, susceptibility to diseases (e.g., autoimmune diseases, cardiovascular diseases and certain cancers), and life expectancy, with females living longer than males [1][2][3][4]. Furthermore, metabolic homeostasis, a cornerstone of physiological balance, is controlled by different regulatory mechanisms in males and females [5]. Although many health-related sex differences have decreased in recent years due to lifestyle changes and advances in healthcare, preclinical biomedical research should take sex factors into account to produce scientific knowledge that is relevant to both sexes. Sex-biased asymmetry in the research data is mainly caused by the tendency to exclude female rodents from study designs because it is often assumed that variability increases due to the female reproductive cycle [6]. However, such claims have been disputed through extensive meta-analyses [7,8]. Therefore, it is clear
Results
Samples were clustered by calculating simple error ratio estimate (SERE) coefficients as an estimate of distance among experimental groups. The global pattern of gene expression showed significant differences between both sexes and genotypes, with male MEFs modulating the expression of more genes than female MEFs as a consequence of loss of Sirt-3 function. The samples were divided into four clearly defined clusters based on their gene expression signatures (Figure 1). The samples are well grouped by both sex and Sirt-3 status, and the difference in global gene expression between sexes is larger in Sirt-3 KO than between WT samples. The gap between WT and KO male MEFs (SERE value 4.6) was also significantly greater than that between WT and KO female MEFs (SERE value 3.4). As expected, male and female WT MEFs show distinct gene expression profiles. Importantly, the difference between male and female KO cells is significantly larger than that between male and female WT MEFs, suggesting sex-specific contributing mechanisms. A sex-dependent response to Sirt-3 KO is also evident from the PCA plot (Supplementary Figure S1), with the separation between male and female KO MEFs being much larger than that between WT MEFs. When comparing gene expression between the two genotypes (WT and KO) disregarding the effect of sex, we were able to identify a total of 2714 differentially expressed genes (DEGs). This is similar to the number of DEGs detected between WT male and female (2527) MEFs, meaning that sex differences between KOs could be masked by inherent sex-specific gene expression pattern differences between WT male and female MEFs. A less stringent adjusted p value (padj) of 0.05 was chosen because of the large number of proposed Sirt-3 targets, implying modulation of many Sirt-3-dependent cellular pathways that are expected to be regulated by subtle changes in gene expression. The high number of DEGs between WT male and female MEFs means that, when comparing all WT with all KO samples, each group effectively consists of two significantly different subgroups in comparison. Therefore, we decided to compare male KO vs. WT and female KO vs. WT gene expression separately. Then, sex-independent DEG sets were generated by intersecting male and female DEG sets and sex-specific DEGs by subtracting them in both ways.
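The set logic used to split DEGs into sex-independent and sex-specific lists can be illustrated with a short Python sketch; the gene symbols below are taken from those discussed later in the text but are arranged here purely as placeholders, not as the actual DEG lists.

```python
# Sketch of the DEG set operations described above: the intersection gives
# sex-independent DEGs, and set differences give male- and female-specific DEGs.
male_ko_vs_wt = {"Pfkm", "Pfkfb2", "Ldhb", "Pdk1", "Slc27a4", "Prkaa2"}   # placeholder list
female_ko_vs_wt = {"Acaca", "Fasn", "Pdk1"}                               # placeholder list

sex_independent = male_ko_vs_wt & female_ko_vs_wt    # DEGs shared by both sexes
male_specific = male_ko_vs_wt - female_ko_vs_wt      # DEGs only in male KO vs. WT
female_specific = female_ko_vs_wt - male_ko_vs_wt    # DEGs only in female KO vs. WT

print(sorted(sex_independent), sorted(male_specific), sorted(female_specific))
```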
Sirt-3-Dependent Changes in Gene Expression
We discovered 1382 DEGs common to both male and female KO MEFs compared to WT MEFs of the same sex (Figure 2). Inherent to enrichment analyses, many reported pathways are not relevant for the specific experimental model, are not informative, or are beyond the scope of this article. Some pathways are related to known and proposed functions of Sirt-3, such as lipid metabolism and cellular redox balance. Thus, loss of Sirt-3 affected phosphatidylcholine and cholesterol biosynthesis, fatty acid metabolism, and the oxidative stress response (Table 1). Sirt-3 is known to promote fatty acid oxidation [19] and support mitochondrial respiratory function [11]. While these results are generally valid, no significant differences between subgroups, i.e., sexes, can be detected with this commonly used averaged approach.
Sex-Specific Changes in Gene Expression
Next, we focused on defining male- and female-specific altered pathways. We generated two gene sets containing DEGs detected only in male or only in female KO MEFs. Apart from several receptor-mediated pathways, the outputs include changes in glycolysis, the pentose-phosphate pathway, and lipid and cholesterol biosynthesis, which are indicative of major sex-specific metabolic shifts. Therefore, we investigated the expression of genes involved in these processes. As shown in Tables 2 and 3, Sirt-3 KO induces the expression of major glycolytic genes specifically in male MEFs. This includes muscle-type phosphofructokinase 1 (Pfkm) and phosphofructokinase 2 (Pfkfb2), key regulators of glycolytic flux [20]. Pfkfb2 activates Pfkm through formation of its allosteric activator, fructose 2,6-bisphosphate. Phosphorylation of fructose 6-phosphate to fructose 1,6-bisphosphate by Pfkm is the rate-limiting step in glycolysis, and cells control this flux through regulation of phosphofructokinase levels. This points to the major male-specific shift in metabolism to aerobic glycolysis as a result of the loss of Sirt-3. Another significant effect pronounced in male KO MEFs is the upregulation of the pentose-phosphate pathway (PPP). The rate-limiting step in the PPP is the conversion of glucose-6-phosphate to 6-phosphogluconolactone by glucose-6-phosphate dehydrogenase (G6pdx), which can be controlled at the transcriptional level by substrate availability or allosterically by NAD+. Regarding the TCA cycle, an increase in pyruvate dehydrogenase kinase (Pdk-1), an inhibitor of pyruvate dehydrogenase (Pdh), is observed in both sexes but is much more pronounced in male MEFs. Inhibition of Pdh leads to an accumulation of pyruvate, which can be further metabolized to lactate by lactate dehydrogenase. This is supported by a strong upregulation of lactate dehydrogenase (Ldhb) only in male KO MEFs. Sirt-3 is considered to most strongly affect the TCA cycle and fatty acid (FA) metabolism [21]. Many of these effects may not be due to changes in gene expression but to the modulatory activity of Sirt-3. However, male KO MEFs show a slight decrease in acetyl-CoA carboxylase (Acaca), which catalyzes the carboxylation of acetyl-CoA to malonyl-CoA as the first step of FA synthesis. In addition, male KO MEFs exhibited upregulation of solute carrier family 27, member 4 (Slc27a4, Fatp4), suggesting that male KO MEFs switched from ATP use for the synthesis of FAs to ATP-producing FA β-oxidation. Such an effect was not observed in female MEFs. This metabolic shift is further confirmed by a decrease in FA synthase (Fasn) and an increase in hydroxyacyl-CoA dehydrogenase trifunctional multienzyme complex subunit beta (Hadhb, a key enzyme in beta oxidation) at the protein level in male KO MEFs, as shown by a Western blot (Figure 3, Supplementary Figures S2 and S7). Female KO MEFs show the opposite behavior, increasing Acaca transcription and Fasn at both the mRNA and protein levels, thereby increasing the rate of fatty acid synthesis.
OXPHOS and MEF Energy Status
The regulation of ATP generation through oxidative phosphorylation (OXPHOS) is a complex system of tuning OXPHOS-related gene transcription, protein synthesis, and import to mitochondria, posttranslational modifications to control their activity, allosteric control by substrates/products, and the number and size of mitochondria [22][23][24][25][26][27].We used an OXPHOS-related gene list from the KEGG database to detect potential differences in the transcript levels between male and female Sirt-3 KO MEFs compared to their corresponding WT controls.At the transcription level, we could not detect any significant differences in the expression of major OXPHOS regulators.On the other hand, we show that male KO MEFs are indeed energy-depleted, which is reflected by the increase in protein kinase AMP-activated catalytic subunit alpha 2 (Prkaa2).Prkaa2 is a gene that encodes the catalytic subunit of AMP-activated protein kinase (AMPK), a key cellular energy sensor that plays a crucial role in regulating cellular energy balance [28,29].Energy deficit in males is supported by our previous study, which showed a decrease in C1-driven oxygen consumption in WT and KO male mice compared to WT and KO females [12].
Both AMPK mRNA and its phosphorylation are increased specifically in male KO MEFs (Table 2, Figure 4, Supplementary Figure S3).AMPK is induced by a low ATP/ADP ratio and modulates the activity of a large number of target proteins to restore proper cellular energy status [28].As Sirt-3 is known to support proper mitochondrial function, C1-driven respiration was impaired both in male and female MEFs (Figure 4A), as expected.On the other hand, we observed that only male KO MEFs are held in an energy-deficient state, while female MEFs are able to maintain the proper ATP/ADP ratio.A decrease in ATP production through OXPHOS induces AMPK, which suppresses anabolic pathways such as fatty acid synthesis and supports energy-producing pathways such as beta-oxidation.The results described in a previous chapter confirm this finding through downregulation of Acaca/Fasn and upregulation of Slc27a4 in male KO MEFs.Thus, the core metabolic pathway equilibrium in male MEF is highly dependent on Sirt-3, while female MEFs can compensate for the loss of Sirt-3 function and consequential decrease in OXPHOS through yet unknown mechanism(s).
Taken together, the male-specific metabolic effects of Sirt-3 KO include a shift to glycolysis, downregulation of the TCA cycle along with OXPHOS, and compensatory ATP production from fatty acids. This phenomenon can occur when primary nutrients are scarce, not available due to downregulation of relevant transporters, or as a consequence of mitochondrial function disruption. Since nutrients are available to cultured cells and transporter downregulation was not observed, the latter appears to be the most likely explanation.
Sirt-3 is known to maintain mitochondrial homeostasis.Therefore, its absence is expected to affect normal respiration and force cells to compensate by the abovementioned mechanisms.However, our results show for the first time that this effect is sex-specific.Loss of Sirt-3 leads to an increase in mitochondrial ROS due to an inefficient electron transport chain [30].High ROS levels mimic the hypoxic state at physiological pO2, a phenomenon termed 'pseudohypoxia'.First coined to describe a hypoxia-like phenomenon in diabetes, pseudohypoxia is now generally defined as a cellular response that resembles that to hypoxia but occurs under normoxic conditions [31].The cellular response to hypoxic conditions is mediated mainly by hypoxia-induced factor 1a (Hif-1α).Hif-1α is a well-known, ubiquitously expressed transcription factor with over 1000 known target genes that regulate various cellular processes, such as energy metabolism, proliferation, apoptosis, stem cell maintenance and tissue development.Most importantly, Hif-1α is a major regulator of the cellular response to hypoxia or pseudohypoxia.The stabilization of Hif-1α in the absence of Sirt-3 has been described previously [18,32] and is mostly considered in the context of the tumor microenvironment and the Warburg effect.However, these studies did not provide a complete gene expression signature and were limited to a small number of genes/proteins.Additionally, differences between sexes were not considered.Here, we show that male KO MEFs accumulate significantly more Hif-1α than female KO MEFs.While Hif-1α levels in WT male and female MEFs remain similar, the KO of Sirt-3 leads to a male-specific increase in both Hif-1α mRNA and protein (Figure 5, Supplementary Figure S4).Under normoxic conditions, specific prolyl residues of Hif-1α are hydroxylated by prolyl hydroxylases (PHDs).Hydroxylated Hif-1α is recognized by VHL (von Hippel-Lindau) protein, and this interaction leads to ubiquitination and subsequent proteasomal degradation of Hif-1α [33].In contrast, low oxygen levels and ROS inhibit PHDs, leading to Hif-1α accumulation.After being translocated to the nucleus, Hif-1α forms an active complex with Hif-1β.The Hif-1α/β complex then binds to HRE (hypoxia-responsive elements) in target gene promoters.The male-restricted Hif-1α response reflects higher ROS generation in male MEFs upon Sirt-3 loss.We therefore extracted expression data for genes related to oxidative stress management.Among superoxide dismutases, superoxide dismutase 1 (Sod-1) was expressed at the highest level, followed by Sod-2 and Sod-3.Significant differences detected between sexes were an increase in Sod-1 in female KO MEFs and extracellular Sod-3 in male KO MEFs, although the overall expression of Sod-3 was very low.Furthermore, catalase was increased only in male KO MEFs (log 2 FC = 0.26, p adj = 0.006).Among glutathione peroxidases (GPXs), male KO MEFs specifically upregulated Gpx-7, an ER-associated enzyme responsible for ROS and lipid peroxide detoxification in the ER.Mitochondrial Gpx-4 was also increased in male MEFs, but this increase was not statistically significant.Glutathione S-transferases Gstm-1, Gstm-5 and Gstz-1, quinone reductase Nqo1 and thioredoxin inhibitor Txnip were upregulated specifically in male MEFs.On the other hand, female KO MEFs expressed higher levels of a subset of oxidative stress protective genes, such as Gpx-8, Gsto-1, Sod-1, Nxn (nucleoredoxin-also a Wnt signaling inhibitor) and the redox-sensitive chaperone Park-7.Although the pattern 
of sex differences in ROS detoxifying enzymes is not clear, female MEFs appear to cope more efficiently with the increase in ROS generation induced by loss of Sirt-3, thereby avoiding a significant Hif-1α response.Females have been previously proposed to be less sensitive to oxidative stress [34,35], as also excellently reviewed in [36].Our data suggest that female MEFs are inherently more efficient at eliminating ROS than male cells and are therefore less dependent on Sirt-3 function, whereas male KO MEFs are forced to sustain a response to chronic ROS overproduction.This is also supported by the growth curves for male and female KO MEFs (internal data), where female MEFs exhibited faster growth and viability along with lower mitochondrial ROS levels.In brief, Sirt-3 deficiency affects mitochondrial function and induces a pseudohypoxic state and ROS increase with a concomitant Hif-1α increase specifically in male KO MEFs.Compromised respiration inevitably leads to the generation of excess ROS, resulting in cellular stress and a corresponding response.Given the variety of Sirt-3 targets, effects beyond oxidative stress can also be expected, such as mitochondrial protein folding or nutrient utilization due to metabolic shifts.Cells respond to these and other stressors through the ISR [37].Therefore, we investigated the expression of the main ISR regulators and their targets.
Sirt-3 Loss Leads to Male-Restricted, Atf-4-Mediated ISR
Sirt-3 KO induces cellular stress, primarily through impaired mitochondrial function and excessive ROS production, putting the cell in a pseudohypoxic state.It is plausible that the changes in cell metabolism described above act as secondary stressors, such as specific nutrient deprivation, aberrant signaling or damage to other organelles.Eukaryotic cells react to various stresses through the ISR, a highly conserved mechanism.The activation of ISR serves to restore cellular homeostasis and results in a global decline in Cap-dependent translation and sustained translation of ISR-specific mRNAs.Different stressors activate a specific kinase, such as protein kinase R (PKR)-like endoplasmic reticulum kinase (Perk), protein kinase R (Pkr), hepatic heme-regulated inhibitor (Hri) or serine/threonine-protein kinase general control nonderepressible 2 (Gcn2).Oxidative stress activates all four of them [38][39][40][41].Activated kinases then phosphorylate eukaryotic initiation factor 2 (eIF-2α), and phosphorylated eIF-2α preferably translates activating transcription factor 4 (Atf-4), activating transcription factor 5 (Atf-5), C/EBP homologous protein (Chop, Ddit-3 gene), and growth arrest and DNA damage-inducible protein (Gadd-34, Ppp1r15a gene) thus activating the ISR program.Gadd-34 dephosphorylates eIF-2α, creating a negative feedback loop to blunt the stress response.The main ISR effector is Atf-4, which induces the expression of a number of genes in a pattern that optimizes the cellular response to a specific type of stress [37,42,43].Here, we show that Atf-4-mediated ISR in Sirt-3 KO MEFs is highly male-specific.Figure 6 shows the expression of the main ISR regulators and transcription factors.
Figure 6. TPM values of the main ISR regulators and effectors in male and female Sirt-3 KO MEFs. (A) Graphical display of gene expression of Atf4: a p < 0.001, male WT vs. male KO; c p < 0.05, female WT vs. female KO; Atf5: a p < 0.001, male WT vs. male KO; Gadd34: a p < 0.001, female WT vs. female KO. (B) Graphical display of gene expression of Nfe2l1: a p < 0.001, male WT vs. male KO; Nfe2l2: b p < 0.01, male WT vs. male KO; Xbp1: a p < 0.001, male WT vs. male KO, female WT vs. female KO; Chop/Ddit3: a p < 0.001, male WT vs. male KO; Ddit4: a p < 0.001, male WT vs. male KO, female WT vs. female KO (exact TPM and p adj values can be found in Supplementary Table S1). TPM-Transcripts per Million.
In addition to Atf-4, male KOs significantly increased the expression of Atf-5 and CCAAT-enhancer-binding protein homologous protein (Chop/Ddit-3), key regulators of mitochondrial UPR [44].The data indicate that the loss of Sirt-3 activates the Atf-4/Ddit-3 UPR mt exclusively in male MEFs.Nuclear factor erythroid 2-related factor 2, Nfe2l2 (Nrf-2), is induced by oxidative stress and plays a key role in activating cellular defense against oxidative damage [45].Recent research has proven that Nrf-2 is induced by Atf-4 as an integral part of oxidative stress-induced ISR [46]. Figure 6 shows that Sirt-3 KO induces an Atf-4/Nrf-2-mediated response to oxidative stress only in male MEFs.On the other hand, Xbp-1, a key transcription factor in the cellular response to endoplasmic reticulum (ER) unfolded protein stress [47], is induced in KOs of both sexes.Finally, both male and female KO MEFs upregulated Ddit-4 (regulated in development and DNA damage response 1; REDD-1), which can be induced by Atf-4, Hif-1α and Xbp-1 in the presence of various stressors, including the accumulation of misfolded proteins in the ER.The Ddit-4 increase in both female and male Sirt-3 KOs further confirms that loss of Sirt-3 leads to ER proteotoxic stress that is both sex-and Atf-4-independent.
To explain the nature of male KO-restricted ISR, we analyzed the expression of known Atf-4 targets.The target gene list was taken from an excellent systematic review [48].We detected a total of 91 differentially expressed Atf-4 targets in male MEFs (73 up-and 18 downregulated) and only 44 in female KO MEFs (27 up-and 17 downregulated, Figure 7).A higher number of Atf-4 targets in male MEFs is expected, and the fact that male-and female-upregulated gene targets show only a 27% overlap supports the sex-specific nature of ISR in Sirt-3 KOs.
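A set-based comparison of this kind is straightforward to reproduce. The sketch below shows one way the overlap between sex-specific DEG lists and a curated Atf-4 target list could be computed; the file names and the exact overlap definition (shared targets relative to the union) are assumptions made for illustration, not the authors' actual script.

```python
# Minimal sketch of a set-based overlap analysis between sex-specific DEG lists and a
# curated Atf-4 target list. File names and the overlap definition (shared targets
# relative to the union) are hypothetical illustration choices, not the authors' script.
def load_gene_set(path):
    with open(path) as fh:
        return {line.strip() for line in fh if line.strip()}

atf4_targets = load_gene_set("atf4_targets.txt")                      # list from [48]
male_up   = load_gene_set("male_KO_upregulated_DEGs.txt")   & atf4_targets
female_up = load_gene_set("female_KO_upregulated_DEGs.txt") & atf4_targets

shared = male_up & female_up
overlap_pct = 100 * len(shared) / len(male_up | female_up)
print(f"male: {len(male_up)} targets, female: {len(female_up)} targets, "
      f"overlap: {overlap_pct:.0f}%")
```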
Atf-4 can be induced in the case of ER stress, amino acid deprivation, unfolded protein accumulation, mitochondrial dysfunction, hypoxia, oxidative stress, and other stressors [49][50][51]. These responses overlap and include many commonly affected genes, but to a certain extent, they can be distinguished through gene expression patterns. Thus, ER stress is characterized by Atf-4-mediated induction of ER-associated chaperones such as Hspa-5 [52]. Disturbed metabolism and availability of amino acids induces phosphoserine amino transferase-1 (Psat-1) and asparagine synthetase (Asns). As a response to hypoxia/oxidative stress, Atf-4 induces cystathionase (Cth) and heme oxygenase (Hmox-1) [53,54]. Additionally, Atf-4 has been identified as a key regulator of the mitochondrial stress response, along with the canonical UPR mt transcription factor Atf-5 [44,55,56]. Both the Atf-4 and Atf-5 pathways act to rescue mitochondrial homeostasis and the capacity for proper protein folding through the induction of chaperones (Hspe1, Hspd1 and Hspa9) and proteases (ClpP and LonP1). Table 4 summarizes the differences in the expression of key ISR effectors between male and female KO MEFs.

Sirt-3 KO Induces UPR ER Independent of Sex

Both male and female KO MEFs mounted an endoplasmic reticulum (ER) unfolded protein stress response (UPR ER ), as shown by the induction of the ER-specific chaperone BiP, protein disulfide isomerases Pdia-4 and Pdia-5, ER-associated protein degradation (ERAD)-involved Herpud-1 and i.
Sirt-3 has been previously implicated in the UPR ER [58].UPR ER can be activated through three transmembrane misfolded/unfolded protein sensors: inositol requiring kinase 1 (Ire-1, mouse Ern-1), pancreatic ER eIF2a kinase (Perk), and activating transcription factor 6 (Atf-6).All of them are kept in an inactive state by bound Hspa-5 (BiP).Misfolded proteins in the ER compete for BiP binding.BiP dissociation activates kinases through dimerization of Ire1, oligomerization of Perk or Atf-6 transport to the Golgi, where the transcriptionally active form is formed by proteolytic cleavage [59].In KO MEF, the elevation of BiP indicates general UPR ER activation.The activated Ire1 pathway is mediated by Xbp-1, while Perk induces Atf-4 through eIF-2α phosphorylation.Atf-6 mRNA levels were not changed in KO MEFs, but this does not necessarily reflect the activity of a corresponding pathway.As we detected increased expression of BiP, Xbp-1, Atf-4, and a number of their target genes (Table 4), we can conclude that loss of Sirt-3 induces UPR ER through at least two sensor kinases, Perk and Ire1, and independently of sex.
Loss of Sirt-3 Activates the UPR mt Stress Response in Male MEF
To maintain appropriate levels of mitochondrial function, cells can use several stress-response pathways to respond to various aspects of mitochondrial dysfunction. Thus, perturbations in mitochondrial proteostasis activate the UPR mt, while fusion and fission can be utilized to control or restore the proper abundance, stability or distribution of mitochondria [60,61]. Moreover, irreversibly damaged mitochondria can be removed by selective autophagy (mitophagy). Atf-4, Atf-5 and Ddit-3 are major regulators of the mitochondrial unfolded protein stress response [56,62]. While the Atf-4/Chop branch of the UPR mt can be activated in various ways as part of the ISR, Atf-5 localizes to mitochondria, is induced by mitochondrial stress and is translocated to the nucleus. Once imported, it activates the expression of genes involved in mitochondrial quality control, such as chaperones, proteases and antioxidant enzymes, to restore mitochondrial function. Our expression data (Figure 6, Table 4) show a large increase in Atf-4, Chop/Ddit-3 and Atf-5 gene expression exclusively in male KO MEFs. Interestingly, while the mitochondrial chaperone Hspa9 and LonP protease were upregulated, we detected a decrease in Hspe-1 and Hspd-1 in male KO MEFs. Female KO MEFs did not show any changes in chaperone/protease expression. Downregulation of Hspe-1 and Hspd-1 could indicate a decreased need for chaperones due to low mitochondrial translation. In the case of mitochondrial stress, cells can decrease mitochondrial protein translation until stress resolves. Furthermore, we noticed the reduced expression of mitochondrial ribosomal proteins (Mrps, Table 5). Although Mrps are encoded in the nucleus, translated in the cytoplasm and then imported to mitochondria, their downregulation is related to various diseases, cell cycle progression, metabolic adaptations to stress and apoptosis [63,64]. This finding points to a male-specific, Sirt-3-dependent increase in misfolded protein content (LonP increase) followed by suppression of mitochondrial translation, as shown by a decrease in mitochondrial chaperones and ribosomal proteins. Sirt-3 is known to mediate the cellular response to UPR mt induced by overexpression of mutant endonuclease G [65]. Here, we show that, in male KO MEFs, the sole loss of Sirt-3 is sufficient to mount this response. On the other hand, regarding mitochondrial proteostasis, female MEFs either tolerate the loss of Sirt-3 or are unable to react properly. Considering the global expression patterns described thus far, the first option is more probable. Finally, we investigated the levels of genes involved in mitophagy and found differences in Bcl-2 interacting protein 3 (Bnip-3), Unc-51-like kinase 1/2 (Ulk1/2), and neighbor of BRCA1 gene 1 (Nbr-1), but the results are not very conclusive; therefore, we did not engage in further analyses.
Male-Restricted ISR Induction as a Consequence of Oxidative Stress
Regarding oxidative stress, two lines of evidence point to different responses to the loss of Sirt-3 in male and female MEFs.First, as described before, male KO MEFs show a significant Hif-1α response.As the MEFs we used are stable cell lines, male KO MEFs established a steady state stress response due to chronic pseudohypoxia caused by Sirt-3 loss.On the other hand, both WT and KO female MEFs obviously sustain a level of oxidative stress defense sufficient to avoid Hif-1α stabilization.Second, we observed that only male KO MEFs mounted a strong ISR-mediated antioxidant response: in addition to the major ISR inducer Atf-4 and key redox regulator Nrf-2, male KO increased the expression of Hmox1 (heme oxygenase 1, an antioxidant enzyme and known Nrf-2 target), Gclc (glutamate-cysteine ligase; major regulator in glutathione synthesis) and the ERassociated glutathione peroxidase Gpx-7.To confirm the differential oxidative state in male and female KO MEFs, we analyzed the protein expression of two main antioxidant enzymes, Sod-1 and Cat.In agreement with several previous studies that showed higher Sod levels in females [66][67][68], we detected increased Sod-1 and Cat protein expression in female cells compared to male cells, independent of Sirt-3 (Figure 8, Supplementary Figures S5 and S6).Other antioxidant enzymes did not show significant sex-specific differences at the mRNA level.These results suggest that female MEFs benefit from their inherently higher Sod-1/Cat levels to compensate for the loss of Sirt-3.On the other hand, male KO MEFs are forced to induce a complex antioxidant response to maintain satisfactory cellular homeostasis levels.
Discussion
Sirt-3 is most often described as a mitochondrial deacetylase that plays a pivotal role in regulating mitochondrial function and maintaining cellular energy balance.Proteomic analyses have identified hundreds of (de)acetylation sites regulated by Sirt-3 [69], and focused studies are still revealing new Sirt-3 targets.Given the large number of cellular processes in which Sirt-3 is involved, it is likely that it serves as a mediator in fine tuning the cell's metabolic status and not as an on/off switch.Therefore, Sirt-3 KO mice and cell lines are viable and do not show major anatomical or pathological aberrations.However, at the level of an organism, even slight but persistent metabolic alterations can have long-term effects on the lifespan, healthspan, aging, and development of age-related diseases.Indeed, Sirt-3 has been proposed to increase lifespan in yeast [70] and humans [71].Decreased Sirt3 levels have been implicated in the development of neurodegenerative disorders such as amyotrophic lateral sclerosis, Parkinson's disease (PD), Alzheimer's disease (AD), and probably Huntington's disease [72][73][74], while its role in cancer is not clear yet [75][76][77].All these pathologies require time to develop, and the cells experience suboptimal conditions for long periods.Experimentally, chronic Sirt-3 deficiency in our stable Sirt-3 KO cellular model should more closely resemble the effects of inherited or age-related Sirt-3 decrease, as opposed to commonly used conditional or transient KOs.The latter is probably of less biological relevance.Many Sirt-3-related pathologies show different incidences, onsets and/or progression between sexes, such as PD, AD, multiple sclerosis, cardiovascular diseases, and many cancer types.However, most studies ignore sex-specific differences.As a reflection of natural differences in metabolism, response to environmental stimuli and consequential responses at the molecular level, sex should be considered an important parameter in molecular medicine research.Therefore, we aimed to identify male-and female-specific events in response to a common molecular event, i.e., the loss of Sirt-3.
Here, we show that many consequences of Sirt-3 loss are sex-specific.Some of the affected pathways were described before but were not discussed in the context of sex, mostly because of the preferential use of male animals and cell lines or averaging male and female datasets, which is often the case.
Global gene expression difference between Sirt-3 WT and KO MEFs identified several pathways affected differently between the genotypes, but apart from oxidative stress response, the data were not very informative.This is expected due to the inherent diversity of Sirt-3 targets and basal sex-related differences in gene expression, which affect the enrichment results.However, sex-dependent DEG sets supported by protein expression data revealed a striking difference between male and female Sirt-3 KOs.First, a strong Hif-1α response observed in male KO MEFs implies the impaired oxygen reduction in male KO mitochondria, even though oxygen levels were maintained at normal concentrations.Inefficient electron transfer increases mitochondrial ROS, stabilizing Hif-1α through inhibition of prolyl-hydroxylases, which mark Hif-1α for proteasomal degradation.We did not observe a Hif-1α response in female MEFs, which raises several possible explanations: either female MEFs are able to maintain adequate mitochondrial function, or they are more capable of detoxifying excess ROS.We found an increase in Sod-1 and Cat at both the mRNA and protein levels in female MEFs of both genotypes, meaning that they were not induced by Sirt-3 loss; instead, this is a female-specific trait.We therefore hypothesize that the loss of Sirt-3 affects mitochondria similarly in both sexes; however, females benefit from their inherent increased antioxidant protection and are able to maintain ROS levels low enough to avoid a significant response to hypoxia.This is supported by a similar decrease in C1-driven respiration in both sexes.As Sirt-3 is known to activate Sod-2 and Cat [78,79], its loss is expected to affect cellular oxidative status, and female MEFs can compensate through higher expression of Sod-2/Cat.Additionally, since Sirt-3 expression decreases during aging [80], these findings can contribute to sex-inclusive aging biology research.Metabolic effects downstream of Hif-1α also follow a sex-specific pattern: male Sirt-3 KO MEFs increase glycolytic flux and the pentose phosphate pathway, downregulate the TCA cycle and decrease mitochondrial ATP production, as shown by AMPK phosphorylation.Male KO MEFs compensate for their low energy status through the suppression of fatty acid synthesis and an increase in mitochondrial beta-oxidation.Female KO MEFs show none of the metabolic shifts described, suggesting that their mitochondrial function is preserved in the absence of Sirt-3 and the resulting increase in ROS levels.Earlier studies have shown that females are more efficient in ROS detoxification [66][67][68]81,82].The inability of male KO to efficiently deal with the high ROS production rate induces a broad Atf-4-mediated ISR.ISR can be triggered by various stressors, and we show that in male KO cells ISR is a result of mitochondrial proteotoxic stress.Mitochondrial proteins are damaged by excess ROS, leading to the accumulation of unfolded proteins in the mitochondrial matrix.This branch of the ISR is known as the UPR mt and is not induced in female KO MEFs.We propose that Sirt-3 loss initially leads to decreased activity of OXPHOS components [83].Inefficient electron transfer results in an increase in ROS, which are efficiently detoxified in female KO MEFs, avoiding further damage.In contrast, male KOs are unable to neutralize ROS at the proper rate, which induces the Hif-1α response with an associated reduction in OXPHOS and accumulation of misfolded and damaged proteins as a secondary effect.OXPHOS is reduced in male 
KO MEFs also due to lower levels of mitochondrial ribosomal proteins, consequential to global protein synthesis inhibition in ISR.This further supports female Sirt-3 KO-increased antioxidant capacity compared to male MEFs.ISR has been implicated in the pathogenesis of AD, PD, and amyotrophic lateral sclerosis.Chronic activation of the ISR in neurons can lead to neuronal dysfunction, synaptic impairment, and neuroinflammation, contributing to disease progression.Additionally, ISR-mediated dysregulation of proteostasis and autophagy may play a role in the accumulation of the misfolded proteins and protein aggregates characteristic of these diseases.ISR signaling pathway has been implicated in metabolic disorders such as obesity, type 2 diabetes, and non-alcoholic fatty liver disease.Chronic activation of the ISR in response to nutrient excess or ER stress can lead to insulin resistance, inflammation, and lipotoxicity in tissues such as adipose tissue, liver, and pancreas.Also, dysregulation of the ISR can contribute to cellular senescence and inflammation, which are hallmarks of aging and age-related pathologies (reviewed in [84]).Here, we present firm evidence that sex-specific differences must be considered as an indispensable parameter in future research on all these conditions.On the other hand, Sirt-3 deficiency induces UPR ER independent of sex.In this case, females' advantage in antioxidative capacity does not rescue them from protein damage and misfolding in this compartment, suggesting separate Sirt-3 roles in ER and mitochondrial homeostasis.As knowledge about the role of Sirt-3 in ER stress is limited [85], further research is needed to resolve the background of its sex-dependent and sex-independent functions.
Perspectives & Significance
The sex-specific differences described in this study are significant and include some crucial cellular processes.While basal protein expression in female MEFs allows them to tolerate lower respiration and high ROS levels after the loss of Sirt-3, male MEFs adapt in a more drastic way by entering a prolonged state of cellular stress.This allows the cells to avoid apoptosis or necrosis but comes at the price of slower growth and inefficient nutrient/oxygen utilization.This study provides new insights into the sex-specific effects of Sirt-3 deficiency and shows that the consequences of Sirt-3 loss differ markedly in male and female cell models.Future studies should aim to define the elements that enable female cells to sustain normal metabolic and oxidative states.Furthermore, as Sirt-3 is considered a potential therapeutic target, our study should encourage the inclusion of sex as a parameter in both fundamental research and preclinical studies.How these findings will impact the discovery of sex-specific aspects of aging and longevity, the etiology and treatment of diseases such as neurodegenerative and cardiovascular diseases and cancer research could be an exciting venue in the near future.
Cell Lines
Thirteen-day-old embryos of Sirt-3 WT (129S1/SvImJ, Stock No: 002448, Jackson Laboratory, Bar Harbor, ME, USA) and Sirt-3 KO (Stock No: 012755, Jackson Laboratory, Bar Harbor, ME, USA) female mice were used for isolation of male and female mouse embryonic fibroblasts (MEFs) according to [86], with the modification that individual embryos were isolated to enable sex-based differentiation. Cells were maintained in a 37 °C incubator with 5% CO2 and immortalized by stable transfection with an SV40 T-antigen-containing plasmid. Sex determination of MEFs was performed by real-time PCR analysis utilizing specific primers targeting the sex-determining region Y (Sry) gene on the Y chromosome. For both RNA extraction and Western blots, low-passage cells were used.
RNA Sequencing
MEFs of both genotypes and sexes were seeded into 6-well plates at 4 × 10 5 cells per well in triplicate.After 48 h, RNA was extracted using a Qiagen RNeasy mini kit.RNA samples were sent to a commercial NGS service provider (Novogene, Cambridge, UK) for sample quality control, library prep, and sequencing at targeted 15 million reads per sample.RNA quality was assessed using an Agilent Bioanalyzer.Following polyA enrichment, RNA was fragmented, and first-strand cDNA was synthesized using random hexamer primers followed by second-strand cDNA synthesis.After end repair, A-tailing, adapter ligation, size selection, amplification, and purification, libraries were normalized and paired-end sequenced on an Illumina NovaSeq 6000 sequencer (Illumina, San Diego, CA, USA).
Oxygen Consumption
The cells were trypsinized and then kept on ice unless otherwise specified. Respiration buffer (150 mM KCl, 1 mM EGTA, 20 mM TRIS, 5 mM KH2PO4, pH 7.4) was used for the determination of oxygen consumption. The cells (5 × 10^6), resuspended in 50 µL of cell culture medium, were added into 450 µL of respiration buffer in a Clark-type electrode chamber (Oxygraph, Hansatech Instruments Ltd., Pentney, UK). Only the plasma membrane was permeabilized by adding 0.01% (w/v) digitonin. For complex 1 assessment, cells were incubated with 2.5 mM glutamate and 1.25 mM malate. Mitochondrial respiration was accelerated by the addition of ADP (500 µM final concentration) for state 3 respiration measurements. Oxygen consumption was calculated in nmol/min per number of cells.
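As an illustration of the final normalization step, the minimal sketch below derives a state 3 respiration rate from a linear fit of an oxygraph trace recorded after ADP addition; the trace values are invented for illustration, and only the 0.5 mL chamber volume and the 5 × 10^6 cells follow the protocol described above.

```python
# Minimal sketch of the final normalization step: state 3 respiration estimated from a
# linear fit of the oxygen trace recorded after ADP addition. The trace values below are
# invented for illustration; only the 0.5 mL chamber volume and 5 x 10^6 cells follow the
# protocol described above.
import numpy as np

time_min = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # min after ADP addition
o2_nmol  = np.array([100.0, 92.0, 84.5, 76.0, 68.5])   # nmol O2 remaining in the chamber

slope, _ = np.polyfit(time_min, o2_nmol, 1)            # nmol O2 per min (negative slope)
n_cells = 5e6                                           # cells added to the chamber
rate = -slope / n_cells                                 # nmol O2 / min / cell
print(f"state 3 respiration: {rate:.2e} nmol O2/min/cell")
```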
RNA-seq Data Analysis
Adapter sequence removal and low-quality base trimming were performed by Novogene. The number of reads per gene (reference genome GRCm38) was calculated using Salmon 1.10.2 [88], and a TPM (transcripts per million) matrix was used for differential gene expression analysis. DESeq2 with the Wald test and parametric fit was used either as an R package or through the web-based tool RaNa-Seq (https://ranaseq.eu/index.php, accessed on 10 January 2024). The adjusted p value threshold was set to 0.05. RaNa-Seq also generates sample clustering heatmaps, PCA and functional analyses, such as gene set enrichment analysis (GSEA). For pathway enrichment, we used the open-source tool Enrich [57]. Gene sets corresponding to individual pathways were downloaded from Harmonizome 3.0 [89].
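For readers who want to reproduce the quantification downstream of Salmon, the short Python sketch below shows the standard TPM calculation from a count matrix and the filtering of a DESeq2-style results table at p adj < 0.05; the input file names and column labels are hypothetical, and the actual analysis was run with DESeq2/RaNa-Seq as described above.

```python
# Minimal sketch of the quantification downstream of Salmon: the textbook TPM calculation
# from a count matrix and filtering of a DESeq2-style results table at p adj < 0.05.
# File names and column labels are hypothetical; the actual analysis used DESeq2/RaNa-Seq.
import pandas as pd

counts = pd.read_csv("gene_counts.tsv", sep="\t", index_col=0)          # genes x samples
lengths_kb = pd.read_csv("gene_lengths.tsv", sep="\t", index_col=0)["length"] / 1e3

rpk = counts.div(lengths_kb, axis=0)              # reads per kilobase of gene length
tpm = rpk.div(rpk.sum(axis=0), axis=1) * 1e6      # scale each sample to one million

res = pd.read_csv("deseq2_results_male_KO_vs_WT.tsv", sep="\t", index_col=0)
degs = res[res["padj"] < 0.05]                    # adjusted p value threshold of 0.05
up = degs[degs["log2FoldChange"] > 0]
down = degs[degs["log2FoldChange"] < 0]
print(f"{len(up)} upregulated, {len(down)} downregulated DEGs")
```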
Statistical Analysis
For the statistical analysis of Western blot data, SPSS for Windows (v. 17.0, IBM, Armonk, NY, USA) was used. A two-way ANOVA was performed to reveal the interaction effect of Sirt-3 and sex. If a significant interaction was observed, a Bonferroni adjustment was made to correct for multiple comparisons within each simple main effect separately. Significance was set at p < 0.05.
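An equivalent analysis can be scripted outside SPSS. The sketch below reproduces the genotype × sex two-way ANOVA with an interaction term and a Bonferroni-corrected follow-up using Python's statsmodels and scipy; the data frame layout and file name are assumptions made for illustration.

```python
# Minimal sketch of the genotype x sex two-way ANOVA (with interaction) and a
# Bonferroni-corrected pairwise follow-up, scripted with statsmodels/scipy instead of
# SPSS. The input file and column names ("intensity", "genotype", "sex") are hypothetical.
import pandas as pd
from itertools import combinations
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("western_blot_densitometry.csv")

# Two-way ANOVA testing the Sirt-3 genotype x sex interaction
model = smf.ols("intensity ~ C(genotype) * C(sex)", data=df).fit()
print(anova_lm(model, typ=2))

# If the interaction is significant, compare groups with Bonferroni-adjusted t tests
groups = {key: sub["intensity"].values for key, sub in df.groupby(["genotype", "sex"])}
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)               # Bonferroni correction
    print(a, "vs", b, f"adjusted p = {p_adj:.3g}")
```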
1. Male and female MEFs mount significantly different adaptive metabolic responses to loss of Sirt-3 function and the resulting mitochondrial dysfunction;
2. Female MEFs compensate for Sirt-3 loss, at least in part, through increased antioxidant enzyme expression;
3. In male cells, Sirt-3 knock-out induces pseudohypoxia, resulting in chronic oxidative stress and a shift to glycolysis and fatty acid oxidation;
4. In the MEF (hormone-independent) experimental model, female cells exhibit a higher level of protection from oxidative stress induced by impaired mitochondrial function, maintaining the normal metabolic and oxidative states;
5. Through maintenance of mitochondrial function, Sirt-3 is involved in aging and neurodegenerative and cardiovascular diseases; significant differences between sexes should be accounted for in future research.
Figure 1 .
Figure 1.Heatmap representing distances between samples based on SERE (simple error ratio estimate) values.Higher values indicate a larger distance between samples.Lower SERE values indicate more similarity.
Figure 2 .
Figure 2. Venn diagram representing the numbers of DEGs detected in male, female or both male and female KO MEFs.
Table 1 .
Top pathways significantly enriched for male and female Sirt-3 KO common DEGs.
Table 3 .
Expression of glycolytic, TCA cycle, pentose-phosphate pathway and fatty acid metabolism-related genes in male and female Sirt-3 KO relative to WT MEFs.
Table 4 .
Expression of selected ISR effectors in male and female Sirt-3 KO MEFs relative to Sirt-3 WT MEF.Log 2 fold and p adj values for genes showing significant expression changes are in bold (p adj < 0.05).
Table 5 .
Expression of mitochondrial ribosomal proteins in male and female Sirt-3 KO MEFs relative to WT MEFs.Log 2 fold and p adj values of significantly downregulated genes are in bold (p adj < 0.05).
Forward Osmosis Technology and Its Application on Microbial Fuel Cells: A Review
As a new membrane technology, forward osmosis (FO) has aroused more and more interest in the field of wastewater treatment and recovery in recent years. Due to the driving force of osmotic pressure rather than hydraulic pressure, FO is considered as a low pollution process, thus saving costs and energy. In addition, due to the high rejection rate of FO membrane to various pollutants, it can obtain higher quality pure water. Recovering valuable resources from wastewater will transform wastewater management from a treatment focused to sustainability focused strategy, creating the need for new technology development. An innovative treatment concept which is based on cooperation between bioelectrochemical systems and forward osmosis has been introduced and studied in the past few years. Bioelectrochemical systems can provide draw solute, perform pre-treatment, or reduce reverse salt flux to help with FO operation; while FO can achieve water recovery, enhance current generation, and supply energy sources for the operation of bioelectrochemical systems. This paper reviews the past research, describes the principle, development history, as well as quantitative analysis, and discusses the prospects of OsMFC technology, focusing on the recovery of resources from wastewater, especially the research progress and existing problems of forward osmosis technology and microbial fuel cell coupling technology. Moreover, the future development trends of this technology were prospected, so as to promote the application of forward osmosis technology in sewage treatment and resource synchronous recovery
Principle of Forward Osmosis
Forward osmosis is a separation process that uses the osmotic pressure difference between feed solution (FS) and draw solution (DS) on both sides of the forward osmosis membrane as the driving force without external pressure to make water flow spontaneously from the feed solution (low osmotic pressure) to the drawn solution (high osmotic pressure) as shown in Figure 1. In this process, the FO membrane selectively penetrates water molecules to intercept and remove pollutants and ions in water. Forward osmosis (FO) is based on the natural phenomena of osmotic processes and can extract clean water from wastewater [1].
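As a first-order description that neglects concentration polarization (discussed below) and reverse solute diffusion, the water flux across the FO membrane is commonly approximated as proportional to the osmotic pressure difference between the draw and feed solutions, with the osmotic pressure of a dilute solution estimated from the van't Hoff relation:

$$ J_w = A\,(\pi_D - \pi_F), \qquad \pi \approx i\,M\,R\,T $$

where $J_w$ is the water flux, $A$ the membrane water permeability coefficient, $\pi_D$ and $\pi_F$ the osmotic pressures of the draw and feed solutions, $i$ the van't Hoff factor, $M$ the molar solute concentration, $R$ the gas constant and $T$ the absolute temperature.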
Compared with other membrane systems, FO has many advantages, such as high energy efficiency, high salt rejection, low membrane fouling, and low brine discharge. To realize these advantages, an appropriate DS must be selected according to the type of wastewater treated. The pore diameter of the FO membrane is only 0.3-0.5 nm, allowing a high solute rejection rate and making it an attractive choice for desalination, removal of heavy metals, and removal of micropollutants such as cytostatic drugs and endocrine disruptors. In addition, FO does not require pre-treatment of the wastewater.
Another key advantage of the FO process is its low pollution tendency [2].
Development of Forward Osmosis (FO)
The development of FO technology and membrane materials has mainly gone through three stages: Stage 1: A new process for desalination of seawater based on the principle of FO membrane was proposed for the first time. However, at this stage, a special FO membrane was not developed, but the reverse osmosis membrane was used for FO research. Due to the dense support layer of the reverse osmosis membrane, serious internal concentration polarization was caused when it was applied to FO, resulting in low FO performance [3].
Stage 2: Started to explore a semi permeable membrane more suitable for the FO process. HTI Company of the United States used polyester mesh to replace the RO membrane support layer, developed an asymmetric cellulose triacetate FO membrane (CTA membrane) with better performance, and realized commercial application in the field survival water purification equipment and food concentration. However, compared with the RO process, FO water flux is still at a low level. Moreover, the mass transfer mechanism of FO process and the study of extraction solution are still not in-depth.
Stage 3: Further development has been made in exploring FO mass transfer mechanism and model, developing efficient extraction solutions, and developing high-performance FO membranes. The researchers successfully prepared polyamide composite membrane (TFC) through interfacial polymerization, which improved the water flux and salt rejection of the FO process, and had a wider pH application range than CTA membrane. Different from the traditional membrane separation process, the forward osmosis process uses the osmotic pressure difference between two solutions to drive water through the semi permeable membrane, without additional pressure, so it has the advantage of low energy consumption [4].
The biggest feature of FO technology is osmotic pressure driving, which is essentially different from other membrane separation processes. Therefore, compared with traditional pressure driven membrane separation processes (such as reverse osmosis and nanofiltration), FO technology has the following advantages.
Low energy consumption
No hydraulic pressure is required in the operation process, so the FO process has the advantage of low energy consumption, especially in applications where the extract does not need to be recycled, such as the diluted fertilizer extract, directly used for agricultural irrigation, and the diluted seawater extract, directly discharged, which can obviously reflect the low energy consumption advantage of FO process [5].
Light membrane pollution and high reversibility
No hydraulic pressure can prevent pollutants on the membrane surface from being compacted, resulting in light FO membrane pollution and high reversibility.
High pollutant retention rate and good effluent quality
The pore diameter of FO membrane is very small (about 0.25-0.3 nm), which has an excellent removal effect on ions and micro pollutants in water. Therefore, FO technology with low energy consumption, low pollution, and high retention has a very broad application prospect [6].
Concentration Polarization
Concentration polarization is a common phenomenon in all membrane separation processes, and the forward osmosis process is no exception. Concentration polarization is due to the fact that during the membrane separation process of water and solute, the solute of the feed solution accumulates on the membrane surface layer, and one side of the draw solution is diluted by water, resulting in the phenomenon that the effective osmotic pressure of the membrane layer is far less than the osmotic pressure difference of the solution itself on both sides [7]. Concentration polarization not only reduces osmotic driving force, thereby reducing water flux and increasing solute diffusion, but also aggravates membrane pollution. Due to the asymmetric structure of the forward osmosis membrane, external concentration polarization and internal concentration polarization are prone to occur. The outer concentration polarization occurs on the membrane surface and can be reduced or eliminated by hydraulic conditions [8]. The inner concentration polarization occurs in the support layer of the membrane, which seriously affects the performance of the forward osmosis membrane. In the process of forward osmosis, there are two commonly used operation modes: FO mode or AL-FS mode. The active layer of the feed solution towards the membrane.
PRO mode or AL-DS mode. The active layer of the absorption solution towards the membrane.
Different membrane orientation will lead to different dilution or concentration polarization. Figure 2 describes the concentration polarization diagram of FO and PRO modes [9]. In the AL-FS mode, the water molecules of the feed solution enter the absorption solution side through the membrane, while the solute gradually accumulates in the active layer of the membrane, making the concentration of the solute on the membrane surface greater than its concentration in the solution, forming a concentrated external concentration polarization. At the same time, the water permeates the active layer with gradually diluting the extract of the support layer and then the diluted internal concentration polarization occurs. In AL-DS mode, the solute in the feed solution gradually accumulates in the membrane support layer and the concentrated inner concentration polarization occurs [10]. The absorption solution near the active layer is diluted by the transferred water, which reduces the concentration and polarizes the diluted external concentration difference. Therefore, regardless of the membrane orientation, the concentration polarization will reduce the osmotic pressure, resulting in a decrease in water flux. In the process of forward osmosis, the internal concentration polarization occurs in the support layer and cannot be removed through optimization of hydraulic conditions, which is the main reason for the decline of water flux [11].
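To make the impact of internal concentration polarization concrete, the sketch below numerically solves a commonly used simplified flux model for the AL-FS (FO) mode, in which dilutive ICP in the support layer attenuates the draw osmotic pressure exponentially with flux, J_w = A[π_D exp(−J_w K) − π_F]; the membrane permeability A, the solute resistivity K and the osmotic pressures are illustrative assumptions, not values taken from any cited study.

```python
# Minimal sketch of a commonly used simplified flux model for the AL-FS (FO) mode, in
# which dilutive ICP in the support layer attenuates the draw osmotic pressure:
#   J_w = A * [pi_draw * exp(-J_w * K) - pi_feed]
# The membrane permeability A, solute resistivity K and osmotic pressures below are
# illustrative assumptions, not values taken from any cited study.
import numpy as np
from scipy.optimize import brentq

A = 1.0e-12        # water permeability coefficient, m s^-1 Pa^-1
K = 2.0e5          # solute resistivity of the support layer, s m^-1
pi_draw = 2.5e6    # bulk draw solution osmotic pressure, Pa (roughly 0.5 M NaCl)
pi_feed = 1.0e5    # bulk feed solution osmotic pressure, Pa

def residual(jw):
    # implicit flux equation: jw - A*(pi_draw*exp(-jw*K) - pi_feed) = 0
    return jw - A * (pi_draw * np.exp(-jw * K) - pi_feed)

jw_ideal = A * (pi_draw - pi_feed)                  # flux without any ICP
jw = brentq(residual, 0.0, jw_ideal)                # ICP-corrected flux (bracketed root)
print(f"ideal flux: {jw_ideal*3.6e6:.2f} LMH, with ICP: {jw*3.6e6:.2f} LMH")
```

With these illustrative numbers, the ICP-corrected flux is roughly 30% lower than the ideal value A(π_D − π_F), which is consistent with the statement above that internal concentration polarization in the support layer is the main reason for the decline of water flux.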
Membrane Fouling
Membrane fouling occurs when solutes and/or particles deposit on the membrane surface, inside the membrane pores, or block the feed spacer. This can cause organic fouling, scaling, or damage to the membrane. The main foulants in natural and impaired water bodies are microorganisms, organic substances, and inorganic substances (scaling). When wastewater is used, biofouling may be the most limiting factor because of the presence of microorganisms and their secretion of extracellular polymeric substances (EPS) to establish biofilm integrity [12]. Biofouling is affected by influent water quality, the physical and chemical properties of the membrane, and the operating conditions. In an FO-MBR study, biological deposition had little effect on water permeability, but the mass transfer coefficient was severely reduced and ICP was enhanced. In seawater FO, silica scaling or membrane biofouling may occur through transparent exopolymer particles (TEP). Organic fouling varies with the feed water used. Wastewater contains effluent organic matter (EfOM), including soluble microbial products and natural organic matter (NOM). NOM has been found to be a serious foulant in many membrane processes, including FO [13]. Therefore, it is important to simulate the behavior of these complex feeds so as to include all, or at least the most important, foulants. Model foulants, such as sodium alginate, bovine serum albumin (BSA), and Aldrich humic acid (AHA), have been used to test the severity of NOM fouling on FO membranes. Alginic acid represents the hydrophilic fraction of EfOM, AHA represents humic acids, and BSA represents the protein fraction [14].
Immediate fouling detection helps maintain and restore membrane performance. Determining the scaling potential of the feed can help predict scaling; however, once fouling occurs on the membrane surface, off-line methods may be required to inform future preventive measures. Non-invasive visual online methods can detect early signs of fouling in real time, such as flux decline, changes in solute rejection, and changes in normalized pressure drop (NPD), along with the operating parameters (temperature, feed TDS, permeate flow, recovery). Figure 3 summarizes the fouling detection techniques applicable to feed and FO membrane contamination [15].
Application of FO Technology
The idea of wastewater treatment has shifted from the original goal of "pollutant removal to meet discharge standards" to "resource and energy recycling", which can realize water resource regeneration, energy production, and value-added product output [16]. The advantages of FO technology, such as low energy consumption, low fouling, and high rejection, have led to its wide application and study in wastewater treatment, specifically for water resource regeneration and the recovery of nitrogen and phosphorus nutrients [17].
Water resources regeneration
Owing to the high rejection of the FO membrane, most of the pollutants in wastewater can be removed and high-quality effluent can be obtained, realizing the regeneration and reuse of water resources. An FO wastewater treatment and resource recovery unit consists of two parts, namely the FO treatment system and a draw solution recovery and water purification system. Zhang et al. studied the treatment of secondary sedimentation tank effluent with an FO membrane and used solar-driven electrodialysis to recover the diluted draw solution, producing water that met drinking water standards [18].
Recovery of nitrogen and phosphorus nutrients
Wastewater contains rich nutrients, such as nitrogen and phosphorus. If discharged directly, these nutrients not only degrade the effluent quality but also cause eutrophication of the receiving water body. Recycling nitrogen, phosphorus, and other nutrients as fertilizers is an urgent need for the sustainable development of wastewater treatment.
The dense pore structure of the FO membrane can effectively retain and concentrate ammonia nitrogen and phosphate in wastewater for subsequent crystallization and recovery [13]. At present, it has been successfully used to concentrate and recover nitrogen and phosphorus resources from anaerobic digestion liquid and urine, as shown in Figure 4. In addition, exploiting the reverse diffusion characteristics of the FO draw solution, with a divalent magnesium salt solution as the draw solution, nitrogen and phosphorus in synthetic urine have been recovered by FO technology [19]. After FO treatment, magnesium ions entering the concentrated solution form struvite precipitate with the phosphorus. The diluted draw solution containing the recovered urea can be used for the direct irrigation of green walls, parks, or urban agriculture.
Coupling Advantages of Forward Osmosis Technology and Microbial Fuel Cell Technology
The forward osmosis microbial fuel cell technology, which combines the advantages of forward osmosis technology and microbial fuel cell technology, improves the power generation performance of MFCs and demonstrates good performance in water recovery. Coupled technology links BES and FO units externally through a hydraulic connection [20].
Wastewater can be used as a fuel source for BES, with the added benefit of accomplishing wastewater treatment. In recent years, BES has been extensively researched for treating wastewater and extracting its waste energy, with the microbial fuel cell (MFC) as the representative technology. For example, MFCs may produce up to 1.43 kWh m −3 from primary sludge or 1.8 kWh m −3 from treated effluent [21]. Theoretically, BES can convert up to 100% of the chemical energy into electricity.
However, some energy is always lost through (1) coulombic losses, since organics are not converted to electrical current with 100% efficiency, and (2) electrochemical potential or voltage losses. Nevertheless, the reported energy conversion efficiency of MFCs can reach 80%, which is much higher than the 33% typical of heat-engine combustion of methane gas. An example of a coupled technology is connecting an MEC to an FO unit to recover ammonium from synthetic wastewater and then applying the recovered ammonium as the draw solute in the subsequent FO process [22].
In an osmotic membrane bioreactor (OMBR)-MFC system, membrane fouling in the OMBR was alleviated by the MFC treatment, and electricity generation in the MFC was enhanced owing to the increased solution conductivity after the OMBR treatment. FO-based processes have also been studied as pre-treatment before BES, as shown in Figure 5. For example, an FO unit incorporating anaerobic acidification converted complex organic contaminants into short-chain fatty acids and alcohols while concentrating the wastewater [23], which was then treated in an MFC for electricity generation. In addition, the MDC-FO system can be applied for desalination, and the effluent salinity from the MDC-FO system is lower than the maximum contaminant levels of the National Secondary Drinking Water Regulations. Compared with the integration of MDC and RO, the MDC-FO system might have lower energy consumption and a lower membrane fouling propensity [24].
Enhance power generation performance
Yao et al. [25] used an FO membrane as the separator to build a new OsMFC. They found that the OsMFC generated more electricity than the MFC in both batch and continuous operation. According to the polarization curve, the maximum power density of the OsMFC was 4.74 W/m 3 , 36% higher than that of the MFC with a CEM, when 58 g L −1 NaCl was used as the aerated catholyte. When the catholyte was 35 g L −1 NaCl, the power density of the air-cathode OsMFC was 8% and 87% higher than that of the MFCs with AEM and CEM, respectively. Generally, the performance of an MFC is evaluated by its open circuit voltage and internal losses, including ohmic losses, activation losses, microbial metabolism losses, and concentration losses [26]. When the reactor configuration and electrolyte were the same, the open circuit voltages of the OsMFC and MFC showed no significant difference. Therefore, the main contribution to the improved power generation of the OsMFC was the reduction of internal losses, such as low membrane internal resistance, low ion transport resistance, and a low pH gradient between the cathode and anode solutions [27,28].
Zhao et al. [29] found that the membrane internal resistance of the OsMFC was smaller than that of the MFC system and, after accurately reproducing the experimental results with a mathematical model, predicted that a high water flux would reduce the internal resistance of the system. The air-cathode OsMFC had a very low internal resistance of only 54 Ω. The resistance of ions passing through the FO membrane was 9 Ω, smaller than that through the AEM and CEM. This may be due to the water flux, which accelerates the transport of ions. After 10 h of operation, the pH of the OsMFC catholyte was 9.76, whereas that of the MFC was 10.90. This is because the rapid transport of protons in the OsMFC buffers the continuously rising catholyte pH, lowering the catholyte pH and reducing the overpotential. At the same time, water from the anolyte flows into the catholyte through the forward osmosis membrane, concentrating the anolyte and thereby increasing its conductivity, which further reduces the internal resistance of the OsMFC [30].
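As a rough illustration of how such polarization-curve data are typically interpreted, the sketch below fits the ohmic region of a polarization curve to estimate the open circuit voltage and internal resistance and then derives the maximum power density; the data points and the anode volume are invented for illustration, not the measurements reported in [25] or [29].

```python
import numpy as np

# Made-up polarization data (current in A, voltage in V).
current = np.array([0.000, 0.001, 0.002, 0.003, 0.004, 0.005])
voltage = np.array([0.600, 0.528, 0.456, 0.384, 0.312, 0.240])

# Linear (ohmic) region: V = OCV - R_int * I
slope, ocv = np.polyfit(current, voltage, 1)
r_int = -slope
print(f"OCV ~ {ocv:.2f} V, internal resistance ~ {r_int:.0f} ohm")

# For a linear polarization curve, maximum power is OCV^2 / (4 * R_int).
anode_volume = 2.5e-4                      # m^3 (assumed 250 mL anode chamber)
p_max = ocv ** 2 / (4 * r_int)             # W
print(f"maximum power density ~ {p_max / anode_volume:.1f} W/m^3")
```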
Recover high-quality water resources
Compared with ion exchange membranes, the FO membrane has a very high water permeability coefficient [31]. When the salinity of the catholyte is very high, high-quality water can move from the wastewater side, i.e., the anode chamber, into the cathode chamber through the FO membrane. For example, using a 116 g L −1 NaCl solution as the OsMFC catholyte produced a water flux of 3.94 ± 0.22 LMH, whereas there was no water flux in the MFC under the same experimental conditions. The catholyte, after drawing water, is purified by reverse osmosis, electrodialysis, or a desalination cell to remove the draw solute and achieve water resource recovery [32]. In this regard, the OsMFC catholyte acts as the draw solution for water extraction and purification. The water flux also causes dilution of the catholyte.
After the OsMFC had run for 10-12 h, the conductivity of the catholyte decreased by 8% to 35%, which means this system also acts as a desalination system. Based on this finding, Tiraferri et al. [33] proposed building forward osmosis microbial desalination cells (OsMDC). After three days of water dilution and salt removal, the conductivity of simulated seawater decreased by 60%. Whereas the AEM in a traditional desalination cell allows chloride ions to pass, the FO membrane in the OsMDC can reject chloride ions and reduce the damage to microorganisms caused by chloride accumulation. It should be noted that the dilution of the catholyte and the concentration of the anolyte caused by water transport and RSF reduce the osmotic driving force [34]. In addition, as membrane fouling becomes increasingly serious, the water flux of the OsMFC gradually decreases. On the other hand, concentration of the anolyte improves its conductivity, which is conducive to electron transfer and anode performance [35]. However, when the anolyte is concentrated beyond a certain point, the salt concentration may inhibit the growth of anode microorganisms and thereby reduce anode performance. This adverse effect can be mitigated by increasing the anolyte circulation rate or enlarging the desalination chamber. Periodic FO membrane cleaning, catholyte re-concentration, and periodic replacement of the anolyte are necessary for continuous water extraction. Generally, the MFC involves a biological treatment process, which is slower than the FO process. The hydraulic retention times of the two processes differ, resulting in different treatment capacities. This imbalance reduces the treatment efficiency of the MFC anode for organic pollutants. To reduce the HRT gap between the two, properly matching the MFC and FO processing capacities, for example by increasing the size of the anode chamber, can improve the performance of the MFC system [36].
Compared with traditional BES systems, the most prominent feature of the OsMFC is that it can extract high-quality water from wastewater through the embedded forward osmosis technology [37]; in a traditional MFC system there is no appreciable water flux across the membrane. Studies [38] have shown that the OsMFC system can recover more than 50% of the water resources from various types of sewage, i.e., by using OsMFC technology to treat wastewater, more than half of the water can be reused instead of being discharged directly [39].
In OsMFC technology, a key factor for water recovery is the law of water transfer under the combined influence of the electric field and osmotic pressure. Since the membrane can be regarded as a gel structure composed of cross-linked polyelectrolytes that adsorb water in aqueous solution [40], the water content of the membrane, the salt concentrations on both sides of the membrane, the temperature, and the density of fixed charges on the membrane surface strongly influence the water distribution in the membrane and the electro-osmotic coefficient. It has been reported that increasing the salt concentration of the draw solution can increase the osmotic pressure difference between the cathode and anode sides, promote the migration of water to the anode, improve the water distribution in the membrane, and improve cell performance. In addition to the osmotic pressure driving force across the membrane, water is also carried by protons under electro-osmotic drag and moves from anode to cathode [34]. The more protons cross the membrane, the greater the water flux that moves with them from anode to cathode.

This transmembrane water transfer affects not only the magnitude of the water flux in the forward osmosis process but also the membrane impedance, because for any separator membrane the ability to transport protons is closely related to the water content of the membrane [41]. When the water content of the membrane is low, the conductivity of the electrolyte membrane is limited, while the water content itself depends on the water transfer mechanisms within the membrane. Therefore, by studying the water transfer phenomena in OsMFCs, the operating conditions of the cell can be optimized to ensure a higher and more stable output performance, and an in-depth study of water transfer is of great significance for understanding forward osmosis microbial fuel cells [42]. In addition, typical FO technology uses a very high circulation velocity to generate shear at the membrane surface, preventing foulants from accumulating and reducing external concentration polarization. Generally, the solution circulation velocity in an OsMFC is lower than in a typical FO system, with a cross-flow velocity at the membrane surface of 0.01~0.02 m s −1 compared with 10~30 m s −1 in FO. A high circulation velocity cannot be used on the side of the anode electrode with the attached biofilm, otherwise the electrogenic bacteria will detach from the electrode. Therefore, the water recovered by the OsMFC system will be higher than that of a traditional FO system.

The forward osmosis microbial fuel cell is a new wastewater treatment and energy recovery technology that can effectively treat pollutants, purify water, and convert pollutants into electricity [43]. Organic matter is oxidized by the anode microorganisms, releasing protons and electrons. The electrons first reach the anode through a series of transfers and then reach the cathode through the external circuit to complete the reduction reaction. At the same time, the protons generated together with the electrons reach the cathode through the membrane and electrolyte to complete the current loop, realizing the conversion of the chemical energy in the organic matter into electrical energy [44].
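A minimal back-of-the-envelope sketch of the two water-transfer pathways discussed here (osmotic permeation plus electro-osmotic drag of water by migrating protons) is given below; the permeability, osmotic pressure difference, current density, and drag coefficient are assumed illustrative values (the drag coefficient is borrowed from the PEM fuel cell literature), not data from the cited OsMFC studies.

```python
FARADAY = 96485.0      # C per mol of charge
M_WATER = 18e-3        # kg per mol of water

def water_flux(A, delta_pi, current_density, n_drag=2.5):
    """Anode-to-cathode water flux split into its two pathways, in L m^-2 h^-1
    (1 kg of water taken as ~1 L)."""
    j_osmotic = A * delta_pi                                          # LMH
    j_drag = n_drag * current_density / FARADAY * M_WATER * 3600.0    # kg m^-2 h^-1
    return j_osmotic, j_drag

# Assumed values: A = 1 LMH/bar, delta_pi = 20 bar, current density = 5 A/m^2
j_osm, j_drag = water_flux(A=1.0, delta_pi=20.0, current_density=5.0)
print(f"osmotic: {j_osm:.2f} LMH, electro-osmotic drag: {j_drag:.4f} LMH")
# At typical OsMFC current densities the drag term is small compared with the
# osmotic term, but it couples the water flux to the current being produced.
```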
Development of Forward Osmosis Technology and Microbial Fuel Cell Technology
Forward osmosis microbial fuel cells (OsMFCs) are a new type of microbial fuel cell formed by combining forward osmosis technology with microbial fuel cells. By combining the advantages of the two technologies, OsMFCs can use the FO membrane to treat and concentrate the feed solution, i.e., the MFC anode wastewater, and prevent the permeation of solute ions, while generating electricity and extracting water from the anolyte into the catholyte by osmotic pressure [45]. Compared with conventional MFCs, OsMFCs using a sodium chloride solution or simulated seawater as the catholyte generate more electricity in both intermittent and continuous modes, as shown in Table 1. The improvement in performance is attributed to the internal resistance of OsMFCs being lower than that of traditional MFCs [46]. Verma et al. [47] proposed a mathematical model of OsMFCs, which predicted that the internal resistance would decrease, and the electricity generation increase, with increasing osmotic pressure and water flux, confirming the importance of the membrane resistance. Moreover, they attributed the lower membrane resistance in OsMFCs to the lower transmembrane pH gradient, because the water flux promotes proton transfer. Compared with a CEM, combining an FO membrane with MFC technology can slow the build-up of cathodic pH, which gives it good application prospects in wastewater treatment.

Previous studies have shown that an FO membrane used as the separator of an MFC yields higher power generation performance than a traditional MFC, which may be because the water flux accelerates proton transfer, because of the low internal resistance, or because reverse salt diffusion improves anode conductivity. However, the mechanism by which the power generation capacity is improved is still unclear, and no consistent view has been formed. At the same time, previous research has focused on comparative analysis of electrochemical indicators, such as the power generation performance, while the internal characteristics of the membrane, e.g., the effect of water flux on the impedance of the forward osmosis membrane, as well as the distribution of salt concentration within the membrane and its relationship with the membrane impedance, have received little attention or remain unexplored [48]. Meanwhile, the main factor affecting the operating performance is the power loss caused by high internal resistance, which mainly comprises ohmic losses, activation losses, and concentration losses. Research shows that replacing the CEM or PEM with an FO membrane affects the membrane contribution to the ohmic internal resistance [49]. However, given the characteristics of the OsMFC, how the water flux affects its power generation capacity deserves further investigation. Previous reports have confirmed that the water flux generated in an OsMFC promotes ion transport between the cathode and anode chambers, indicating that the FO membrane, as the separator of an MFC, has a lower blocking effect than CEM and AEM membranes, as shown in Table 2. At the same time, the water flux can also promote the transfer of protons, easing the decrease of anode pH and the increase of cathode pH. The results show that the OsMFC can stabilize the system pH, thereby reducing the system overvoltage.
In addition, the draw solution in an OsMFC is usually a salt solution of relatively high concentration, so it has a lower solution impedance than the MFC system, which can reduce the ohmic impedance loss of the whole system [50]. Since the FO membrane operates under an osmotic pressure difference and a concentration gradient across the membrane, research on the characteristics of the membrane operating under such a concentration gradient is currently not comprehensive [51].
Challenges of Forward Osmosis Technology and Microbial Fuel Cell Technology
Although OsMFC technology has greatly improved power generation capacity, as a technology based on the FO principle, OsMFCs also inherit some disadvantages of FO, the most important of which is that reverse salt flux is almost inevitable; it is also one of the most challenging problems. Reverse salt flux arises from the concentration gradient across the FO membrane, which drives the draw solute backwards into the feed solution [52]. In the FO process, an ideal FO membrane should have high water permeability and low solute permeability, achieving high water flux while minimizing the reverse salt flux. In OsMFCs, although the reverse salt flux can reduce the resistance of the anolyte, excessive accumulation of salt will affect microbial activity, causing microbial dehydration while degrading the feed water quality. On the other hand, the impedance of the forward osmosis membrane used as the MFC separator is not constant and changes with the salt concentration of the solutions [53]. Especially when the concentrations of the feed and draw solutions on the two sides of the membrane differ greatly, the key factors affecting the membrane resistance have not been studied in depth, and the relationships between the external solution concentrations, the concentration inside the membrane, and the membrane impedance still need to be explored.
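The trade-off between water flux and reverse salt flux can be sketched with the solution-diffusion expressions Jw = A·Δπ and Js = B·Δc; for an ideal, fully dissociated draw solute obeying van 't Hoff's law, the specific reverse solute flux Js/Jw reduces to B/(A·n·R·T). The snippet below is a hedged illustration with assumed membrane coefficients, not values for any particular FO membrane or draw solution.

```python
R_GAS = 0.083145        # L bar mol^-1 K^-1

def specific_reverse_solute_flux(A, B, n_ions=2, temperature=298.15):
    """Moles of draw solute lost per litre of water recovered (mol/L),
    assuming an ideal van 't Hoff draw solute: Js/Jw = B / (A * n * R * T)."""
    return B / (A * n_ions * R_GAS * temperature)

A = 1.0    # water permeability, L m^-2 h^-1 bar^-1 (assumed)
B = 0.4    # solute permeability, L m^-2 h^-1 (assumed)
loss = specific_reverse_solute_flux(A, B)            # mol/L for NaCl (n = 2)
print(f"~{loss:.4f} mol (~{loss * 58.44:.2f} g) of NaCl lost per litre of water recovered")
```

Because the ratio depends only on the membrane selectivity (B/A) and not on the draw concentration, a more selective membrane, rather than a more dilute draw, is the lever that reduces solute loss per unit of water recovered.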
Conclusions and Prospects
Forward osmosis (FO) technology has been developed to treat wastewater. In FO, water flows spontaneously from the medium with the higher water concentration (lower solute concentration) to the medium with the lower water concentration [54]. The water treatment process relies on a semi-permeable membrane that allows water to pass through while rejecting solutes. FO has attracted much attention because of its excellent energy efficiency and salt rejection capability, as well as its low scaling tendency and reduced brine discharge. Therefore, the synergy of microbial fuel cells and FO can potentially reduce dependence on fossil fuels, as well as provide better waste management. One technology to achieve this is the OsMFC [55]. It can potentially be used in many processes, such as wastewater treatment facilities, where clean water can be produced and extracted, and desalination facilities, where salt can be removed from water for reuse.
OsMFCs appear to be more effective than conventional MFCs in terms of both energy generation and water extraction, owing to the presence of the FO membrane in the OsMFC: it leads to more power generation than a conventional MFC and provides an opportunity to extract water through the anode chamber. Because of the many positive characteristics of OsMFCs, they can be applied to many processes in practice. However, FO membrane fouling remains a major challenge for these internal configurations, as it is difficult to apply in situ membrane cleaning. All of the above problems related to OsMFCs ultimately restrict operation to short durations [56], which is why long-term continuous operation of OsMFCs has not been well studied in previous work. Based on these facts, more research is needed to better understand the combination of MFC and FO. In general, research on OsMFCs is still in its infancy, but the huge prospects of MFC and FO as separate technologies in resource recovery will accelerate the development of OsMFC technology. More effort must be invested to identify application areas, understand the energy issues, alleviate membrane fouling, and bring OsMFCs to the transition stage toward practical application.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
Recomputation Enabled Efficient Checkpointing
Systematic checkpointing of the machine state makes restart of execution from a safe state possible upon detection of an error. The time and energy overhead of checkpointing, however, grows with the frequency of checkpointing. Amortizing this overhead becomes especially challenging, considering the growth of expected error rates, as checkpointing frequency tends to increase with increasing error rates. Based on the observation that due to imbalanced technology scaling, recomputing a data value can be more energy efficient than retrieving (i.e., loading) a stored copy, this paper explores how recomputation of data values (which otherwise would be read from a checkpoint from memory or secondary storage) can reduce the machine state to be checkpointed, and thereby reduce the checkpointing overhead. Specifically, the resulting amnesic checkpointing framework AmnesiCHK can reduce the storage overhead by up to 23.91%; time overhead, by 11.92%; and energy overhead, by 12.53%, respectively, even in a relatively small scale system.
INTRODUCTION
Scalable checkpointing is the key to enable emerging highperformance computing applications. Ready to expand their problem sizes as more hardware resources (e.g., more cores under weak scaling) become available, these applications challenge processing capabilities. More hardware resources translate into more components subject to errors, which, along with a higher expected component error rate as an artifact of technology scaling, results in a higher probability of (system-wide) errors. Therefore, proper error detection and recovery becomes a must for successful completion of any execution.
Systematic (often, periodic) checkpointing of the machine state enables backward error recovery (BER) upon detection of an error, by rolling back to and restarting execution from a safe (i.e., error-free and consistent) machine state. Energy and time overhead of checkpointing the machine state, however, grow with the frequency of checkpointing. The expected increase in error rates makes amortization of this overhead especially challenging, as a higher probability of error directly implies more frequent checkpointing.
The overhead of BER spans the overhead of checkpointing and the overhead of recovery (which entails roll-back + restart). The time or energy overhead of checkpointing, o chk , applies every time the system generates a checkpoint; the time and energy overhead of recovery, o rec , every time the execution restarts from the most recent checkpointed (safe) state after detection of an error. Depending on the interaction among parallel tasks of execution during checkpointing and recovery, BER schemes typically form two major classes: coordinated and uncoordinated [1,2]. Coordinated schemes enforce tight lock-step coordination (i.e., synchronization) among all parallel tasks every time the system generates a checkpoint or triggers recovery, and hence, generally incur a higher overhead. Uncoordinated schemes address this overhead by omitting coordination or confining it only to tasks interacting with each other during computation, which as a downside complicates the establishment of a consistent error-free global state.
The checkpointing overhead, o_chk, is proportional to the time or energy spent on storing the checkpointed state (to memory or secondary storage), o_wr,chk, and the number of checkpoints, #_chk (which represents a proxy for the checkpointing frequency). Putting it all together, o_chk = o_wr,chk × #_chk (Equation 1) applies. The recovery overhead, o_rec, on the other hand, includes the time or energy (spent on useful work and) lost since the most recent safe checkpoint, o_waste, and the time or energy spent on restoring the state captured by the most recent safe checkpoint, o_roll-back. Under an error probability of p_err, which dictates the number of recoveries, #_rec, the recovery overhead becomes o_rec = (o_waste + o_roll-back) × #_rec (Equation 2).

Imbalances in technology scaling render the energy consumption (and latency) of data storage and communication significantly higher than the energy consumption (and latency) of actual data generation, i.e., computation [3,4]. As a result, whenever a data value is needed (i.e., has to be loaded from memory), re-generating (i.e., recomputing) the respective value can easily become more energy-efficient than retrieving the stored copy from memory [5]. During recovery, recomputation of a data value, which otherwise would be read from a checkpoint, can therefore be less energy hungry and time consuming than retrieving the respective checkpoint from main memory or secondary storage. This can further eliminate the need for checkpointing such recomputable data values, which would never be retrieved from memory or secondary storage, but recomputed. The result is an amnesic BER framework, AmnesiCHK, which can opportunistically omit checkpointing of (recomputable) data values, and thereby can reduce the machine state to be checkpointed, by relying on the ability to recompute the respective data values when needed during recovery.

Under recomputation, the time or energy spent on storing the checkpointed state, o_wr,chk, can decrease since a (recomputable) subset of the updated memory values would be omitted from checkpointing. This in turn can decrease o_chk, even if #_chk remains the same. However, the recovery overhead o_rec now has to incorporate the overhead of recomputation (of the values which were omitted from checkpointing), o_rcmp. Still, we expect the time or energy spent on restoring the state of the most recent safe checkpoint, o_roll-back, to decrease, since the size of checkpoints would simply reduce under recomputation. Putting it all together, the recovery overhead under recomputation becomes o_rec,rcmp = (o_waste + o_roll-back,rcmp + o_rcmp) × #_rec (Equation 3). Therefore, for AmnesiCHK to hold recovery overhead at bay, o_rec,rcmp ≤ o_rec should be the case, which implies o_rcmp ≤ o_roll-back − o_roll-back,rcmp (Equation 4).

Recomputation in this case is fundamentally different than classic replay: recomputation refers to the recalculation of a data value to cut any energy-hungry memory access associated with the respective value. This can be regarded as restricted replay of a small backward slice of instructions just to generate that respective data value.
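The relations above can be rendered directly in a few lines of code; the following minimal sketch uses arbitrary cost figures (not simulation results) purely to make the break-even condition of Equation 4 concrete.

```python
def checkpoint_overhead(o_wr_chk, n_chk):
    # Equation 1: per-checkpoint write cost times the number of checkpoints.
    return o_wr_chk * n_chk

def recovery_overhead(o_waste, o_rollback, n_rec):
    # Equation 2: each recovery pays for the lost work plus the roll-back.
    return (o_waste + o_rollback) * n_rec

def recovery_overhead_rcmp(o_waste, o_rollback_rcmp, o_rcmp, n_rec):
    # Equation 3: under recomputation, roll-back shrinks but recomputation is added.
    return (o_waste + o_rollback_rcmp + o_rcmp) * n_rec

# Equation 4 (break-even): recomputation does not compromise recovery as long as
#   o_rcmp <= o_rollback - o_rollback_rcmp
o_rollback, o_rollback_rcmp, o_rcmp = 10.0, 7.0, 2.0   # arbitrary illustrative units
print("recovery not compromised:", o_rcmp <= o_rollback - o_rollback_rcmp)
```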
In this paper, we explore how AmnesiCHK can help reduce the overhead of checkpointing without compromising the overhead of recovery in terms of time, energy, and storage. AmnesiCHK is:
• hybrid (hardware/software): AmnesiCHK relies on a compiler pass to generate (and embed into the binary) instructions required to recompute the respective data values, which can be excluded from checkpointing. Under recovery, AmnesiCHK's runtime scheduler in turn triggers recomputation of these values.
• transparent: Both amnesic binary generation and triggering recomputation upon recovery are transparent to the application developer and user.
• low overhead: AmnesiCHK trades the data storage and retrieval overhead of checkpointing for the overhead of recomputing the respective data values. AmnesiCHK can significantly reduce the overhead of checkpointing, while holding recomputation-incurred overheads (particularly during recovery) at bay.
• scalable: Traditional checkpointing and recovery becomes more challenging at larger scale. AmnesiCHK can effectively reduce the checkpoint size, hence, is by construction more scalable.
In the following, we will detail a proof-of-concept AmnesiCHK implementation. Specifically, Section 2 provides the background; Section 3 discusses AmnesiCHK basics; Sections 4 and 5 provide the evaluation; Section 6 covers the related work; and Section 7 concludes the paper.
Backward Error Recovery (BER)
Checkpointing: Checkpointing serves the establishment of a safe (i.e., error-free and consistent) machine state to roll back to and recover from upon detection of an error, thereby ensuring forward progress in execution in the presence of errors. Without loss of generality, we consider shared memory many-cores featuring directory-based cache coherence. We start our analysis with global coordinated checkpointing and recovery [6,7,8,9], but provide a sensitivity study for local coordinated schemes [10,11], as well. Under global checkpointing, all cores periodically cooperate to checkpoint the respective machine state. Specifically, at the beginning of each checkpointing period, all cores stop computation to participate in checkpoint generation. As a running example (and a relatively lower-overhead baseline for comparison, not to favor AmnesiCHK), we will use a log-based incremental in-memory checkpointing variant similar to [12,8,9], where upon each memory update, a record for the old value goes into a log stored in memory. This log corresponds to the checkpoint. The log constitutes a record of values updated only within the time window between two consecutive checkpointing events, as opposed to the entire machine state. Establishing a checkpoint involves writing all dirty cache lines back to memory and recording (the rest of) each core's architectural state. For dirty lines, the memory controller only updates the log with the corresponding old value if the update represents the very first modification since the last checkpoint. Thus, similar to [8], a modified cache line gets logged only once between a pair of consecutive checkpoints. The directory controller keeps an additional bit per memory line to keep track of whether the line has already been logged for the current checkpoint interval. The controller sets this bit upon logging the line, and clears it upon establishing a new checkpoint. In the following, we will refer to this bit as log.
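The following toy model (a plain-Python simplification, not the actual controller logic) captures the essence of this log-based incremental scheme: the old value of a line is logged only on its first write-back since the last checkpoint, and the per-line marking is cleared when a new checkpoint is established.

```python
class IncrementalCheckpointer:
    """Toy model of log-based incremental in-memory checkpointing."""

    def __init__(self):
        self.memory = {}      # line address -> current value
        self.logged = set()   # lines already logged in the current interval (log bits)
        self.log = []         # current checkpoint log: (address, old value)

    def write_back(self, addr, new_value):
        if addr not in self.logged:              # first update since the last checkpoint
            self.log.append((addr, self.memory.get(addr)))
            self.logged.add(addr)
        self.memory[addr] = new_value

    def take_checkpoint(self):
        chk = list(self.log)                     # the log *is* the checkpoint
        self.log.clear()
        self.logged.clear()                      # corresponds to clearing the log bits
        return chk

ckpt = IncrementalCheckpointer()
ckpt.write_back(0x100, 1); ckpt.write_back(0x100, 2); ckpt.write_back(0x200, 3)
print(ckpt.take_checkpoint())   # only one log entry per unique line
```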
In-memory checkpointing, by construction, incurs a lower time and energy overhead when compared to (more traditional) checkpointing to secondary storage. In-memory checkpointing may correspond to a stand-alone checkpointing scheme or represent the first level in a hierarchical checkpointing framework. Our observations generally apply under both options.
Error Detection and Recovery:
In the following, we assume a fail-stop error model, where data memory and checkpoint logs do not suffer from any errors, similar to [12]. Various protection mechanisms such as ECC [13] or memory RAIDing [14] can achieve this. To detect errors, the system can rely on modular redundancy [15] or error detection codes (e.g., CRC). Error detection is not instantaneous; therefore, a lag between the occurrence of an error and its detection generally applies, which is referred to as the error detection latency. As a consequence, corrupted state may get checkpointed, even if the error detection latency is no longer than the checkpoint period. Figure 1 illustrates an example, where an error occurs right before Ckpt2 gets taken, and is detected only after Ckpt2 is established, thereby corrupting the respective checkpointed state. In this particular case, the time elapsed between establishment of Ckpt2 and the detection of the error is less than the error detection latency, hence, there is no guarantee for Ckpt2 to be error-free. To recover from the error, the system should roll back to the second most recent checkpoint at hand, i.e., Ckpt1, instead of the most recent Ckpt2. If the error detection latency is no longer than the checkpoint period, which applies throughout this study, keeping the two most recent checkpoints suffices.
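The retention requirement generalizes naturally if one assumes that at most ⌈latency/period⌉ checkpoints can be corrupted by an as-yet-undetected error; the helper below encodes that assumption and reduces to the two-checkpoint rule stated above when the detection latency does not exceed the checkpoint period.

```python
import math

def checkpoints_to_retain(detection_latency, checkpoint_period):
    """How many of the most recent checkpoints to keep so that at least one of
    them is guaranteed to predate any as-yet-undetected error (assumption:
    at most ceil(latency/period) checkpoints can be corrupted)."""
    return math.ceil(detection_latency / checkpoint_period) + 1

print(checkpoints_to_retain(detection_latency=0.8, checkpoint_period=1.0))  # -> 2
print(checkpoints_to_retain(detection_latency=2.5, checkpoint_period=1.0))  # -> 4
```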
Data Recomputation for Energy Efficiency
Imbalances in technology scaling render the energy consumption (and latency) of data storage and communication significantly higher than the energy consumption (and latency) of actual data generation, i.e., computation [3,4]. As a result, whenever a data value is needed (i.e., has to be loaded from memory), re-generating (i.e., recomputing) the respective value can easily become more energy-efficient than retrieving the stored copy from memory [5]. The basic idea behind data recomputation is to eliminate memory accesses (be it a read, or a write) by relying on the ability to recalculate the respective data values, when needed. To this end, the system has to record the sequence of instructions which can produce the respective data values. As a representative example, the recently proposed Amnesiac machine [5] details compiler and (micro)architecture support for opportunistic substitution of memory reads with a sequence of arithmetic/logic instructions to recompute the data values which would otherwise be retrieved from the memory hierarchy. Following Amnesiac's terminology, we will refer to these sequences of instructions as RSlices, each forming a backward slice of arithmetic/logic instructions. To perform recomputation along an RSlice, its input operands should be available at the expected time of recomputation. Not all RSlice input operands suit themselves to (re)generation by recomputation, particularly, if input operands correspond to read-only values residing in memory (e.g., program inputs), or register values which are overwritten at the time of recomputation. Amnesiac refers to such input operands as non-recomputable inputs, and to make sure that they are available at the anticipated time of recomputation, stores them in designated buffers. To facilitate recomputation, we assume similar hardware-software support as Amnesiac, with Section 3 detailing the fundamental differences.
AmnesiCHK BASICS
In this section, we cover the basics and execution semantics of a practical AmnesiCHK implementation under checkpointing, and recovery upon the onset of an error.
Impact on Checkpointing: At the end of each checkpointing interval, AmnesiCHK identifies and omits the recomputable subset of data values (which otherwise would be included in the checkpoint being taken) from checkpointing. Thereby, AmnesiCHK can reduce the checkpoint size, which in turn reduces the o_wr,chk component of the checkpointing overhead per Equation 1, i.e., the time or energy spent on storing the checkpointed state to memory. At the extreme, all values which otherwise would be included in a checkpoint may be recomputable. If this is the case, AmnesiCHK would also be able to eliminate a subset of checkpoints entirely, and thereby reduce the #_chk component of the checkpointing overhead per Equation 1, i.e., the number of checkpoints.
Impact on Recovery: Upon the onset of an error, the amnesic recovery handler triggers the recomputation of any data value which was omitted from the checkpoint being restored. Such recomputation incurs the overhead captured by o_rcmp in Equation 3, but, at the same time, can cut back on the time or energy spent on restoring the checkpointed state from memory (i.e., o_roll-back in Equation 2).
Overview: AmnesiCHK trades the checkpoint storage and retrieval overhead from memory for the overhead of recomputing the respective data values. Accordingly, any practical AmnesiCHK implementation has to address:
• how to identify recomputable data values in a checkpoint interval;
• how to omit recomputable data values from a checkpoint; and
• how to trigger recomputation of the respective data values during recovery.
Amnesic Checkpointing
We will first cover how to identify recomputable data values which can be omitted from checkpointing. Compiler Support: AmnesiCHK relies on a compiler pass to identify recomputable data values, which can be omitted from checkpointing. Under incremental in-memory checkpointing (Section 2.1), only a subset of the store instructions would trigger checkpointing (specifically, only the first updates to the same memory address). The compiler pass therefore tracks store instructions, and using data dependency graphs, extracts backward slices, i.e., sequences of arithmetic/logic instructions which produce the respective data values to be stored. Following the terminology from [5], we refer to each such backward slice as an RSlice. Fig. 2 shows an example, where the arrows point to the direction of dataflow, and each node corresponds to an instruction. Instructions i3, i4, i5 are producers of the (input operands of) instruction i2; instructions i1 and i2, of the value v to be stored by the store instruction st(v). Depending on the specifics of the instruction set architecture (ISA), such backward slices can take different forms.
Figure 2: Backward recomputation slice (RSlice).
In selecting which RSlices to embed into the binary, the compiler has a choice. One option is, using probabilistic analysis, estimating the anticipated cost of recomputation along each RSlice when compared to reading, i.e., loading, the respective data value from a checkpoint in memory, and including the RSlice only if it is more cost-effective (where the cost can be delay, energy, or a combination of both, without loss of generality). In this study, we instead take a greedier approach of minimal complexity, and consider all RSlices which have fewer instructions than a preset threshold (which typically remains less than 10; Section 5 quantifies the impact). The insight is that the overhead of recomputation along an RSlice increases with its number of instructions. Therefore, capping the instruction count can effectively hold recomputation overhead under control (as we will further demonstrate in Section 5.5.1).
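A simplified sketch of this slice extraction (operating on an abstract def-use graph rather than on real binaries, unlike the Pin-based pass used later) could look as follows; the instruction names mirror the example around Figure 2.

```python
def extract_rslice(producers, root, max_len=10):
    """Backward slice of the instruction producing a stored value.

    producers: dict mapping an instruction id to the ids of the instructions
               producing its input operands (the def-use graph).
    root:      instruction that computes the value being stored.
    Returns the slice as a list of instruction ids, or None if it exceeds
    the length threshold (such RSlices are not embedded into the binary)."""
    rslice, stack, seen = [], [root], set()
    while stack:
        inst = stack.pop()
        if inst in seen:
            continue
        seen.add(inst)
        rslice.append(inst)
        if len(rslice) > max_len:
            return None                      # too costly to recompute
        stack.extend(producers.get(inst, []))
    return rslice

# i1 <- i2; i2 <- i3, i4, i5 (mirrors the example around Figure 2)
deps = {"i1": ["i2"], "i2": ["i3", "i4", "i5"]}
print(extract_rslice(deps, "i1", max_len=10))
```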
The next question is how to embed RSlices into the binary, to facilitate invocation upon recovery. The only critical piece of information is associating the start address of each RSlice (i.e., the address of the first instruction in the backward slice) with the memory address of the respective data value (which will be regenerated by recomputation along the RSlice). Such memory addresses correspond to the destination memory addresses of the stores, and the compiler uses each such store as a proxy in identifying target values for recomputation. One way to communicate this information to the runtime is introducing a special instruction to associate these two effective addresses (and enforcing atomic execution of it with the corresponding store). We will refer to this instruction as ASSOC-ADDR.
While the compiler analysis to bake recomputing instructions into the binary looks similar to the compiler pass in [5], there is a fundamental difference: the goal in [5] is swapping each energy-hungry load with an RSlice to recompute the respective data value (which otherwise would be loaded from the memory hierarchy). In this case, the swapped load instructions are never performed. In exploiting recomputation for checkpointing, on the other hand, AmnesiCHK leaves load instructions intact, and only tracks store instructions to identify data values which can be omitted from checkpointing. In this case, the corresponding store instructions are always performed; what is omitted is the inclusion of the respective (recomputable) data value into the corresponding checkpoint.
Amnesic Checkpoint Handler: Each time an ASSOC-ADDR instruction is encountered, the amnesic checkpoint handler records the corresponding <memory address,RSlice address> association into a dedicated buffer called Address Map, AddrMap. Next, the handler asks the memory controller to exclude the corresponding (recomputable) value from the next checkpoint (which is achieved by setting the dedicated log bit, as explained in Section 2.1). Eventually, the size of the next checkpoint reduces as more (recomputable) values are excluded from checkpointing via ASSOC-ADDR instructions. Such <memory address,RSlice address> pairs have to remain in AddrMap as long as the established checkpoint for the corresponding interval remains in memory, such that upon detection of an error, recomputation along RSlices can restore the values omitted from checkpointing, in coordination with the established checkpoint for roll-back. As covered in Section 2.1, under the assumption that the error detection latency does not exceed the checkpointing period, retaining the two most recent checkpoints suffices. Therefore, ASSOC-ADDR should only record the mappings for the two most recent checkpoints.
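The bookkeeping performed on each ASSOC-ADDR can be sketched as follows; the memory-controller interface (mark_recomputable) is a made-up stand-in for the log-bit request described above, not an actual hardware interface.

```python
class MemoryControllerStub:
    """Stand-in for the memory controller side: remembers which lines should be
    skipped (log bit set) when the next checkpoint is taken."""
    def __init__(self):
        self.skip_on_next_checkpoint = set()
    def mark_recomputable(self, mem_addr):
        self.skip_on_next_checkpoint.add(mem_addr)

class AmnesicCheckpointHandler:
    """Toy bookkeeping done on each ASSOC-ADDR: remember which RSlice can
    regenerate the value at a memory address, and ask the controller to omit
    that line from the next checkpoint."""
    def __init__(self, memory_controller):
        self.addr_map = {}                  # memory address -> RSlice start address
        self.mc = memory_controller

    def on_assoc_addr(self, mem_addr, rslice_addr):
        self.addr_map[mem_addr] = rslice_addr
        self.mc.mark_recomputable(mem_addr)

mc = MemoryControllerStub()
handler = AmnesicCheckpointHandler(mc)
handler.on_assoc_addr(0x2000, 0x400)        # value at 0x2000 regenerated by RSlice at 0x400
print(mc.skip_on_next_checkpoint, handler.addr_map)
```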
Amnesic Recovery
Upon detection of an error, the amnesic recovery handler orchestrates roll-back to the most recent safe global recovery line, by triggering recomputation along RSlices for each value excluded from checkpointing, in coordination with the restoration of the most recent safe checkpoint. There is no need for separate bookkeeping for the values missing from the most recent safe checkpoint, since AddrMap contains all the necessary information to fire recomputation of these values along the respective RSlices. After recomputing the missing values and storing them back to their destination addresses, the amnesic recovery handler restores the remaining state in the checkpoint, and resumes execution from this point onward.
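In pseudocode form, the recovery path reduces to two loops, sketched below with toy data structures (the recompute callable stands in for firing the embedded RSlice instructions).

```python
def amnesic_recover(memory, checkpoint_log, addr_map, recompute):
    """Toy roll-back under AmnesiCHK.

    checkpoint_log : list of (address, old value) pairs of the most recent safe
                     checkpoint (recomputable values were omitted from it).
    addr_map       : address -> RSlice handle for every omitted value.
    recompute      : callable that re-executes an RSlice and returns the value."""
    for addr, rslice in addr_map.items():
        memory[addr] = recompute(rslice)      # regenerate the omitted values
    for addr, old_value in checkpoint_log:
        memory[addr] = old_value              # restore the rest of the state
    return memory

# toy usage: one logged value, one omitted-but-recomputable value
state = amnesic_recover(memory={}, checkpoint_log=[(0x100, 7)],
                        addr_map={0x200: "rslice@0x400"},
                        recompute=lambda rslice: 42)
print(state)
```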
In this study, we confine recomputation to memory values only. Therefore, upon recomputation of a missing value from the checkpoint, we have to access memory to store the respective value. Register values are checkpointed, as well, as part of the architectural state, but are not considered for recomputation. This is likely to render the proof-of-concept AmnesiCHK implementation conservative, as a register value would not incur an expensive memory write upon recomputation. In the end, during recovery, AmnesiCHK can only cut the overhead of retrieving (i.e., loading) the checkpointed state from memory (due to the omission of recomputable values from the checkpoint), which can be easily masked by the overhead of writing such omitted (memory) values back to memory upon recomputation.
Microarchitecture Support
To facilitate amnesic checkpointing, the memory controller takes a similar form to [8], and maintains the log bit to determine if the old value of a given write-back should be logged (i.e., checkpointed). For each write-back request, the memory controller has to decide (i) whether the request would result in the first update to the respective memory line since the last checkpoint was taken, and (ii) whether the current data value v of the respective memory line (i.e., the value before the write-back takes place) can be recomputed. While the memory controller can manage the log bit itself for (i), it should coordinate with amnesic checkpoint handler for (ii). As explained in Section 3.1, upon encountering a recomputable value, the amnesic checkpoint handler sends a request to the memory controller to let it know that the respective value v can be recomputed, and therefore, should be omitted from checkpointing. The memory controller sets the log bit accordingly, when it receives such requests from the amnesic checkpoint handler.
The number of (stores corresponding to the) values that can be excluded from checkpointing depends on the size of AddrMap, specifically, on how many RSlices AddrMap can keep track of. Fortunately, we do not need an excessively large AddrMap to this end: recall that we only need to checkpoint the old values upon the very first write-backs (to unique addresses) after a new checkpoint is established. Therefore, the number of RSlices is not a function of how many times an address is updated, but of how many unique memory addresses are updated within a given checkpoint interval. Naturally, the latter is bounded by the period of checkpointing. As the period gets longer, the probability of having a higher number of unique memory addresses updated increases. At the same time, as the period gets longer, the amount of useful work lost upon detection of an error increases. The checkpointing period therefore cannot get too long, in order to limit the amount of useful work lost. The checkpointing period hence puts an upper bound on how many unique RSlices we should keep track of at runtime. Finally, to prevent corruption of architectural state during recomputation, AmnesiCHK relies on a similar renaming scheme as [5].
Putting It All Together
AmnesiCHK can reduce the number of values to be logged for checkpointing, and thereby reduce both the performance and energy overhead of checkpointing. AmnesiCHK can also reduce the size of each checkpoint, and thereby the storage overhead, by cutting the number of values to be checkpointed in each interval. A reduction in checkpoint size can easily translate into energy savings, as well as performance gain, due to the lower number of expensive memory read (during recovery) and write operations (during checkpointing), respectively.
Recovery upon detection of an error involves recomputation of missing values from the checkpoint and restoring the rest of the state using the established checkpoint. Recomputation along each RSlice incurs a performance and energy overhead; however, it is not prohibitive since the number of instructions in RSlices is bounded. During recovery, AmnesiCHK introduces the extra overhead of recomputation, but at the same time, it reduces the number of values to be read from the checkpoint in memory for restoration. The benefit of the latter may or may not be comparable to the overhead of recomputation. However, considering the anticipated frequency of checkpointing and recovery, one can argue that recovery is a much less frequent event compared to checkpointing, thus AmnesiCHK's gain under checkpointing is more likely to outweigh its potential loss under recovery.
EVALUATION SETUP
To evaluate the impact of amnesic checkpointing and recovery on execution time and energy, we experimented with eight benchmarks from the NAS [16] suite (with the exception of ep, due to simulation complications). We ran these benchmarks with 8-32 threads on a simulated 8-32 core system. We implemented recomputation, checkpointing, and recovery under AmnesiCHK in Snipersim [17]. We extracted energy estimates from McPAT [18] integrated with Snipersim. Table 1 summarizes the configuration for the simulated architecture.
We implemented AmnesiCHK's compiler pass to embed RSlices into the binary as a Pin [19] tool. Recall that Snipersim relies on a Pin-based front-end, which facilitated seamless integration. We used a predetermined threshold for RSlice length: RSlices exceeding the threshold are excluded from the binary to prohibit excessive recomputation overhead along RSlices. In Section 5.5.1, we will discuss the impact of the threshold value on checkpointing overhead.
We considered the following configurations: No Ckpt, where no checkpointing takes place; Ckpt NE and Amn NE, conventional checkpointing and AmnesiCHK under error-free execution; and Ckpt E and Amn E, the corresponding configurations when an error occurs during execution. We adjust the checkpointing frequency to the expected error rates and the execution times of the applications. Without loss of generality, we distribute the checkpoint intervals uniformly over the execution time. As a result, applications with longer execution times checkpoint more.
Checkpointing Overhead
We start the evaluation with a characterization of the checkpointing overhead under AmnesiCHK. For a crisp comparison, we use the configurations from Section 4 under error-free execution, which only incur the overhead of checkpointing. Specifically, we use No Ckpt as a baseline for comparison, where no checkpointing takes place. Fig. 3 shows the execution time overhead of checkpointing and recovery. The first and third columns in each group show the execution time overhead of checkpointing for the evaluated benchmarks under Ckpt NE and Amn NE , respectively. As expected, Ckpt NE and Amn NE perform consistently worse than No Ckpt due to the checkpointing overhead. However, via recomputation, Amn NE is very effective in reducing the Ckpt NE 's time overhead due to checkpointing, by up to 28.81% (for is), and 11.92%, on average. The smallest reduction is 2.12% for cg, where Ckpt NE 's time overhead is already relatively low. This is because cg's checkpoint size per checkpointing interval is relatively small and the % of time spent in checkpointing accounts for only ≈ 9% of the total execution time. Fig. 4 shows the corresponding energy overhead of checkpointing and recovery, normalized to No Ckpt . The first and third columns in each group show the energy overhead of checkpointing for the evaluated benchmarks under Ckpt NE and Amn NE , respectively. The general trend is similar to the time overhead. Amn NE reduces the energy overhead of Ckpt NE by up to 26.93% (for is), and 12.53%, on average. Among the benchmarks, is is very amenable to recomputation: as the majority of the updated memory values can be recomputed (in case of recovery), Amn NE can exclude these from checkpoints, which leads to a higher reduction in checkpointing overhead w.r.t. Ckpt NE . The smallest energy reduction is 1.75% (for cg), in line with Fig. 3.
Recovery Overhead
In Section 5.1, we characterized purely the overhead of checkpointing by assuming error-free execution where periodic checkpointing still takes place. In this section, the goal is to quantify the overhead of recovery in the presence of errors. Recovery requires the establishment of a globally consistent state among all cores. For Ckpt E , this translates into each core rolling back to restore the machine state corresponding to the most recently established checkpoint. This also applies to Amn E , but Amn E needs to recompute the data values omitted from checkpointing, on top. Such data values have the corresponding RSlices baked into the binary. Therefore, although Amn E can reduce the checkpointing overhead, it incurs an extra overhead due to recomputation during recovery. In Fig. 3, the second and fourth columns in each group show the execution time overhead of Ckpt E and Amn E , respectively (w.r.t. No Ckpt ). Notice that in Ckpt E and Amn E , we have an error during execution. As expected, we observe higher time overhead under Ckpt E and Amn E than under Ckpt NE and Amn NE , respectively. Ckpt E and Amn E both incur the recovery overhead on top of the checkpointing overhead, as shown in Fig. 3.
Still, Amn E is very effective in reducing the time overhead of Ckpt E : although Amn E needs to recompute the values omitted from checkpointing, and thus incurs additional recovery overhead, the reduction in checkpointing overhead (due to the reduced checkpoint size) and the reduction in restore overhead (again, due to the reduced checkpoint size) outweigh the corresponding overhead of recomputation. As a result, Amn E reduces the time overhead of Ckpt E by up to 26.68% (for is), and 12.39%, on average. The smallest reduction is 1.9% for cg, in line with our previous observations. The second and fourth columns of each group in Fig. 4 show the percentage energy overhead of Ckpt E and Amn E (w.r.t. No Ckpt ). The energy overhead follows the very same trend as the time overhead. Amn E reduces the energy overhead of Ckpt E by up to 30% (for dc), and 13.47%, on average. The smallest energy reduction is 1.86% (for cg).
Putting it all together, Fig. 5 shows the percentage reduction of energy-delay product (EDP) of Amn NE and Amn E w.r.t. Ckpt NE and Ckpt E respectively, as a proxy for energy efficiency. EDP provides a notion of balance between the time overhead and energy consumption. We observe that Amn NE reduces EDP by up to 47.98% (for is), and 22.47%, on average, when compared to Ckpt NE . Similarly, Amn E reduces EDP by up to 48.07% (for dc), and 23.41%, on average, when compared to Ckpt E . Although is benefits more from Amn E in terms of performance, dc has a higher energy reduction due to Amn E , which in turn leads to a higher EDP reduction.
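For reference, EDP is simply the product of energy and execution time, so the reductions above can be reproduced from per-configuration energy and time. A small sketch with invented numbers (not the measured values):

def edp(energy_joules, time_seconds):
    # Energy-delay product: lower is better.
    return energy_joules * time_seconds

def edp_reduction_pct(edp_baseline, edp_amnesichk):
    return 100.0 * (edp_baseline - edp_amnesichk) / edp_baseline

# Invented example: Ckpt_NE spends 120 J over 11.3 s, Amn_NE spends 112 J over 10.9 s.
print(edp_reduction_pct(edp(120.0, 11.3), edp(112.0, 10.9)))  # ~10% EDP reduction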
Overall, we observe that AmnesiCHK can effectively reduce the overhead of checkpointing as well as of recovery. The effectiveness highly depends on the overhead of recomputation along RSlices and on how many values can be omitted from checkpointing. We will revisit the impact of RSlice length on checkpoint size reduction in Section 5.5.1.
Storage Complexity
The main benefit of AmnesiCHK stems from the reduction of checkpoint size, which has two critical implications: reducing the data size to be (i) moved to (and retrieved from); (ii) stored in the designated memory area for checkpointing. In addition to (i), (ii) can also reduce the energy consumption, e.g., due to less leakage or refresh in case of DRAM. At the same time, a reduction in checkpoint sizes can lead to a reduction in the memory footprint of checkpointing, reducing storage complexity.
The Overall columns in Fig. 6 show the % reduction in the overall checkpoint size (i.e., the total amount of data to be checkpointed) under Amn NE w.r.t. Ckpt NE . Among all benchmarks, is benefits the most from recomputation, where the overall checkpoint size reduces by 75.74% under Amn NE . On the other hand, cg is less responsive, and the checkpoint size reduces by only 6.99%. The average checkpoint size reduction over all benchmarks is 38.31%.
Recall that, per Section 2.1, if the error detection latency is no longer than the checkpoint period, which applies throughout this study, keeping the two most recent checkpoints suffices to be able to recover the global state (in case of an error in execution). Therefore, the size of the largest checkpoint under AmnesiCHK represents a more accurate proxy for the anticipated memory footprint reduction than the total size of all checkpoints (as captured by the Overall columns in Fig. 6). The Max columns in Fig. 6 hence show the % reduction in the size of the largest checkpoint under Amn NE w.r.t. Ckpt NE . If there is no value that can be recomputed within the largest checkpoint, AmnesiCHK cannot reduce the footprint size (although it may still reduce the total size of all checkpoints in an application). Fig. 6 reveals such a case: is has very limited Max reduction (2.04%) under Amn NE , but the highest Overall reduction. For the rest of the benchmarks, dc shows the largest reduction in Max of 58.3%; and ft, the smallest of 0.05%. For ft, AmnesiCHK practically cannot reduce the size of the largest checkpoint (as the Max column reveals), but the total checkpoint size can still reduce by 23.27% (as the Overall column reveals). As explained in Section 4, Ckpt NE and Amn NE exclude recovery due to error-free execution, hence cleanly capture the overhead, and particularly the size implications, of checkpointing. That said, the corresponding reductions under Amn E would be exactly the same as under Amn NE , since the presence of errors does not change the set of values that can be omitted from checkpointing.
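The difference between the Overall and Max columns boils down to two different aggregations over per-interval checkpoint sizes. The sketch below, with hypothetical sizes, shows why a benchmark such as is can score high on Overall reduction yet low on Max reduction:

# Hypothetical per-interval checkpoint sizes in MB under Ckpt_NE and Amn_NE.
ckpt_sizes = [40, 35, 120, 30]   # plain checkpointing
amn_sizes = [8, 7, 118, 6]       # with recomputable values omitted

overall_reduction = 100.0 * (sum(ckpt_sizes) - sum(amn_sizes)) / sum(ckpt_sizes)
max_reduction = 100.0 * (max(ckpt_sizes) - max(amn_sizes)) / max(ckpt_sizes)

print(f"Overall reduction: {overall_reduction:.1f}%")  # large: most intervals shrink a lot
print(f"Max reduction: {max_reduction:.1f}%")          # small: the largest checkpoint barely shrinks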
Coordinated Local Checkpointing
In our discussion so far, we covered coordinated global checkpointing. As explained in Section 2.1, a viable alternative is coordinated local checkpointing [20,9]. Local checkpointing is generally more scalable, as the overhead of checkpointing and recovery evolves with the number of communicating cores (as opposed to all cores under coordinated global checkpointing). Identifying communicating cores in a checkpointing interval, however, necessitates a mechanism to track inter-core data dependencies, which usually translates into continuous and dynamic monitoring and recording of inter-core interactions that may challenge scalability. We next investigate recomputation-enabled coordinated local checkpointing. In the following, we use the coordinated global checkpointing correspondent of each configuration as the baseline for normalization. Fig. 7 shows the normalized execution time under coordinated local checkpointing, specifically, Ckpt NE,Loc , Ckpt E,Loc , Amn NE,Loc and Amn E,Loc w.r.t. their global checkpointing counterparts (i.e., Ckpt NE , Ckpt E , Amn NE and Amn E , respectively). We observe that coordinated local checkpointing results in a lower time overhead for Ckpt NE,Loc , as indicated by normalized values below 1 for the majority of the benchmarks. The lower overhead is due to the lower number of cores checkpointing together. However, this is not the case for bt, cg and sp, where practically all cores communicate with one another in each checkpointing interval. For the rest of the benchmarks, the time overhead of Ckpt NE,Loc reduces by up to ≈42% for ft, 17% for dc, 36% for is, 32% for mg, and 10% for lu w.r.t. Ckpt NE .
AmnesiCHK incorporated into coordinated local checkpointing remains as effective as in global checkpointing. For all the benchmarks, the checkpointing (time) overhead under Amn NE,Loc remains below (or at most the same as) the overhead under the global checkpointing correspondent Amn NE . The reductions under Amn NE,Loc are not as pronounced as under Ckpt NE,Loc , mainly because the potential for recomputation does not change considerably under local schemes w.r.t global.
Specifically, bt, cg, lu, and sp do not observe any sizable reduction (≈≤ 1%) of the time overhead under Amn NE,Loc w.r.t. the global checkpointing counterpart Amn NE . For the rest of the benchmarks, the time overhead of Amn NE,Loc reduces by up to ≈8% for dc, 33% for ft, 15% for is, and 26% for mg w.r.t. the global checkpointing counterpart Amn NE .
Based on this outcome, we can conclude that recomputation-enabled checkpointing and recovery incorporated into coordinated local checkpointing is at least as effective as its global checkpointing counterpart.
Impact of RSlice Length on Checkpoint Size
RSlice length (in terms of instructions) dictates the overhead of recomputation. Longer RSlices incur a higher recomputation overhead. The overhead of recomputation is invisible under error-free execution, as recomputation may be necessary only during recovery upon detection of an error. Throughout the evaluation, we used a threshold of 10 instructions (except is, where threshold is 5) to identify the RSlices to be embedded into the binary.
A higher threshold usually translates into being able to include more RSlices in the binary, and therefore a higher likelihood for any value to find a corresponding RSlice in the binary (and thereby to get omitted from checkpointing). As a result, the checkpoint sizes tend to reduce. Table 2 shows the impact of RSlice length on the overall checkpoint size under Amn NE . As an example, for bt, we observe that the total checkpoint size reduces by up to 89.91% when the threshold for RSlice length is allowed to grow up to 50 instructions, and by 36.54% when the threshold for RSlice length remains less than or equal to 10. The threshold is a critical design parameter which dictates the overhead of recomputation (during recovery in case of an error), and the storage complexity of the microarchitectural support for AmnesiCHK (as larger buffers are necessary to keep track of larger RSlices).
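Conceptually, raising the threshold admits more RSlices into the binary and thereby lets more values be omitted from checkpoints. A simplified sketch of this selection, with hypothetical RSlice lengths and value sizes (not taken from Table 2):

# Each entry: (value identifier, RSlice length in instructions, value size in bytes).
rslices = [("a", 4, 8), ("b", 9, 8), ("c", 18, 8), ("d", 42, 8), ("e", 61, 8)]

def embeddable(threshold):
    # RSlices short enough to be embedded into the binary under the given threshold.
    return [r for r in rslices if r[1] <= threshold]

total_bytes = sum(size for _, _, size in rslices)
for threshold in (10, 50):
    omitted_bytes = sum(size for _, _, size in embeddable(threshold))
    print(threshold, f"{100.0 * omitted_bytes / total_bytes:.0f}% of these values can be omitted from checkpoints")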
At the same time, data values that have the corresponding RSlices baked into the binary (and hence are recomputable) are not necessarily uniformly distributed over the checkpoint intervals. Therefore, for each checkpoint interval, the impact of recomputation may vary (if recomputation is possible at all). Fig. 8 shows this effect for bt, by capturing how the % reduction in checkpoint size changes over the execution time, considering different threshold values. We observe that Amn NE reduces checkpoint size more in certain checkpoint intervals when compared to others. Such temporal variation points to more optimization opportunities for AmnesiCHK: for example, instead of checkpointing periodically, adjusting the time to checkpoint to exploit more recomputation opportunities. We leave the exploration of this to future work. (For is, the overall reduction is 75.74% for its threshold of 5; this is not shown in the Table to keep it simple.)
Impact of Error Rate
The expected (system-wide) error rate (perr) dictates the rollback and recovery overhead, as captured by Equations 2 and 3. Our discussion so far characterized the recovery overhead under Ckpt E and Amn E assuming a single error within the course of execution. In this section we expand this analysis to execution under more frequent onset of errors.
With increasing error rates, the expected number of errors within the course of execution increases, which in turn increases the recovery overhead due to more frequent recoveries within the course of execution. Fig. 9 shows the % execution time overhead of Ckpt E and Amn E w.r.t. No Ckpt , considering different numbers of (up to 5) errors within the course of execution. We assume that the errors in each case are uniformly distributed over the execution time. Not surprisingly, the execution time overhead increases with increasing number of errors. Some benchmarks experience very high time overhead as the error rate increases. This is mainly because the execution time under No Ckpt is relatively small such that the overhead of rollback and recovery becomes proportionally higher. Among the benchmarks, ft suffers the most as its per recovery overhead is relatively high.
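Equations 2 and 3 are not reproduced in this section, so the following is only a schematic model of the trend in Fig. 9: total time grows with the number of errors because each recovery adds a restore, recomputation, and re-execution cost. All constants below are invented:

def total_time(t_base, n_ckpts, t_ckpt, n_errors, t_restore, t_recompute, t_lost_work):
    # Schematic: base execution + checkpointing cost + per-error recovery cost.
    checkpointing = n_ckpts * t_ckpt
    recovery = n_errors * (t_restore + t_recompute + t_lost_work)
    return t_base + checkpointing + recovery

for n_err in range(6):
    t = total_time(t_base=10.0, n_ckpts=50, t_ckpt=0.02,
                   n_errors=n_err, t_restore=0.15, t_recompute=0.05, t_lost_work=0.10)
    print(n_err, f"overhead = {100.0 * (t - 10.0) / 10.0:.1f}%")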
While the execution time overhead patterns are very similar for Ckpt E and Amn E , the overheads are lower in Amn E , since the overall recovery overhead (including restoring the checkpointed values and recomputing the missing values on top) is considerably lower in Amn E . Specifically, the time overhead reduces by up to 26.68% (for is) for a single error, 25.35% (for dc) for two errors, 26.87% (for dc) for three errors, 21.58% (for dc) for four errors, and 19.92% (for is) for five errors, in Amn E w.r.t. Ckpt E . On average, the execution time overhead reduction ranges from ≈9% up to 12% for different error rates under Amn E .
EDP also increases with increasing error rates. The general trend is similar to the time overhead, but more pronounced. Under Amn E EDP reduces by up to 48.07% (for is) for a single error, 47.77% (for dc) for two errors, 50.04% (for dc) for three errors, 42.99% (for dc) for four errors, 34.99% (for is) for five errors. On average, EDP reduction ranges from ≈18% up to 24% for different error rates under Amn E .
Impact of Checkpointing Frequency
As captured by Equation 1, the time or energy overhead of checkpointing is a function of the frequency of checkpointing, as well as the amount of machine state being updated during each checkpointing interval. In Section 5.5.2, we evaluated the impact of the error rate on recovery overhead under a fixed checkpointing frequency. In this section, we evaluate the impact of the checkpointing frequency on checkpointing overhead under a fixed error rate. To do so, we vary the checkpointing frequency for each benchmark to yield 25, 50, 75 and 100 checkpoints within the course of execution. These checkpoints are uniformly distributed over the execution time. Fig. 10 shows the execution time overhead of Ckpt NE and Amn NE (w.r.t. No Ckpt ), considering different number of checkpoints. Naturally, the time overhead of checkpointing increases with the number of checkpoints. Among all the benchmarks, ft experiences the largest time overhead.
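In the spirit of Equation 1 (not the equation itself), the checkpointing cost can be approximated as the updated machine state per interval divided by the available memory bandwidth, summed over all intervals. Both quantities in the sketch below are assumed values:

def checkpointing_seconds(dirty_bytes_per_interval, bandwidth_bytes_per_s):
    # Approximate time spent writing checkpoints over the whole run.
    return sum(d / bandwidth_bytes_per_s for d in dirty_bytes_per_interval)

bandwidth = 12e9  # assume 12 GB/s of sustainable memory bandwidth
for n_ckpts in (25, 50, 75, 100):
    dirty = [30e6] * n_ckpts  # assume ~30 MB of updated machine state per interval
    print(n_ckpts, f"{checkpointing_seconds(dirty, bandwidth) * 1e3:.1f} ms spent checkpointing")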
The general trend for Amn NE is very similar to Ckpt NE ; however, Amn NE considerably reduces the time overhead of checkpointing. An interesting point in Fig. 10 is the lower overhead of 75-checkpointed runs when compared to 50-checkpointed runs. Although it seems unintuitive at first, there is a catch: when we change the checkpointing frequency, the start time of each checkpoint interval becomes different (since we uniformly distribute the checkpoints over the execution time). The ability of recomputation to reduce the checkpoint size (and thereby the checkpointing overhead) depends on whether the corresponding RSlices in a given checkpoint interval exist (i.e., were baked into the binary). If the checkpoints fall into intervals of execution with a small number of recomputable values, AmnesiCHK cannot reduce the checkpointing overhead significantly. Such a corner case is is, where the 50-checkpointed run has very limited RSlice coverage w.r.t. the 75-checkpointed run. As the data size that can be recomputed (i.e., excluded from checkpointing) is smaller, the time overhead is higher for the 50-checkpointed run. The time overhead reduces by up to 28
Scalability
The number of threads involved in execution affects the overhead of checkpointing, due to both an increase in the cost of coordination (among threads) and a potential increase in the machine state to be checkpointed. As a consequence, the memory bandwidth requirement tends to increase as well. We next look into the scalability of AmnesiCHK with increasing thread count. We experiment with 8-, 16-, and 32-threaded executions where each thread is pinned to a separate core.
RELATED WORK
Checkpointing and recovery solutions have been studied extensively over the decades. The proposed solutions can be categorized into software-based or hardware-based checkpointing, and application-level or system-level checkpointing. Software-based proposals use periodic barriers to perform system-level [21], application-level [22], or hybrid checkpoints [23].
Hardware proposals [12,8,9] reduce the checkpoint and restart penalties, but can increase hardware complexity. For example, in Rebound [12] when a core is checkpointing, the L2 controller writes dirty lines back to main memory while keeping clean copies in L2, and the memory controller logs the old values of the updated memory addresses. In addition, between checkpoint times, when a dirty cache line is written back to memory, the memory controller has to log the old value, as well. This is done for the first write-back and consecutive writes to the same memory address can be excluded from being logged. SafetyNet [9], on the other hand, explicitly checkpoints the register file, and incrementally checkpoints the memory state by logging the old values.
Compiler-assisted checkpointing [24] improves the performance of automated checkpointing by presenting a compiler analysis for incremental checkpointing, aiming to reduce checkpoint size. In incremental checkpointing, memory updates are monitored and are omitted from checkpointing if a particular memory location has not been modified between two adjacent checkpoints. This mechanism reduces the amount of data to be checkpointed, and is widely used in many checkpointing schemes. We also employ incremental checkpointing in our analysis. In [24], instead of using runtime mechanisms (such as exploiting the cache coherency protocol to identify updated memory locations), they rely on compiler analysis to track the memory updates that can be excluded from checkpoints. To facilitate the compiler analysis, the source code should be manually annotated, indicating the starting point of each checkpoint. However, this approach has limited applicability in practice, since it may not always be feasible to obtain and/or annotate the source code.
A relevant work, presented in [25], introduces the notion of idempotent execution, which does not need explicit checkpoints to recover from errors. Instead, in case of an error, re-executing the idempotent region suffices for recovery. Such idempotent regions are constructed by the compiler. As the name suggests, idempotent regions regenerate the same output regardless of how many times they are executed with the given program state. In comparison to AmnesiCHK, idempotent execution has limited flexibility. Generally, idempotent regions are large, and therefore incur high overhead during recovery, while we employ fine-grained data recomputation (along a short separate RSlice for each value), and each RSlice contains only the necessary instructions to generate a single value. Identifying idempotent regions is also a daunting task, and it may not be easy to find fine-grained idempotent regions for a large class of applications. In this regard, RSlices provide more flexibility in which values are checkpointed and which are recomputed.
A recent work demonstrates the applicability of recomputation to loop-based code [26] to reduce the checkpointing overhead. Similar to our approach, they try to reduce the checkpoint size by logging enough state to enable recomputation in case of an error in execution. When an error occurs, they determine which parts of the computation were not completed and eventually recompute them by re-executing the corresponding loop iterations. Although it is very similar to our approach in spirit, their approach is restricted to loop-based code, whereas our approach can target arbitrary data as long as its corresponding RSlice exists.
Similar to [26], the authors of [27] exploit the regularity of workloads, such as matrix-vector multiplication and iterative linear solvers, to reduce the performance overhead of checkpointing by relying on partial recomputation. Their fundamental observation is that even when an error occurs in computation, most of the results are still correct for those types of workloads. So, instead of simply rolling back and repeating the entire segment of computation, they employ algorithmic error localization and partial recomputation to efficiently correct the erroneous results.
In [28], the authors explore energy concerns for checkpointing and evaluate a wide range of checkpointing policies to understand their respective energy, performance and I/O tradeoffs. They provide detailed insights into the energy overhead, as well as the performance impact, associated with different checkpointing policies.
CONCLUSION
In the presence of errors, systematic checkpointing of the machine state makes recovery of execution from a safe state possible. The performance and energy overhead, however, can become overwhelming with increasing frequency of checkpointing and recovery, as dictated by the growth in the frequency of anticipated errors. In this paper, we discuss how recomputation of data values which otherwise would be read from a checkpoint (from main memory or secondary storage) can help reduce these overheads. We observe that recomputation can reduce the memory footprint by up to 23.91%, which is accompanied by a reduction in time, energy and EDP overhead by up to 11.92%, 12.53%, and 23.41%, respectively, even considering a relatively small-scale system. We expect the reduction to become much higher and more visible in larger scale systems, where checkpointing overhead becomes more prominent.
Clinical Study of Scleral Fixated Intraocular Lens Implantation in Blunt Ocular Trauma
Aim: To assess visual outcome and complications associated with SFIOL implantation in traumatic lens subluxation/dislocation cases. Methods: This is a retrospective study of 45 patients who were managed for traumatic dislocation/subluxation of clear or cataractous lenses from June 2019 to July 2020 at Krishna Hospital, Karad, Satara. All cases underwent anterior vitrectomy/3-port pars plana vitrectomy, removal of the lens, and ab externo 2-point scleral fixation with a rigid or foldable SFIOL. In posteriorly dislocated/subluxated lenses, vitrectomy was done and the lens was removed using pick forceps and retrieved by the handshake technique. In anteriorly dislocated cataractous lenses, the lens was removed through the tunnel incision. Results: The majority of the patients were between 55-65 years of age, with male preponderance (73.3%). Out of 45 cases, 21 cases (46.6%) were traumatic dislocated lens and 24 cases (53.3%) were traumatic subluxated lens. The mean preoperative BCVA was 0.13 ± 0.24 logMAR, which improved to 0.39 ± 0.366 logMAR postoperatively (P < 0.0001). Preoperative BCVA in logMAR was 0.3 or better in 39 cases (86.6%) and 0.3 to 1.0 in 6 cases (13.3%). Postoperative BCVA in logMAR was 0.3 or better in 21 cases (46.67%) and 0.3 to 1 in 24 cases (53.3%). The P-value is 0.00057, which is significant. Early postoperative complications noted were raised intraocular pressure in 12 cases (26.6%), corneal edema in 9 cases (20%), vitreous hemorrhage in 8 cases (17.7%) and hypotony in 3 cases (6.67%). Late postoperative complications were persistent elevation of intraocular pressure in 10 cases (22.2%), cystoid macular edema in 3 cases (6.67%), and epiretinal membrane in 3 cases (6.67%). Conclusion: In every trauma case, long-term follow-up is needed to detect complications and initiate treatment at the earliest.
INTRODUCTION
One of the major causes of serious visual impairment is ocular trauma, either blunt or penetrating. An estimated 18 million individuals worldwide suffer the effects of ocular trauma every year [1]. Traumatic cataract and lens dislocation or loss are the most common and significant consequences of eye injury [2]. Cases with post-traumatic cataract or lens dislocation are treated with lens removal surgery. In many cases, this is associated with injury to other ocular structures, which makes the management of ocular trauma patients with deficient posterior capsular support challenging. Several options exist to implant an intraocular lens (IOL) in eyes with lens subluxation or dislocation secondary to ocular trauma and insufficient capsular or zonular support, such as a scleral-fixated posterior chamber intraocular lens (SFIOL), an anterior chamber intraocular lens (ACIOL), or an iris-fixated IOL. This can be done as a primary or a secondary procedure [3].
ACIOL or iris-fixated IOL implantation can lead to a number of complications, including corneal endothelial cell decompensation, cystoid macular edema, worsening of glaucoma, and iris chafing [4]. Consequently, SFIOL implantation has some relative advantages. It reduces the risk of corneal decompensation, peripheral anterior synechiae, and secondary glaucoma by keeping the lens further away from the anterior segment structures [5,6]. However, SFIOL implantation is associated with suture degradation and breakage, suture knot-related risks, conjunctival and sclerotomy-related complications, scleral incision issues, and so on [7,8].
To reduce suture-related complications, a few studies introduce the haptics of three-piece IOLs into a scleral tunnel; however, there is still some risk of postoperative hypotony, IOL slippage and decentration, scleral tunnel rupture, insufficient haptic fixation strength and, in addition, haptic deformation after the operation [9,10].
In this article, we have studied the postoperative visual outcome and complications associated with both rigid and foldable SFIOL implantation in ocular trauma patients without adequate capsular support.
MATERIALS AND METHODS
A total of 45 patients with traumatic subluxation or dislocation of the lens were taken up for the study. The study was conducted over one year, from June 2019 to July 2020, at Krishna Hospital, Karad, Satara.
Methodology
A detailed history of the type of trauma (blunt/penetrating), the eye involved, the object causing trauma, and the duration between trauma and presentation was taken. A thorough ocular examination including visual acuity, slit-lamp examination, direct and indirect ophthalmoscopy, slit-lamp biomicroscopy with a +90 dioptre lens, tonometry, gonioscopy, B-scan ultrasonography and routine X-ray of the orbit was done. OCT, FFA, CT scan and MRI were done whenever required. A-scan biometry and keratometry were done for intraocular lens power calculation. IOL power was calculated using the SRK II formula.
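As a reference for the power calculation step, the SRK II formula adjusts the lens A-constant by axial length and is commonly written as P = A1 - 0.9K - 2.5L. The sketch below uses the usually published SRK II offsets and invented example inputs; it is illustrative only, not a substitute for clinical biometry software.

def srk2_iol_power(a_constant, k_avg_diopters, axial_length_mm):
    # SRK II: adjust the A-constant by axial length, then apply the SRK regression.
    if axial_length_mm < 20.0:
        a1 = a_constant + 3.0
    elif axial_length_mm < 21.0:
        a1 = a_constant + 2.0
    elif axial_length_mm < 22.0:
        a1 = a_constant + 1.0
    elif axial_length_mm <= 24.5:
        a1 = a_constant
    else:
        a1 = a_constant - 0.5
    return a1 - 0.9 * k_avg_diopters - 2.5 * axial_length_mm

print(srk2_iol_power(a_constant=118.4, k_avg_diopters=44.0, axial_length_mm=23.2))  # ~20.8 D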
Preoperative work up
Pre-operative investigations including complete blood count, random blood sugar, chest X-ray and electrocardiogram were done. Informed and written consent was taken from the patients, as well as from guardians in the case of children. All the surgeries were performed by a single surgeon. Peribulbar anaesthesia was obtained using a 4 ml mixture of 2% xylocaine with adrenaline and 2 ml of 0.75% bupivacaine, with addition of hyaluronidase. The eye was painted using 5% povidone iodine and the same drops were instilled topically.
Surgery
All cases underwent anterior vitrectomy/3-port pars plana vitrectomy, removal of the lens, and ab externo 2-point scleral fixation. In posteriorly dislocated/subluxated lenses, vitrectomy was done and the lens was removed using pick forceps and retrieved by the handshake technique through the scleral tunnel. In anteriorly dislocated cataractous lenses, the lens was removed through the tunnel incision. Ab externo four-point fixation was performed with polypropylene suture and a rigid PMMA lens (Aurolab, India).
The main steps after lens removal, vitrectomy and PVD induction are as follows. Partial-thickness scleral flaps of around 3 x 3 mm were made at the 3 and 9 o'clock positions. A bent-tip, hollow 26 G needle was introduced from one side of the scleral tunnel at 9 o'clock, perpendicular to the scleral wall and parallel to the iris. A 10-0 polypropylene suture on a straight needle was introduced from the opposite 3 o'clock edge to meet the 26 G needle in the pupillary plane. The 10-0 Prolene suture needle was engaged in the lumen of the 26 G needle and the needle was carefully withdrawn. The end of the 10-0 Prolene suture was stretched just behind the iris plane from 3 to 9 o'clock. A similar step was repeated from the opposite side of the tunnel for 9 to 3 o'clock. At 12 o'clock, a 7 mm scleral tunnel was made and the two strands of Prolene suture were retrieved. The external Prolene sutures were cut in the middle and the ends were tied to the eyelets at the haptics of the SFIOL. The IOL was inserted into the anterior chamber and positioned behind the iris, exerting controlled traction on the exposed ends of the suture. The knots were tied. The same technique was used with the foldable SFIOL (Acryfold hydrophilic single-piece IOL); the Acryfold type of IOL was used so that the suture could be passed through the eyelets of the IOL haptics. Postoperatively, all patients were given a topical antibiotic-steroid combination eye drop (gatifloxacin 0.3% + dexamethasone 0.1%). Patients were evaluated on day 1, at 1 week, 3 weeks and 3 months, with visual acuity recording, slit-lamp examination, IOP measurement and dilated fundus examination.
DISCUSSION
Traumatic cataract and lens dislocation are the main causes of severe visual loss after an eye injury. In case of inadequate capsular support or capsular damage, SFIOL implantation is advantageous over other IOL implantation procedures.
Moreover, ACIOL implantation is often not feasible because of defects in the iris and the absence of vitreous support after pars plana vitrectomy in traumatized eyes.
Surgery and visual rehabilitation in these eyes are often difficult because of the presence of associated anterior or posterior segment complications. In this study, we describe the complications and visual outcomes of SFIOL implantation in traumatic subluxation or dislocation of the lens. The mean preoperative BCVA was 0.13 ± 0.24 logMAR, which improved to 0.39 ± 0.366 logMAR postoperatively, and the difference is statistically significant (P < 0.0001). However, the visual outcome in trauma cases can be confounded by various factors related to the mode of injury, the extent of injury, and anterior and posterior segment comorbidities related to trauma [11].
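The pre/post BCVA comparison above is a paired design. A minimal sketch of how such a P value can be obtained from per-eye logMAR values; the acuity values below are invented for illustration and are not the study data.

from scipy import stats

pre_logmar = [1.3, 1.0, 1.8, 0.9, 1.5, 1.2]    # hypothetical preoperative BCVA
post_logmar = [0.5, 0.3, 1.0, 0.2, 0.6, 0.4]   # hypothetical postoperative BCVA

t_stat, p_value = stats.ttest_rel(pre_logmar, post_logmar)
mean_change = sum(b - a for a, b in zip(pre_logmar, post_logmar)) / len(pre_logmar)
print(f"mean change = {mean_change:+.2f} logMAR, p = {p_value:.4f}")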
Hypotony was seen in 3 cases, which were associated with horseshoe retinal tears, base avulsion and retinal dialysis. All were treated prophylactically intraoperatively. A study by Zhao and colleagues [11] on SFIOL implantation in traumatic aphakia reported an incidence of glaucoma of 7.2%. The initial increase in IOP after injury may be due to uveitis and hyphema, which usually respond to topical steroids and antiglaucoma medications. Transient raised intraocular pressure did not influence the final visual outcome in our study.
Minor vitreous hemorrhage occurred because a 26 G needle was used to penetrate the sclera, and it resolved without treatment. Late postoperative complications (at 3 months postoperatively) included persistent elevation of intraocular pressure in 10 cases (22.2%). Late-onset glaucoma occurs secondary to trabecular meshwork damage and angle recession [19]. 7 cases (15.56%) had associated angle recession found on gonioscopy and 3 cases (6.67%) had angle-closure glaucoma secondary to anterior dislocation of the lens. The incidence of CME has been found to be around 1-2% following SFIOL implantation [11,17,16,18]. In our study, cystoid macular edema was seen in 3 cases (6.67%) and epiretinal membrane in 3 cases (6.67%). No other complications such as IOL tilt, suture erosion or breakage, retinal detachment, endophthalmitis, or suprachoroidal hemorrhage were observed during the follow-up period.
CONCLUSION
The final visual outcome of SFIOL implantation in post-traumatic subluxation/dislocation of the clear or cataractous lens may be influenced by concomitant anterior and posterior segment abnormalities. The early and late postoperative complications noted in our study were compared with those of other similar studies. Nonetheless, in every trauma case, long-term follow-up is needed to detect complications and initiate treatment at the earliest.
CONSENT
As per international standard or university standard, patients' written consent has been collected and preserved by the author(s).
ETHICAL APPROVAL
As per international standard or university standard written ethical approval has been collected and preserved by the author(s).
Multidrug-Resistant Pandemic (H1N1) 2009 Infection in Immunocompetent Child
Recent case reports describe multidrug-resistant influenza A pandemic (H1N1) 2009 virus infection in immunocompromised patients exposed to neuraminidase inhibitors because of an I223R neuraminidase mutation. We report a case of multidrug-resistant pandemic (H1N1) 2009 bearing the I223R mutation in an ambulatory child with no previous exposure to neuraminidase inhibitors.
Recent case reports describe multidrug-resistant infl uenza A pandemic (H1N1) 2009 virus infection in immunocompromised patients exposed to neuraminidase inhibitors because of an I223R neuraminidase mutation. We report a case of multidrug-resistant pandemic (H1N1) 2009 bearing the I223R mutation in an ambulatory child with no previous exposure to neuraminidase inhibitors. (1). Zanamivir resistance is also rare in infl uenza viruses. A Q136K (glutamine to lysine mutation, N2 NA numbering) mutation conferring zanamivir resistance in infl uenza (H1N1) viruses has been described in an in vitro study but has not been detected in clinical specimens from patients (2). An infl uenza B strain carrying a R152K (arginine to lysine) mutation and resistant to oseltamivir and zanamivir has been reported (3). Recent case reports described multidrug-resistant pandemic (H1N1) 2009 infection in immunocompromised patients exposed to oseltamivir and zanamivir because of an I223R (isoleucine to arginine) mutation in NA (4-6). We report a case of infection by multidrug-resistant pandemic (H1N1) 2009 virus bearing the I223R mutation in an ambulatory child with no previous exposure to NAI.
The Study
On October 30, 2009, a 15-year-old girl with a history of asthma sought treatment at an emergency department in the Greater Toronto area after 3 days of cough and rhinorrhea and 1 day of chest pain. Several children at her school also had respiratory symptoms. On arrival, she was febrile to 39.6°C and mildly dehydrated; physical examination was otherwise unremarkable. Blood count and chest radiograph showed no abnormalities. The child received intravenous rehydration in the emergency department, was discharged home with a prescription for oseltamivir therapy, and recovered uneventfully. A nasopharyngeal swab was forwarded to Ontario Agency for Health Protection and Promotion (OAHPP) for influenza testing. Pandemic (H1N1) 2009 was detected by real-time reverse transcription PCR (7). Subsequently, the specimen was screened by a single-nucleotide polymorphism assay distributed by Canada's National Microbiology Laboratory and the World Health Organization pyrosequencing protocol for the presence of the H275Y mutation (8). Both assays confirmed the isolate was wild type (histidine) at aa 275 of NA.
As part of pandemic surveillance, the specimen was cultured in rhesus monkey kidney cells and whole genome sequencing was performed by using a modified World Health Organization protocol (9). Sequences were deposited into GenBank under accession nos. CY060619-CY060626. In comparison with A/California/7/2009 (H1N1), several nonsynonymous mutations were identified: I201V and E538K in polymerase; S220T, D239E, and K465R in hemagglutinin; V100I and M316I in nucleoprotein; S99P and I123V in nonstructural protein; T16I, V106I, I223R, N248D, and N369K in NA. Apart from I201V, which is of unknown significance and has not been previously documented in pandemic (H1N1) 2009, these mutations were detected in 22% to 72% of pandemic (H1N1) 2009 strains circulating in Ontario at the same time that underwent whole genome sequencing. The I223R mutation results from a 1 nucleotide substitution at codon 223 of NA. To rule out the possibility of acquisition of I223R during culture in rhesus monkey kidney cells, the NA gene of the primary sample and its first passage were sequenced. Both had 100% identical nucleotide composition.
The 50% inhibitory concentration (IC50) values for oseltamivir carboxylate and zanamivir were determined by chemiluminescent NAI assay (NA-Star; Applied Biosystems). Compared with a wild-type control, the I223R mutant exhibited 28- and 12-fold increases in IC50s for oseltamivir and zanamivir, respectively. The oseltamivir IC50 of the I223R strain was elevated, but not as much as observed in an H275Y control, which had a 168-fold IC50 elevation compared with the wild-type strain and was 6× higher than that of the I223R strain when tested in parallel. Similar results were obtained when the sample was retested at the National Microbiology Laboratory (Tables 1, 2). The clinical significance of the I223R mutation is poorly understood because the IC50s for oseltamivir and zanamivir are well below achievable serum levels when these drugs are administered at recommended doses. Oral oseltamivir at a dose of 75 mg 2×/d resulted in a maximum serum concentration (Cmax) of 348 ng/mL (1,115 nmol). Repeated inhalation of 10 mg of the dried powder formulation of zanamivir produced a Cmax of 39 to 54 ng/mL (117.5-162.7 nmol) at 1 to 2 h postdose, with an elimination half-life of 4-5 h (10). Intravenous zanamivir at a dose of 600 mg resulted in a Cmax of 32,000-39,000 ng/mL (96,300-117,360 nmol).
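The nmol values quoted alongside the ng/mL concentrations are straightforward unit conversions. A sketch, assuming molecular weights of roughly 312.4 g/mol for oseltamivir and 332.3 g/mol for zanamivir (these weights are our assumption, not stated in the text):

def ng_per_ml_to_nmol_per_l(conc_ng_per_ml, mol_weight_g_per_mol):
    # ng/mL -> nmol/L: divide by molecular weight (g/mol) and scale by 1000.
    return conc_ng_per_ml / mol_weight_g_per_mol * 1000.0

print(ng_per_ml_to_nmol_per_l(348.0, 312.4))  # ~1114 nmol/L, oral oseltamivir C_max
print(ng_per_ml_to_nmol_per_l(54.0, 332.3))   # ~162.5 nmol/L, inhaled zanamivir C_max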
I223 is recognized as one of the framework residues responsible for stabilizing the NAI active site; type-specific mutations at these residues have resulted in reduced susceptibility to NAIs (11,12). Although the exact mechanism by which mutations at the framework residue alter susceptibility to particular NAIs is not clear, simulation studies suggest that the NA electrostatic potential plays a major role in the interaction and stabilization of NAIs within the NA cavity (13). Nonhomologous substitution of a nonpolar hydrophobic amino acid, isoleucine, with the positively charged (polar) hydrophilic amino acid, arginine (I223R), seems to be a key point in alteration of the NA cavity. These changes most likely result in active site endpoint interactions affecting drug binding affinity and could disturb the proposed electrostatic binding funnel instrumental in directing NAIs into and out of binding sites on NA (14). Three independent case reports described infections caused by multidrug-resistant pandemic (H1N1) 2009 in immunocompromised patients who received prolonged treatment with oseltamivir followed by zanamivir; 2 of the infections were fatal. In 2 patients, infection developed (H275Y followed by I223R alone with simultaneous reversion to wild type at position 275) (4,5); dual H275Y/I223R mutations developed in the third patient (6). Our patient is unique because she was immunocompetent, had no prior exposure to NAIs, and had an uneventful recovery. A similar resistance profile was seen in the published case exhibiting I223R alone, where IC50s for oseltamivir, zanamivir, and peramivir were elevated by 45-, 10-, and 7-fold, respectively (4). The origin of the multiresistant isolate in this patient's case could not be established. The I223R mutation may have occurred spontaneously in our patient. Alternatively, she acquired infection in the ambulatory setting, possibly as part of a school outbreak. Resistance may have evolved following random mutation, or during NAI therapy in another patient. We could not investigate this further because no samples were submitted from contacts. Using reverse genetics, it has been recently shown that an I223V NA change increased oseltamivir and peramivir resistance in pandemic (H1N1) 2009 and also restored NA substrate affinity and replication fitness in vitro (15).
Conclusions
Although the I223 residue is highly conserved across pandemic (H1N1) 2009 strains, the global distribution of pandemic (H1N1) 2009 was made possible by the virus adapting for stable circulation through genetic changes contributing to fitness and facilitating transmissibility from person to person. This report of community acquisition of a multidrug-resistant strain of pandemic (H1N1) 2009 reinforces the need to continue close monitoring for the emergence of resistant viruses and incorporation of screening for newly discovered resistance mutations into clinical diagnostics.
Phenotypic resistance testing was partially funded by a research grant provided to Ontario Agency for Health Protection and Promotion (J.B.G. and D.E.L.) by GlaxoSmithKline Inc.
Dr Eshaghi is a research technologist in the Molecular Research department at The Public Health Laboratories, Ontario Agency for Health Protection and Promotion. His research interests focus on molecular evolution and characterization of respiratory viruses, including emergence of resistance in influenza viruses.
|
2014-10-01T00:00:00.000Z
|
2011-08-01T00:00:00.000
|
{
"year": 2011,
"sha1": "29fd0e2efc008791959ead66062205361f47c981",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3201/eid1708.102004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "29fd0e2efc008791959ead66062205361f47c981",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
228807543
|
pes2o/s2orc
|
v3-fos-license
|
Undiagnosed diabetes, hypertension, and hypercholesterolaemia in an overweight or obese population: implications for cardiovascular disease risk screening programme
Introduction Establishing the burden of undiagnosed CVD risk factors is critical to monitoring public health efforts related to screening and diagnosis. Objective To assess the proportion and determinants of undiagnosed diabetes, hypertension, and hypercholesterolaemia, among overweight or obese adults. Methods A sample of 1200 participants aged 35-64 years with a BMI ≥ 25 kg/m2 was selected from the Colombo district. Data were collected through a questionnaire, anthropometry, blood pressure measurement, and blood sampling for fasting plasma glucose, HbA1c, and lipid profile. Undiagnosed diabetes, hypertension, and hypercholesterolaemia were defined as fasting plasma glucose ≥ 126 mg/dL or HbA1c ≥ 6.5%; systolic blood pressure ≥ 140 mmHg or diastolic blood pressure ≥ 90 mmHg; total cholesterol ≥ 240 mg/dl respectively, in a person without a previous diagnosis. Multiple logistic regression analyses were carried out to identify determinants. Results The prevalence of diabetes was 28.0% (25.5, 30.5), hypertension, 33.4% (30.7, 36.1) and hypercholesterolaemia, 31.9% (29.2, 34.5). The proportion of undiagnosed diabetes was 13.8% (11.9, 15.8), undiagnosed hypertension 11.3% (9.5, 13.1), and undiagnosed hypercholesterolaemia 17.8% (15.6, 19.9). Undiagnosed cases accounted for almost half of all diabetes cases, one-third of all hypertension cases, and more than half (56%) of all high cholesterol cases. The key determinants for undiagnosed CVD risk were: male sex, low or middle income, rural residence, and relatively younger age. Conclusion CVD screening programmes should be tailored to target populations based on these determinants and provide basic diagnostic facilities in all health centres. The 'proportion undiagnosed' in the population may be a useful indicator to evaluate their effectiveness.
Introduction
Cardiovascular diseases (CVD) are the leading cause of death globally. In 2015, there were an estimated 422.7 million cases and 17.9 million deaths due to CVD in the world [1]. According to the Global Burden of Disease 2016 Study, CVDs accounted for 20% of the total disease burden in women and 24% of the total burden in men [2]. Ischaemic heart disease, causing 174 million Disability Adjusted Life Years (DALY), remained the leading cause of total global CVD burden. In Sri Lanka, ischaemic heart disease is the leading cause of death and was responsible for 31.0 deaths per 100,000 population in 2017 [3]. If the health systems fail to respond to this burden using a scientific approach, the Sustainable Development Goal (SDG) target of reducing premature deaths due to noncommunicable diseases by one-third by 2030 will be far from reality [2,4].
There are multiple risk factors that contribute to cardiovascular risk. Abnormal lipids, smoking, hypertension, diabetes, abdominal obesity, psychosocial factors, low consumption of fruits and vegetables, high alcohol consumption, and physical inactivity account for most of the CVD risk worldwide [5]. The WHO emphasizes that people who are at high cardiovascular risk due to the presence of one or more risk factors such as hypertension, diabetes, hyperlipidaemia or already established disease need early detection and management using counselling and medicines, as appropriate [6].
In Sri Lanka, Healthy Lifestyle Centers (HLC) were established by the Ministry of Health in 2011 as a noncommunicable disease (NCD) screening service at primary health care institutions. By the first quarter of 2016, the cumulative percentage of the target population (aged 40-65 years) screened for NCD risk factors at HLC was 25.5%, with a lower proportion of men than women [7]. This percentage is still low in contrast to the rising trend of CVD, and recent improvement in availability and readiness of services at the HLC [3,8,9]. With this low coverage, there is a high chance that many individuals in the population are living with CVD risk factors such as diabetes, hypertension, and hypercholesterolaemia without knowing their presence. Thus, establishing the burden of undiagnosed CVD risk factors is critical to monitoring public health efforts related to screening and diagnosis [10].
The Sri Lanka Diabetes Cardiovascular Study (SLDCS 2005-06) previously reported that the prevalence of undiagnosed diabetes was 3.9% in the adult population, with undiagnosed diabetes accounting for one-third of all diabetes cases [11]. There are no recently published data regarding the proportion undiagnosed as a percentage of the population. Furthermore, no previous in-country studies of undiagnosed hypertension or hypercholesterolaemia are available. Little is known about the type of people who are more likely to be undiagnosed for CVD risk factors. Potential determinants of undiagnosed CVD risk factors would be useful in identifying target populations for efficient screening. In this study, we aimed to assess the proportion and determinants of undiagnosed diabetes, hypertension, and hypercholesterolaemia among overweight or obese adults aged 35-64 years from the Colombo district in Sri Lanka.
Study population
The present study included cross-sectional data from the baseline assessment of a randomized controlled trial designed to evaluate the effects of an mHealth nutrition and lifestyle intervention on CVD risk reduction in the Colombo district. Individuals aged 35-64 years with a body mass index (BMI) of 25 kg/m2 or higher were included. Currently pregnant or breastfeeding mothers and anyone with a weight loss of at least 5% in the preceding six months were excluded. The estimated sample size was 1200 to test a reduction in 10-year CVD risk among participants by one-third following the intervention.
Participants were selected in two stages from the 15 MOH areas in the Colombo Regional Director of Health Services area. In the first stage, 5 to 6 clusters (Grama Niladhari Divisions) within each MOH area were selected randomly. In the second stage, data collectors visited about 30-35 households located together within each selected cluster. The potential participants were screened for eligibility (as described above), and BMI was calculated by measuring weights and heights during this process. Only one eligible participant was chosen randomly from a given family. Of the 2518 participants screened for eligibility, 1318 were excluded due to noneligibility. A detailed methodology is available elsewhere [12].
Measurements
Basic sociodemographic data (age, sex, ethnicity, education level, employment, income, and residence) were gathered through a questionnaire-based interview. Self-reported physician diagnoses of diabetes, hypertension, and hypercholesterolaemia were verified through medical records or prescriptions. Height and weight were measured using standard anthropometric equipment to the nearest 0.1 cm and 0.1 kg, respectively (Seca 213 and 813, Hamburg, Germany). Blood pressure was measured three times at the right arm in the sitting position after a fifteen-minute rest using a validated digital blood pressure apparatus (Omron HEM 7320, Kyoto, Japan). The average of the second and third measurements was used. Blood samples (6 ml of blood per person) were drawn in a 12-hour fasting state, sent to an accredited laboratory within an hour, and tested using the recommended methods: fasting plasma glucose using the glucose oxidase method (Randox Imola, UK), HbA1c using a high-performance liquid chromatographic assay (Bio-Rad D10, USA), and total cholesterol using the cholesterol oxidase method (Randox Imola, UK).
Definitions of diagnosed and undiagnosed risk factors
Undiagnosed diabetes was defined as elevated levels of either fasting glucose (≥7.0 mmol/L [≥126 mg/dL]) or HbA1c (≥6.5%) measured in the same blood sample in a person without a previous diagnosis of diabetes. Diagnosed diabetes was defined as a self-reported physician diagnosis of diabetes or receiving medication for diabetes. Persons defined as not having diabetes included those with fasting glucose <126 mg/dL and HbA1c <6.5%. The classification was based on the American Diabetes Association (ADA) diagnostic criteria [13].
Undiagnosed hypertension was defined as elevated levels of systolic blood pressure (≥140 mmHg) or diastolic blood pressure (≥90 mmHg), measured in a person without a previous diagnosis of hypertension. Diagnosed hypertension was defined as a self-reported physician diagnosis of raised blood pressure or current use of Original article antihypertensive medication. Persons defined as not having hypertension included those with systolic and diastolic blood pressure <140 mmHg and <90 mmHg respectively.
Undiagnosed hypercholesterolaemia was defined as elevated levels of total cholesterol (≥240 mg/dl) measured in a person without a previous diagnosis of hypercholesterolaemia. Diagnosed hypercholesterolaemia was defined as a self-reported physician diagnosis of raised cholesterol or receiving medication for it. Persons defined as not having hypercholesterolaemia included those with normal total cholesterol (<240 mg/dl).
For self-reported physician diagnosis of any of the conditions, verification was done by review of previous medical records, laboratory reports, or prescriptions.
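The three definitions above translate directly into a three-way classification per participant. A minimal sketch; the field names are placeholders, not the study's codebook.

def classify_diabetes(p):
    # Diagnosed takes precedence, then undiagnosed, then no disease, per the definitions above.
    if p["self_reported_dm"] or p["on_dm_medication"]:
        return "diagnosed"
    if p["fpg_mg_dl"] >= 126 or p["hba1c_pct"] >= 6.5:
        return "undiagnosed"
    return "no diabetes"

def classify_hypertension(p):
    if p["self_reported_htn"] or p["on_htn_medication"]:
        return "diagnosed"
    if p["sbp_mmhg"] >= 140 or p["dbp_mmhg"] >= 90:
        return "undiagnosed"
    return "no hypertension"

def classify_hypercholesterolaemia(p):
    if p["self_reported_chol"] or p["on_lipid_medication"]:
        return "diagnosed"
    if p["total_chol_mg_dl"] >= 240:
        return "undiagnosed"
    return "no hypercholesterolaemia"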
Statistical analyses
Percentages with 95% confidence intervals were calculated for undiagnosed, diagnosed, and not having disease out of the total investigated. The undiagnosed CVD risk conditions (diabetes, hypertension or hypercholesterolaemia) were expressed as a percentage of all participants. This reflects the probability that a person in the defined population has the undiagnosed risk condition. Undiagnosed percentages were disaggregated by biological and sociodemographic variables.
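The undiagnosed percentages with 95% confidence intervals can be obtained with a standard normal-approximation interval for a proportion. A sketch; the counts below are illustrative (166/1200 reproduces the undiagnosed diabetes figure, but the exact counts are not given in the text).

import math

def proportion_ci(k, n, z=1.96):
    # Percentage with a 95% Wald confidence interval.
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return 100 * p, 100 * (p - z * se), 100 * (p + z * se)

pct, lo, hi = proportion_ci(k=166, n=1200)
print(f"{pct:.1f}% (95% CI {lo:.1f}, {hi:.1f})")  # ~13.8% (11.9, 15.8)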
Multiple logistic regression analyses were carried out using undiagnosed status as the dependent variable (undiagnosed = 1; previously diagnosed = 0) while entering all biological and socioeconomic variables as categorical covariates in a single step. The purpose of the regression analysis is to estimate the magnitude of effect and significance of the independent variables for predicting the 'undiagnosed' in contrast to the 'previously diagnosed' after adjusting for confounding variables. Thus, the category of 'not having disease' was excluded from the regression models. Adjusted odds ratios (95% confidence interval) are presented to indicate the magnitude of risk (odds of being undiagnosed), with p < 0.05 as the level of significance. SPSS version 20.0 was used for statistical analysis.
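A hedged sketch of the regression step described above, written in Python with statsmodels rather than SPSS; the column names and the simulated data frame are placeholders only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant with the condition (previously diagnosed or undiagnosed);
# undiagnosed = 1, previously diagnosed = 0. The data below are simulated placeholders.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "undiagnosed": rng.integers(0, 2, 300),
    "sex": rng.choice(["male", "female"], 300),
    "residence": rng.choice(["urban", "rural"], 300),
    "income": rng.choice(["low", "middle", "high"], 300),
    "age_group": rng.choice(["35-44", "45-54", "55-64"], 300),
})

model = smf.logit("undiagnosed ~ C(sex) + C(residence) + C(income) + C(age_group)", data=df).fit()
adjusted_or = np.exp(model.params)   # adjusted odds ratios
ci = np.exp(model.conf_int())        # 95% confidence intervals on the OR scale
print(pd.concat([adjusted_or, ci], axis=1))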
Determinants of undiagnosed risk factors
The results of multiple logistic regression analyses are presented in Tables 2 to 4. The odds ratios indicate the odds of being undiagnosed for a given predictor variable when adjusted for all other variables in the table. People living in rural areas (AOR=2.058) were more likely to be undiagnosed with diabetes (Table 2). Men were almost twice (AOR=2.034) as likely as women to be undiagnosed with hypertension. There was no urban/rural difference in undiagnosed hypertension.
Low-income categories did not show significant odds, although the values were somewhat higher for such groups (Table 3). Low- and middle-income categories were more likely to be undiagnosed with hypercholesterolaemia (AOR ranged from 3.121 to 3.922 for different income groups) in contrast to the highest monthly income group (Table 4). The results also indicated that older groups were less likely to be undiagnosed with diabetes and hypercholesterolaemia, independent of other sociodemographic factors.
Discussion
This study revealed high proportions of undiagnosed CVD risk factors in an overweight/obese subpopulation, amounting to 13.8% (95% CI: 11.9, 15.8) for diabetes, 11.3% (95% CI: 9.5, 13.1) for hypertension and 17.8% (95% CI: 15.6, 19.9) for hypercholesterolaemia. Undiagnosed cases represent the unseen but clinically important burden of risk factors, with significant concurrent metabolic derangements and a long-term impact on health care use [14,15]. Of the three risk conditions, hypercholesterolaemia was the most undiagnosed condition, whereas hypertension was the least undiagnosed condition. This discrepancy reflects the differences in accessibility and affordability of diagnostic facilities.
According to the global burden of disease analysis, hypertension is the leading single risk factor responsible for preventable global CVD burden. Other risk factors include an unhealthy diet, high total cholesterol, air pollution, tobacco use, high body mass index, high fasting plasma glucose, impaired kidney function, and low physical activity in that order [2]. The present analysis focused on three of those important risk factors which need laboratory testing at a health facility.
In our definition of 'undiagnosed proportion', we used the total number investigated as the denominator, as opposed to the total number with the respective disease. This definition is more relevant in community screening programmes since it reflects the probability that a person in the population has the disease without her or his knowledge. Such people would continue without treatment until they are detected at a later stage with irreversible complications such as CVD, stroke, or kidney disease. Thus, the proportion 'undiagnosed' based on a community sample like this one would be a useful indicator to evaluate the effectiveness of screening programmes in detecting CVD risk in the community.
This study revealed an alarmingly high prevalence of diabetes, hypertension, and hypercholesterolaemia, 28.0%, 33.4%, and 31.9%, respectively. Our findings, together with other available studies, confirm that there is a rapidly rising trend in diabetes, especially in urban settings [11,[16][17][18][19]. With this trend, the undiagnosed population may rise even faster. According to our findings, undiagnosed diabetes accounted for almost half of all diabetes cases, compared to one-third reported previously [11]. In contrast, undiagnosed hypertension accounted for one-third of all hypertension cases in the present study compared to half reported previously [17].
We identified some relevant determinants of undiagnosed CVD risk factors through regression analyses while addressing the confounding effects. The key determinants for undiagnosed CVD were: male sex, low or middle income, rural residence, and relatively younger age. Living in a rural area as a predictor for undiagnosed DM could be attributed to a lack of motivation or low accessibility to blood glucose testing services in a rural setting compared to the urban. The costs also play an independent role in undiagnosed diabetes, indicating less affordability. Thus, it is important to ensure that all primary health care facilities in a rural setting will have testing facilities, and people are motivated to attend such services.
Male sex as a predictor for undiagnosed hypertension indicates that men are ignorant about the screening facilities compared to women. Employed status is a major barrier that prevents men from attending these services during working hours. However, there were no differences across the type of employment in the present analysis. Our findings suggest the need to encourage and provide provisions for men in accessing services for blood pressure checking, preferably at the workplace or during nonworking hours at HLC.
Lower household income as a predictor of undiagnosed diabetes and elevated cholesterol indicates an economic reason for poor detection. Concerning cholesterol, the odds are very high even for middle-income groups compared to the highest income group. This raises concerns that testing facilities are not readily available at government health facilities and that people cannot afford the high cost of testing lipid profiles at private-sector laboratories. It is therefore important to provide this service in all state health facilities or to subsidize it for low-income groups.
Our analysis found that older age groups were less likely to be undiagnosed than younger ones. This is likely explained by more frequent health care seeking among the elderly. The age-specific analyses show a rapid increase in obesity, diabetes, and hypertension during the 4th decade of life [20]. Therefore, screening programmes should target younger ages, especially those around 40 years.
A mere increase in coverage of screening at primary health care facilities does not guarantee that it would capture many of those with CVD conditions in the population. If the attendees are of lower risk, then there will not be many positives despite high screening coverage. Therefore, screening programmes should adopt a high-risk screening strategy, where relevant, focusing on the determinants of undiagnosed CVD risk factors. Tailoring screening programmes according to contexts such as risk level to reach those most in need has been recommended in a review on the effectiveness and uptake of screening programmes for coronary heart disease and diabetes globally [21]. In Sri Lanka, there has been a promising initiative by the state health services to enhance primary health care services in the recent past. The cluster care system with defined catchment areas for each PHC aims for high coverage of screening and detection of CVD risk in the population [22]. We suggest targeting populations based on the determinants highlighted by this study to enhance its efficiency. Thereby, detection rates of diabetes mellitus, hypertension, and dyslipidaemia can be improved in cluster catchment areas. Such an approach would contribute to reducing premature deaths due to cardiovascular diseases in the country.
The present study has a few limitations. The study sample is representative; however, representativeness was somewhat affected for two reasons: the selection of houses located close together within each cluster, and the lower availability of males than females during home visits. However, males were approached at their workplace or at home during non-working days. Our study population differs from the general population since we selected individuals with a BMI over 25 kg/m²; this resulted in a sex imbalance with a higher proportion of females in the sample, which was addressed in the regression analysis by adjusting for sex and other covariates. The study did not include access to health care services as a key determinant in the analyses.
Conclusions
The present study revealed high proportions of undiagnosed diabetes, hypertension, and hypercholesterolaemia in an overweight/obese population. Male sex, low or middle income, rural residence, and relatively younger age were the key determinants of undiagnosed CVD risk factors. CVD screening programmes should be tailored to target populations based on these determinants and should provide basic diagnostic facilities in all health centres. The 'proportion undiagnosed' in the population may be a useful indicator to evaluate the effectiveness of such screening programmes.
Linear discriminant analysis as an alternative method to investigate the interaction of a 1064 nm CW laser light with a cold inductively-coupled plasma
In this paper, the interaction of a 1064 nm continuous-wave laser with an inductively-coupled plasma generated in a fluorescent light bulb has been studied both experimentally and theoretically. The absorption coefficients pertaining to the plasma medium were obtained for different power measurements. The results indicate that the absorption coefficients decrease with increasing laser power. The UV-Vis spectra of the mercury plasma were recorded by a charge-coupled spectrometer device at different laser power levels. Linear discriminant analysis (LDA) of the plasma spectra reveals the plasma ion and electron oscillations. Fourier series modeling of the electron oscillation yields a Whistler mode frequency of ωpe = 0.16 kHz with a density of ne = 3.9x10^13 cm^(-3). A 3D representation of the LDA coefficients shows that increasing the laser power leads the plasma species to form Whistler mode structures. The plasma electron temperature (Te) was inferred from the SPARTAN non-local thermal equilibrium (non-LTE) spectral code and was about 0.6 eV in the absence of laser light. However, there was a 0.05 eV increase in electron temperature when the laser power was absorbed by the plasma. Electron temperatures slightly increased with the increase in power level, which in turn resulted in smaller absorption coefficients since the absorption coefficient scales as Te^(-3/2).
INTRODUCTION
Plasmas are tunable media, which can act as absorbers, transmitters or reflectors depending on the frequency range and the application of interest. Therefore, plasmas are widely used in industry for many applications, such as stealth technologies in radar applications and radio communications.
Propagation of radio-frequency (RF) electromagnetic waves in uniform, non-uniform, magnetized or unmagnetized plasmas has been studied extensively both experimentally and theoretically [1]-[4]. It has been shown that by adjusting the plasma parameters, such as magnetic field strength and plasma density, it is possible to obtain high levels of absorption in a broadband range including RF and microwave frequencies [5]-[7]. However, the propagation of low-intensity laser light within cold plasmas has received less attention compared to RF or microwave electromagnetic waves emitted from antennas [4], [8]-[10]. Although a laser is a special form of electromagnetic wave with a very high frequency, and its propagation is best described by the wave model, explaining the absorption mechanism requires the particle model. Hence, the Drude model that explains the interaction of RF waves with a plasma fails to explain the loss mechanism of the plasma when a laser beam is introduced [9]. In this context, one can apply the absorption coefficient scaling explained in [11] to interpret the loss of laser light in the plasma.
To analyze large multivariate datasets, such as plasma spectra, it is often desirable to reduce the dimensionality of the data. Principal component analysis (PCA) and linear discriminant analysis (LDA) are the most common techniques used for this purpose. These techniques reduce the dimension of the data by projecting it onto a space spanned by vectors called principal components. These vectors are obtained successively, and they correspond to the directions along which the data exhibit maximum variance. The main difference between PCA and LDA is that LDA considers class labels while projecting the feature space onto a smaller space, so it outperforms PCA in discriminating the classes in a database. PCA and LDA have been applied in many areas, such as medicine, robotics and remote sensing. They have also found many applications in spectroscopy, especially in unmixing species and decomposing overlapped spectral lines of UV-VIS-NIR spectroscopy to extract spectral fingerprints. In addition, they are used in the spectroscopy of astrophysical and laboratory plasmas to extract plasma parameters and the composition of ion species [12]-[22].
In this study, the interaction of continuous-wave (CW) 1064 nm diode laser light with an inductively-coupled plasma is analyzed both experimentally and theoretically. The experimental part is based on power-meter measurements and spectroscopy. LDA is used as a feature extractor to investigate the effects of laser power on the UV-VIS spectra of the mercury plasma. The theoretical part is based on the modeling of the experimental spectra using a non-local thermal equilibrium (non-LTE) spectral database and on the modeling of the laser interaction with the plasma. The details of the experiment conducted are given in the next chapter, and the third chapter provides a brief summary of LDA. The fourth chapter discusses non-LTE spectral modeling and the interaction of the EM wave with the plasma. The conclusions are given in the final chapter.
II. EXPERIMENT
The uniform magnetized plasma slab has been generated by inductively coupling a 13.56 MHz RF generator (60 Watts) to the fluorescent light bulb. The light bulb has a length of 55 cm and a diameter of 2.2 cm. Figure 1 illustrates a typical spectrum of the plasma, a schematic picture of the experiment and a view of the oscilloscope reading. The spectra have been recorded by a charge-coupled (CCD) spectrometer device of AvaSpec-ULS3648, and the images have been recorded by a CCD camera of Hero4. Continuous-wave (CW) diode laser light at different powers was directed onto the plasma, and power levels were recorded by a Thorlab-PM100D power-meter. Figure 2 illustrates the applied laser powers and the corresponding absorption coefficients obtained using the standard loss-medium model. Figure 2 shows that the low-power laser is attenuated quickly and that the plasma becomes transparent to the laser beam as the laser power increases. Figure 3.a illustrates the experimental spectra of the mercury plasma in the absence and presence of laser light at different powers. Figure 3.b shows a close-up of the spectral lines at 313 nm (5d 9 6s6d-5d 9 6s6p) and 365 nm (5d 10 6s6d-5d 10 6s6p), which are sensitive to the laser light and slightly increase as the laser power increases.
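The paper does not spell out the exact form of its standard loss-medium model, so the sketch below assumes simple exponential (Beer-Lambert) attenuation over the bulb diameter; the power values are placeholders for illustration, not the measured data.

```python
import numpy as np

# Illustrative sketch (not the authors' code): absorption coefficients from
# power-meter readings, assuming exponential attenuation
# P_trans = P_in * exp(-alpha * L) across the plasma column of length L.
L = 0.022  # m, path length taken as the bulb diameter (2.2 cm); an assumption

# Hypothetical incident / transmitted powers in mW (placeholders, not measured data)
p_in = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
p_trans = np.array([0.2, 0.6, 1.1, 1.6, 2.2])

alpha = np.log(p_in / p_trans) / L  # absorption coefficient in 1/m
for pi_, a in zip(p_in, alpha):
    print(f"P_in = {pi_:.1f} mW -> alpha = {a:.1f} 1/m")
```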
III. LINEAR DISCRIMINANT ANALYSIS OF PLASMA SPECTRA
Now we provide a short summary of the mathematical background of LDA; a more detailed explanation can be found in [23]. Linear discriminant analysis (LDA) is a dimension reduction technique used to identify the hidden structures of a large dataset. LDA is applied to data that consist of different classes of similar elements and is used to find vectors, analogous to the principal components of PCA, that discriminate the classes while respecting the similarities among the class members. The main idea in LDA is to project the data onto a smaller subspace, as in PCA, but with good class separability. In contrast, PCA deals with the entire dataset and does not consider the different classes. Therefore, LDA is applied when different classes must be considered.
Let K be the number of classes in a dataset, and let each class consist of M elements of size N×1. Let Γi^j be the ith element of class j for i = 1, 2, …, M and j = 1, 2, …, K, where Γi^j is an N×1 vector.
Then the within-class scatter matrix Sw is

Sw = Σ_{j=1..K} Σ_{i=1..M} (Γi^j - μj)(Γi^j - μj)^t,

where μj is the mean of class j and the superscript t denotes the transpose, and the between-class scatter matrix Sb is

Sb = Σ_{j=1..K} (μj - μ)(μj - μ)^t,

where μ is the mean of all classes. In LDA, the eigenvectors of (Sw)^(-1) Sb provide the vectors that are used as a basis for the new vector space. The eigenvectors corresponding to the three largest eigenvalues are called the first dominant eigenvector (|LD1>), the second (|LD2>) and the third dominant eigenvector (|LD3>). As in PCA, the projection of a vector |v> onto the space spanned by |LD1>, |LD2> and |LD3> is

Proj|v> = w1 |LD1> + w2 |LD2> + w3 |LD3>,

where the coefficients w1, w2 and w3 are called the weights of |LD1>, |LD2> and |LD3> in |v> and are given by w1 = |v>·(|LD1>)^t, w2 = |v>·(|LD2>)^t and w3 = |v>·(|LD3>)^t.

In this work, LDA is applied to the plasma spectra recorded in the presence of laser light at input powers Pinput = 0.0, 0.5, 1.0, 1.5, 2.0 and 2.5 mW, with each power level treated as a separate class (Figure 3). Thirty spectra are considered for each case, so LDA is applied in total to 6x30 = 180 spectra, each of size 2825×1. Each linear discriminant component obtained is also of size 2825×1.
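For concreteness, the short sketch below implements the scatter-matrix construction described above with NumPy; it is an illustrative reimplementation, not the authors' code, and the spectra themselves are not reproduced here.

```python
import numpy as np

# spectra: array of shape (n_samples, n_features); labels: class index per spectrum
# (here one class per laser input power). The feature size of 2825 and 30 spectra
# per power level quoted in the text are not reproduced; any labelled data works.
def lda_components(spectra, labels, n_components=3):
    mu = spectra.mean(axis=0)                    # overall mean
    n_feat = spectra.shape[1]
    Sw = np.zeros((n_feat, n_feat))              # within-class scatter
    Sb = np.zeros((n_feat, n_feat))              # between-class scatter
    for c in np.unique(labels):
        Xc = spectra[labels == c]
        mu_c = Xc.mean(axis=0)
        Sw += (Xc - mu_c).T @ (Xc - mu_c)
        Sb += len(Xc) * np.outer(mu_c - mu, mu_c - mu)
    # eigenvectors of Sw^-1 Sb; a pseudo-inverse is used because Sw can be
    # singular when the number of features exceeds the number of spectra
    eigval, eigvec = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigval.real)[::-1]
    LD = eigvec[:, order[:n_components]].real    # |LD1>, |LD2>, |LD3>
    weights = spectra @ LD                       # projection coefficients w1, w2, w3
    return LD, weights
```

An equivalent and more numerically robust implementation is available as the scikit-learn class sklearn.discriminant_analysis.LinearDiscriminantAnalysis.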
A 3D plot of the LD1, LD2 and LD3 coefficients, which represent the plasma oscillations, is shown in Figure 4.a. As can be seen from the figure, the transitions in the absence of laser light and at low laser powers follow a scattered pattern, and they finally form Whistler mode structures at the laser power of 2.5 mW [24]. The plot also shows that the direction of the species changes with increasing power level. This is expected, as the intensity of the electric field associated with the laser increases with the power. The |LD1> and |LD2> spectra in Figure 4.b reveal the ion and electron oscillations, respectively [25]. Modeling of the electron oscillations using a Fourier series suggests that the electron oscillation frequency is around 166 Hz (0.16 kHz), which is already in the Whistler wave frequency region [26]. The plasma electron density (ne = 3.9x10^13 cm^-3) is obtained using Eq. (5), ne = ε0 me ωpe^2 / e^2, where ωpe is the electron plasma frequency, ε0 is the permittivity of free space, me is the electron mass and e is the elementary charge.
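A minimal numerical sketch of this standard cold-plasma relation is given below; the plasma-frequency value used is an arbitrary placeholder for illustration rather than a value taken from the measurements.

```python
from scipy.constants import epsilon_0, electron_mass, elementary_charge, pi

# Standard cold-plasma relation between electron density and plasma frequency,
# n_e = eps0 * m_e * w_pe^2 / e^2 (SI units); illustrative sketch only.
def electron_density(w_pe):
    """Electron density in m^-3 for an angular plasma frequency w_pe in rad/s."""
    return epsilon_0 * electron_mass * w_pe**2 / elementary_charge**2

w_pe_example = 2 * pi * 1.0e9   # placeholder: plasma frequency of 1 GHz, not a measurement
print(f"n_e = {electron_density(w_pe_example) / 1e6:.2e} cm^-3")
```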
IV. A) Spectroscopic Modeling
A non-thermal plasma approach can be employed for DC and RF discharge plasmas and inductively coupled plasmas (ICP). Non-thermal plasmas are considered weakly ionized, low-temperature plasmas, and the velocity distributions of ions and electrons follow non-Maxwellian distributions [28], [29]. The electron temperature of the mercury plasma has been estimated using the non-LTE SPARTAN code [30]. For a better estimation of the plasma parameters in the presence of scattering of the laser light by the plasma, Doppler-broadened profiles have been used. Figure 6 illustrates the temperature dependence [0.6, 0.65, ..., 0.85 eV] of the non-LTE spectrum at an electron density of 3.9x10^13 cm^-3. Figure 5 shows that the intensity of the lines at 313 and 365 nm slightly increases as the plasma electron temperature increases. A comparison of the experimental and synthetic spectra suggests that the plasma electron temperature is around 0.6 eV in the absence of laser light.
From the cold-plasma dispersion relation k^2 = (ω^2 - ωp^2)/c^2, for lasers ω >> ωp holds in general, and the wave number becomes almost identical to that of free space. This means that no reflection occurs at the boundaries, and the laser can be transmitted through the plasma without any interaction. On the other hand, for ω < ωp, which can hold if the plasma density is increased or the wave frequency is decreased, the squared wave number becomes negative and no transmission into the plasma medium occurs. However, as the laser field propagates in the plasma, the coherent motion of electrons oscillating in the laser field is converted into thermal motion by collisions with the ions in the plasma. This mechanism is called collisional or inverse bremsstrahlung absorption, and it introduces a loss in the system [8].
Indeed, the absorption coefficients obtained from the power measurements reveal that there is significant absorption in the cold plasma. The attenuation of the laser intensity I can be modeled by dI/dx = -κib I, where κib is the inverse bremsstrahlung absorption coefficient. If ρc denotes the mass density of the plasma corresponding to the critical density, the absorption coefficient is shown to scale as κib ∝ (ρ/ρc)^2 Te^(-3/2), where λ = 2πc/ω is the laser wavelength in free space, which sets the critical density. A comparison with this critical density shows that the modeled plasma electron density is far below the critical density.
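As a quick check of the last statement, the sketch below evaluates the standard critical density for a 1064 nm laser and compares it with the electron density quoted in the text; this is an illustrative calculation, not taken from the paper.

```python
from scipy.constants import epsilon_0, electron_mass, elementary_charge, c, pi

# Standard unmagnetized-plasma critical density n_crit = eps0 * m_e * omega^2 / e^2.
wavelength = 1064e-9                               # laser wavelength in free space, m
omega = 2 * pi * c / wavelength                    # laser angular frequency, rad/s
n_crit = epsilon_0 * electron_mass * omega**2 / elementary_charge**2
n_e = 3.9e13 * 1e6                                 # electron density quoted in the text, m^-3
print(f"n_crit = {n_crit / 1e6:.2e} cm^-3, n_e / n_crit = {n_e / n_crit:.1e}")
```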
V. CONCLUSIONS
In this study, CW laser propagation in a cold magnetized plasma has been investigated by means of laser power measurements and plasma emission spectroscopy. The derived absorption coefficients show that the low-power laser is quickly absorbed by the plasma medium. LDA of the experimental spectra reveals that the plasma species form Whistler mode structures as the laser power increases and change their directions according to the laser power. The oscillation frequency and electron density are found to be ωpe = 0.16 kHz and ne = 3.9x10^13 cm^-3, respectively, by Fourier series modeling of the electron oscillations. The non-LTE modeling of the plasma suggests a plasma electron temperature of 0.6 eV in the absence of laser light, and the temperature increases slightly with laser power. The absorption scaling based on the EM-wave dispersion shows that the absorbed power is converted to heat, with the absorption coefficient scaling as Te^(-3/2).
Auxin-BR Interaction Regulates Plant Growth and Development
Plants have developed a high flexibility to alter growth, development, and metabolism to adapt to ever-changing environments. Multiple signaling pathways are involved in these processes, and the molecular pathways that transduce various developmental signals are not linear but are interconnected in a complex network, and even feed back on one another to achieve the final outcome. This review focuses on two important plant hormones, auxin and brassinosteroid (BR), summarizes the most recent progress on how these two hormones regulate plant growth and development in Arabidopsis, and highlights the cross-talk between these two phytohormones.
INTRODUCTION
Unlike animals, which can move to avoid adverse surroundings, sessile plants exhibit a highly developed adaptation to complicated environmental conditions. To achieve this profound adaptability, communication among cells is necessary. Cell-to-cell communication in plants involves robust intracellular signal processing and intricate intercellular signaling networks. To date, at least nine signaling substances, named plant hormones, have been discovered, including auxin, brassinosteroid (BR), cytokinin, gibberellins (GA), ethylene, jasmonic acid (JA), strigolactone (SL), abscisic acid (ABA), and salicylic acid (SA) (Druege et al., 2016; Verma et al., 2016). Genetic and physiological studies have revealed the critical roles and functional mechanisms of these hormones in plant growth and development (Gray, 2004). Based on previous studies, auxin, BR, GA, SL, and cytokinin mainly function during normal plant growth and development, while ABA, ethylene, JA, and SA play important roles in plant growth responses to various biotic and abiotic stresses (Pieterse et al., 2009; Santner et al., 2009; Denance et al., 2013). Some of these hormones have dual roles; for example, ABA also plays important roles in seed development and dormancy (Seo and Koshiba, 2002). Although each hormone plays predominant roles in certain aspects, many hormones have overlapping activities, and the interactions of different hormones control many aspects of development and growth in response to endogenous developmental and exogenous cues.
Auxin and BR are two major classes of growth-promoting hormones. BR is a group of plant-specific steroid hormones that can interact with other phytohormones such as auxin, cytokinin, ethylene, GA, JA, and SA, and regulates a wide range of plant growth and developmental processes including seed germination, cell elongation, vascular differentiation, stomata formation and movement, flowering, and male fertility (Saini et al., 2015). Interestingly, each of these processes is also controlled by auxin, suggesting that these two hormones interact to control plant development. In this review, we outline the signal transduction of auxin and BR based on recent progress and review the crosstalk between auxin- and BR-mediated plant growth and development.
AUXIN SIGNALING PATHWAY
Auxin was first recognized as a plant hormone because of its role in plant tropisms toward gravity or light stimuli. Auxin was later chemically identified as indole-3-acetic acid and shown to play essential roles in a plethora of plant developmental and physiological processes, including embryogenesis, organogenesis, vascular differentiation, root and shoot development, tropic growth, and fruit development (Estelle, 2011).
Using genetic analysis in Arabidopsis, the molecular mechanism underlying auxin signal transduction has been well investigated. TRANSPORT INHIBITOR RESPONSE1 (TIR1) was the first identified nuclear receptor of auxin (Ruegger et al., 1998; Dharmasiri et al., 2005). TIR1 encodes a nuclear F-box protein that acts as a subunit of the SCF E3 ubiquitin ligase complex (Gray et al., 1999, 2002; Hellmann et al., 2003; Quint et al., 2005). In addition to TIR1, there are three additional F-box proteins, namely the AUXIN SIGNALING F BOX PROTEINs (AFBs), which show auxin-binding activity and mediate auxin signaling in Arabidopsis (Badescu and Napier, 2006). The TIR1 receptor can interact with a group of AUX/IAA (auxin/indole-3-acetic acid) proteins. AUX/IAA proteins are negative regulators of auxin signaling, and 29 AUX/IAA members are encoded in the Arabidopsis genome. AUX/IAA proteins interact with a class of transcriptional regulators, the auxin response factors (ARFs), to mediate transcriptional responses to auxin. Under high auxin levels, AUX/IAA proteins interact with TIR1 as co-receptors of auxin and can be ubiquitinated by the SCF^TIR1 complex and thus degraded through the ubiquitin-proteasome pathway (Gray et al., 2001; Lanza et al., 2012). Upon the destruction of the AUX/IAA repressors, the auxin transcriptional regulators ARFs, a family of 23 members, are released from AUX/IAA repression and mediate auxin responses by activating or repressing target genes (Guilfoyle and Hagen, 2007). The different combinations of F-box proteins with AUX/IAAs or ARFs underlie the complexity of auxin signal transduction (Goh et al., 2012; Guilfoyle, 2015; Salehin et al., 2015).
The coordinated action of Aux/IAA transcriptional repressors and ARF transcription factors produces complex gene-regulatory networks, which have also been reported in Physcomitrella (Lavy et al., 2016). Recently, it was found that the CULLIN1 (CUL1) subunit of the SCF complex interacts with TIR1 and thus regulates the stability of TIR1 substrates and auxin signaling (Wang et al., 2016). The interaction between TIR1 and Aux/IAA is also influenced by the spatial conformation of Aux/IAAs, which is controlled by the cyclophilin isomerase LRT2 in rice (Jing et al., 2015). HEAT SHOCK PROTEIN 90 (HSP90) and the co-chaperone SGT1 interact with TIR1 and thus regulate TIR1 stability, which affects the interaction between TIR1 and Aux/IAA and thereby auxin signaling (Wang et al., 2016).
Besides the TIR1-dependent canonical auxin-signaling pathway, auxin has recently been reported to elicit a diverse range of developmental responses through a non-canonical auxin-signaling mechanism. In this non-canonical auxin sensing process, ARF3/ETTIN controls gene expression through interactions with process-specific transcription factors, which greatly enriches auxin-mediated plant developmental diversity (Simonini et al., 2016, 2017).
BR SIGNALING PATHWAY
Brassinosteroid was first discovered in pollen for its ability to promote cell elongation. It was later found that BR plays roles in a wide range of plant growth aspects and in responses to biotic and abiotic stresses. The BR signal transduction pathway has now been largely clarified through a combination of different methods, including molecular genetics, biochemistry, proteomics, and genomics. The cell-surface kinase BRASSINOSTEROID INSENSITIVE1 (BRI1) was identified as the receptor of BR; BR binds to the extracellular domain of BRI1 and activates its kinase activity, thereby switching on a signaling cascade that regulates transcription (Li and Chory, 1997; Wang et al., 2001; Kinoshita et al., 2005; Kim and Wang, 2010; Clouse, 2011; Hothorn et al., 2011; She et al., 2011; Oh et al., 2012). Upon perception of BR, BRI1 interacts with the co-receptor BRI1-ASSOCIATED KINASE 1 (BAK1) and its homologs, the SOMATIC EMBRYOGENESIS RECEPTOR KINASEs (SERKs), to form a more active BR receptor complex (Nam and Li, 2002; Wang et al., 2005; Tang et al., 2008; Gou et al., 2012). Activated BRI1 phosphorylates two substrates, the plasma membrane-anchored receptor-like cytoplasmic kinases BRASSINOSTEROID-SIGNALING KINASE1 (BSK1) and CONSTITUTIVE DIFFERENTIAL GROWTH1 (CDG1) (Tang et al., 2008; Kim et al., 2011), which in turn phosphorylate and activate the PP1-type phosphatase BRI1-SUPPRESSOR1 (BSU1); activated BSU1 then dephosphorylates and inactivates the GSK3-like kinase BRASSINOSTEROID INSENSITIVE2 (BIN2). The kinase activity of BIN2 is also inhibited by the HISTONE DEACETYLASE HDA6, which interacts with BIN2 and deacetylates it at K189. When BR levels are low, BRI1 is kept quiescent by its negative regulators, BRI1 KINASE INHIBITOR 1 (BKI1) and protein phosphatase 2A (PP2A), while BIN2 phosphorylates two homologous BR transcription factors, BRASSINAZOLE RESISTANT1 (BZR1) and BZR2 (also named BES1, for BRI1-EMS-SUPPRESSOR 1) (Wang et al., 2002; Yin et al., 2002; Mora-Garcia et al., 2004; Kim et al., 2009, 2011; Kim and Wang, 2010). When BR levels are high, BIN2 is inactivated, and BZR1 and BZR2 are dephosphorylated by PP2A and move into the nucleus to alter the expression of thousands of BR response genes (He et al., 2005; Yin et al., 2005; Sun et al., 2010; Tang et al., 2011; Yu et al., 2011).
THE SYNERGY BETWEEN BR AND AUXIN SIGNALING
Auxin and BR signaling pathways play diverse roles; however, they also show synergistic and interdependent interactions in a wide range of developmental processes. For example, both auxin and BR signals can promote cell expansion and can interact synergistically to promote hypocotyl elongation (Nemhauser et al., 2004). The response of either pathway in promoting hypocotyl elongation requires the function of the other, demonstrating the interdependence between the BR and auxin pathways (Nemhauser et al., 2004). Auxin increased hypocotyl length in wild-type plants but not in the BR-insensitive mutant bri1-116, and this auxin-insensitive phenotype of bri1-116 was suppressed by the dominant gain-of-function mutant bzr1-1D, indicating that BR or active BZR1 is required for auxin promotion of hypocotyl elongation. It has been found that BR signaling converges with SUPPRESSOR OF PHYTOCHROME B4-3 (SOB3) to control cell elongation and hypocotyl growth through the regulation of auxin-induced SMALL AUXIN UP RNA19 (SAUR19) expression (Favero et al., 2017). On the other hand, the auxin-regulated transcription factor SMALL ORGAN SIZE 1 (SMOS1) has recently been found to control cell expansion through direct interaction with SMOS2/DLT, a member of the GRAS family of transcriptional co-regulators that plays a positive role in BR signaling in rice (Kim et al., 2009; Tong et al., 2012; Hirano et al., 2017). Auxin-related mutants such as iaa3 and arf6/arf8 were less sensitive to BR than the wild type for hypocotyl elongation and abolished the hypersensitivity of bzr1-1D to auxin, suggesting that BR and BZR1 promotion of hypocotyl elongation requires ARF6/8. Genome-wide ChIP-Seq analyses revealed that ARF6 shares a vast number of genomic targets (around 50%) with BZR1 and with the light/temperature-regulated transcription factor PIF4 (Oh et al., 2014). BZR1 and PIF4 interact with ARF6 and cooperatively bind to and activate shared target genes during hypocotyl elongation (Oh et al., 2014), and many of these overlapping target genes encode cell wall proteins involved in cell expansion.
Brassinosteroid and auxin also play important roles in the maintenance of the root apical meristem (RAM) (Durbak et al., 2012). The RAM consists of a small group of rarely dividing cells known as the quiescent center (QC), surrounded by stem cells that give rise to the various root tissue types. The maintenance of the root stem cell population is regulated by WUSCHEL-RELATED HOMEOBOX 5 (WOX5) (Sarkar et al., 2007). WOX5 is restricted to the QC by auxin signaling and facilitates proper expression of the PLT genes (Aida et al., 2004; Ding and Friml, 2010). Mutations in the BR receptor gene BRASSINOSTEROID INSENSITIVE 1 (BRI1) result in aberrant cell cycle progression in the RAM and cause a smaller RAM (Gonzalez-Garcia et al., 2011; Hacham et al., 2011). Auxin is known to stimulate the biosynthesis of BR (Chung et al., 2011), but BR activity does not affect the expression of PIN genes (Hacham et al., 2011). The root tip phenotypes of BR mutants are not the same as those of auxin mutants (Gonzalez-Garcia et al., 2011), indicating that BR acts on the RAM independently of auxin.
Brassinosteroid and auxin signals are also synergistically required for the radial pattern formation of vascular bundles (Ibanes et al., 2009). A combination of mathematical modeling and biological experiments showed that auxin maxima, established by asymmetric polar auxin transport, rather than changes in auxin levels, are important for positioning the vascular bundles. The BR signal was shown to promote the number of cells in the provascular ring, consistent with the auxin maxima. Thus, the establishment of the periodic arrangement of vascular bundles in the shoot is under the coordinated action of these two plant hormones (Ibanes et al., 2009). Both signals are also involved in plant root development, and the interaction of BR and auxin is mediated by BREVIS RADIX (BRX) during this process. BRX is important for the rate-limiting step of BR biosynthesis, and exogenous BR application can rescue the brx mutant defects. Furthermore, auxin-responsive gene expression is globally impaired in the brx mutant, and the expression of BRX is strongly induced by auxin and suppressed by BR, implying that BR biosynthesis and auxin signaling are connected through a feedback loop involving BRX during root development (Mouchel et al., 2006).
Brassinosteroids and auxin also play synergistic roles during lateral root development. BRs mainly function in lateral root primordium (LRP) initiation, while auxin is required for both the initiation and emergence stages of lateral root formation (Casimiro et al., 2001; Bhalerao et al., 2002; Benkova et al., 2003; Bao et al., 2004). During these processes, BRs increase LRP initiation by promoting acropetal auxin transport in the root, not by affecting the endogenous IAA level (Bao et al., 2004). All these reports suggest that the crosstalk between BR and auxin plays an important role in the regulation of plant growth and development.
BR REGULATES AUXIN SIGNALING
Besides the interdependency and cooperation of auxin and BR signals during plant development, BR can regulate the auxin signaling pathway at multiple levels. BZR1 interacts with ARF proteins and directly targets multiple auxin signaling components and genes involved in auxin metabolism, transport, and signaling, including AUX/IAA, PIN, TIR1, and ARF genes (Sun et al., 2010). It was found that Aux/IAA proteins are involved in BR responses, and the iaa7/axr2-1 and iaa17/axr3-3 mutants showed aberrant BR sensitivity and aberrant BR-induced gene expression in an organ-dependent manner (Nakamura et al., 2006). Exogenous brassinolide (BL) treatment can induce the expression of auxin-responsive genes such as IAA5, IAA19, and IAA17, and the expression of these genes is downregulated in the BR biosynthetic mutant de-etiolated2 (det2), indicating that functional BR biosynthesis is partly required for auxin-dependent gene expression (Nakamura et al., 2003; Kim et al., 2006). Additionally, BR also affects auxin flow by regulating the expression of auxin exporters such as PIN4 and PIN7 (Nakamura et al., 2004). During plant gravitropic responses, BRs can enhance the polar accumulation of the auxin exporter PIN2 in the root meristem zone and thus affect the redistribution of auxin from the root tip toward the elongation zones, resulting in different IAA levels on the upper and lower sides of the root to induce gravitropism. During this process, BR-activated ROP2 plays an important role in modulating the functional localization of PIN2 through the regulation of the assembly/reassembly of F-actin. Further studies showed that decreased BL perception and/or concentration can induce CYP79B2, the gene encoding an enzyme converting tryptophan to indole-3-acetaldoxime, and thus affect auxin distribution (Kim et al., 2007).

FIGURE 1 | Model of auxin-brassinosteroid (BR) crosstalk. In Arabidopsis, the BR and auxin signals are perceived by the BRI1 and TIR1 receptors, respectively. BR binds to the extracellular domain of BRI1 and promotes its interaction with the co-receptor BAK1 to form a more active BR receptor complex, which in turn leads to the dephosphorylation and inactivation of BIN2. The inactivation of BIN2 leads to the dephosphorylation of the two homologous BR transcription factors BZR1 and BZR2, which move into the nucleus to activate transcription of genes containing a BRRE or E-box in their promoter regions. BIN2 can also phosphorylate ARF7 and ARF19 to suppress their interaction with AUX/IAAs and thereby enhance the transcriptional activity on their target genes. TIR1 receives the auxin signal and interacts with AUX/IAA proteins as a co-receptor of auxin. The AUX/IAA proteins are then degraded through the ubiquitin-proteasome pathway, and the auxin transcriptional regulators, the auxin response factors (ARFs), are released from AUX/IAA repression and activate transcription of genes with auxin responsive elements (AUXRE) in their regulatory regions. Some ARFs can also bind to the promoter of BRI1 and positively regulate its expression, which then activates BR signaling. Primary crosstalk occurs by activation of genes that contain both BRRE/E-box and AUXRE elements in their promoter regions, allowing both signaling pathways to regulate transcription directly. Secondary crosstalk occurs through expression of genes that are either auxin or BR responsive, but whose activities control the expression of genes that regulate the response and signaling of the other hormone.
In addition, it was found that the BR signal can regulate auxin signaling output through its negative regulator, the GSK3-like kinase BIN2. The auxin response factor ARF2 was identified as a BIN2-interacting protein in a yeast two-hybrid screen, and kinase assays showed that BIN2 can phosphorylate ARF2. The phosphorylation of ARF2 results in the loss of its DNA binding ability and of its repression activity on target genes (Vert et al., 2008). ARF2 is a BZR1 target gene, and its expression is reduced by BR treatment (Sun et al., 2010). Additionally, BIN2 can phosphorylate ARF7 and ARF19 to suppress their interaction with AUX/IAAs and thereby enhance the transcriptional activity on their target genes LATERAL ORGAN BOUNDARIES-DOMAIN16 (LBD16) and LBD29 to regulate lateral root organogenesis (Cho et al., 2014). However, BR plays a minor role during this process, and BIN2 is under the control of the TRACHEARY ELEMENT DIFFERENTIATION INHIBITORY FACTOR (TDIF)-TDIF RECEPTOR (TDR) module (Cho et al., 2014). Together, BR can regulate auxin responses by influencing different auxin signaling components.
AUXIN REGULATES BR SIGNALING
On the other hand, auxin can also regulate the BR signaling pathway in certain respects. The expression of DWARF4, a hydroxylase crucial for BR biosynthesis that controls the endogenous BR level, is auxin dependent. Auxin treatment noticeably stimulates the expression of DWARF4, and auxin can inhibit the binding of BZR1 to the DWARF4 promoter. The induction of DWARF4 by auxin requires the auxin signaling pathway but not the BR signaling pathway (Chung et al., 2011; Yoshimitsu et al., 2011). CPD, which catalyzes the C-3 oxidation step of BR biosynthesis, is activated by BRX, a putative transcription factor acting downstream of auxin signaling (Mouchel et al., 2006). Further studies in rice indicate that exogenous auxin can enhance the transcript levels of the BR receptor gene OsBRI1, suggesting that auxin enhances BR signaling through the regulation of BR receptors (Sakamoto et al., 2013). Furthermore, the promoter of OsBRI1 possesses an upstream auxin-response element (AuxRE) motif that is targeted by ARF transcription factors. Moreover, mutant studies indicate that upon mutation of the AuxRE, the induction of OsBRI1 expression by auxin is abolished, and the expression of OsBRI1 is also downregulated in an arf mutant (Sakamoto et al., 2013). It has been reported that OsARF19 binds to the promoter of OsBRI1 and positively regulates its expression, which then activates BR signaling. BES1 can bind to the promoter of SMALL AUXIN-UP RNA 15 (SAUR15), an early BR response gene in Arabidopsis, and this binding can be enhanced by auxin treatment (Walcher and Nemhauser, 2012). Taken together, auxin can also affect BR responses and BR-regulated plant growth and development.
CONCLUDING REMARKS AND FUTURE PERSPECTIVE
During the past nearly four decades, studies on auxin-BR pathway interactions have attracted increasing interest. The application of physiological, molecular, genetic, and biochemical tools has greatly deepened our understanding of this issue. Based on previous studies, BR and auxin are involved synergistically in multiple plant developmental processes, including hypocotyl elongation, vascular bundle development, root development, and tropisms. The interdependency and cooperation of auxin and BR are complicated and involve numerous molecular processes, including sharing of the same target genes and mutual regulation at multiple levels (Figure 1).
Phosphorylation plays a crucial role in the BR signaling pathway, especially during the perception process: BR is perceived through the BRI1 receptor kinase and the BAK1 co-receptor kinases, and eventually controls BR-regulated gene expression by influencing the activities of downstream transcription factors such as BES1/BZR1 (He et al., 2005; Yin et al., 2005; Sun et al., 2010; Tang et al., 2011; Yu et al., 2011). In contrast, ubiquitination appears essential for auxin signaling. Once auxin binds to the TIR1 receptor, which acts as a ubiquitin E3 ligase, the activated TIR1 E3 ligase ubiquitinates AUX/IAA proteins, leading to the degradation of these repressors, derepression of ARF transcription factors, and eventually changes in auxin-regulated gene expression patterns and growth responses (Gray et al., 1999, 2002; Hellmann et al., 2003; Quint et al., 2005). Since the BIN2 kinase, well known for its function in BR signaling, can phosphorylate and modulate the activities of ARFs such as ARF2 and ARF7 (Vert et al., 2008; Cho et al., 2014), it will be interesting to test whether kinases such as BIN2 that are involved in BR signaling can also interact with other auxin signaling components, such as the TIR1 receptor or the AUX/IAA repressors, and influence TIR1 E3 ligase activity or AUX/IAA protein stability. On the other hand, the role of ubiquitination in BR signaling also needs to be addressed, especially whether the TIR1 E3 ligase can directly interact with BR signaling components and regulate their protein stability.
In addition, using the auxin-responsive DR5 reporter and other auxin reporters, it has been observed that auxin regulates plant growth and development in a tissue- or cell-dependent manner, with diverse transcriptional outputs depending on the cellular and environmental context (Clark et al., 2014; Etchells et al., 2016; Lavy et al., 2016). Although spatiotemporal BR signaling has been shown to control root growth through antagonistic action with auxin (Chaiwanon and Wang, 2015), it is still unknown whether tissue- or cell-level BR signaling, which can be visualized with pBZR1:BZR1-YFP, is also important for controlling other processes besides root development. Furthermore, the generation of a detailed tissue or cellular map of auxin and BR distributions is now possible using fluorescence-activated cell sorting or laser microdissection in combination with high-resolution gene expression analysis. This will eventually make it possible to address whether auxin crosstalks with BR in a tissue- or cell-specific manner.
AUTHOR CONTRIBUTIONS
All authors were involved in the writing of this review manuscript.
Changes in Arctic Ocean Climate Evinced through Analysis of IPY 2007 – 2008 Oceanographic Observations
Full-depth hydrographical surveys conducted in 2007 – 2009 during the International Polar Year (IPY) collaboration provide an accurate snapshot of the Arctic Ocean (AO) hydrography at a time when the Arctic Ocean Oscillation (AOO) index was highest in recent record. We construct pan-Arctic temperature and salinity (T/S) reference states from these data using variational optimal interpolation and discuss some key differences between the 2007 – 2009 state and a similarly constructed climatology from historical 1950 – 1994 Russian archives. These data provide a recent, known reference state for both qualitative and quantitative future AO climate change studies. Furthermore, we present an analysis of sea-surface height (SSH) and upper-layer circulation constructed from the IPY data via 4DVar data assimilation and use them to examine circulation and freshwater source changes visible during IPY.
Introduction
During the International Polar Year (IPY) 2007-2008, the international scientific community completed an intensive physical survey of the Arctic Ocean (AO). Many countries and institutions contributed to this effort, which generated a significant number of in situ hydrographical observations, including stationary full-depth temperature/salinity (T/S) profiles from conductivity-temperature-depth (CTD) instruments, partial-depth profiles of the upper ~700 m along Lagrangian tracks followed by Ice-Tethered Profilers (ITPs) affixed to sea ice, measurements of T/S along the tracks followed by submarine gliders near coastal areas, and a small number of profiles from less accurate expendable CTD (xCTD) and expendable bathythermograph (XBT) instruments.
Arctic T/S distribution is governed largely by water inflow and outflow through the major gateways, the properties of those waters, and regional circulation. AO sources include the warm saline waters advected with the Norwegian current from the North Atlantic. The remote nature of the AO, together with practical difficulties in observation and navigation due to sea ice and sparse infrastructure, makes in situ sampling of the AO expensive and occasional. Satellite monitoring of the ocean surface is possible but inhibited by ice cover and clouds. Unfortunately, the accuracy of the satellite surface observations and their processed (i.e., L2-L4) products is often far from optimal: they may contain large errors due to poor calibration, mask large portions of the AO because of sea ice and thus lack coverage over the central AO, and may contain anachronistic assumptions in their post-processing algorithms [26]. Modeling efforts and other interdisciplinary studies in need of static background ocean data may need to rely on gridded products that are biased toward older AO regimes or on large amounts of surface observations from satellites. Further, climatological studies using older reference states for trend analysis may suffer from amplified trend errors. For example, the Arctic portion of the most recently available Polar Science Center Hydrographic Climatology (PHC 3.0, updated from [27]) is based on historic observations through 1993 [28, dataset g01961].
The concerns listed above motivate this work, which presents a 2007-2009 AO stationary analysis state inferred from algorithmic data conditioning of pan-Arctic hydrographical surveys and other at-depth observations to provide a snapshot of the non-coastal ocean state with an emphasis on the intermediate layers. The result is a dataset of gridded T/S available in NetCDF at http://bit.ly/2M6qsJ9, from which this chapter discusses mapped water masses and their differences relative to those mapped from 1950 to 1994 climatology. We also use 4DVar data assimilation to establish an analysis of major circulation changes during IPY relative to the climatological mean and discuss the evident anomalies of July-December 2008 [29]. The remainder of this chapter is organized as follows: Section 2 discusses the in situ data and the production algorithm for the gridded fields, Section 3 presents an atlas of water mass properties for the IPY and their differences from historical data fields, Section 4 discusses changes in the AO water mass distribution and thermal state evident from the use of IPY data and derived climatology, Section 5 presents analysis of circulation anomalies during the IPY, and Section 6 concludes the chapter.
Observational data and gridding
As part of an IPY initiative, approximately 13,000 CTD/xCTD/XBT profiles along with ITP data were curated into a central database of AO T/S observations from contributors in Japan, Norway, Russia, Canada, the USA, Germany, Poland, Sweden, and China. Stroh et al. [26, Figure 1] show the locations of profiles over the AO, of which only the IPY CTD and ITP data during 2007-2008 are used here. CTD observations during the sea-ice minimum months of August-October account for approximately 40% of all ship-borne profiles, while the wintertime months of November-March account for approximately 30%. ITP apparatuses provide a more temporally uniform stream of profile data for the uppermost ~700 m throughout the year; ITP data were collected and made available by the Ice-Tethered Profiler Program [30,31] based at the Woods Hole Oceanographic Institution (http://www.whoi.edu/itp).
The Data-Interpolating Variational Analysis tool (DIVA, [32]) is a robust finite-element-based optimization tool for gridding large 2D, 3D, and 4D datasets and includes error estimates of the analysis. This freely available program, developed by the GeoHydrodynamics and Environment Research group, was applied to the observational data described above to construct static full-depth fields on an equal-area polar-centered grid with 50 km resolution. Interpolation to 51 vertical levels occurs level-wise within DIVA, and an internally applied stability algorithm ensures that the analyses remain hydrodynamically stable with respect to density throughout the gridding. Bathymetric masking was inferred from the International Bathymetric Chart of the Arctic Ocean [33], and regions with depth less than 200 m are masked. The correlation length scales for observations correspond to three grid cells, with a signal-to-noise ratio of 10%. The same procedure applied to historical observations collected during 1950-1994 (privately archived at the Arctic and Antarctic Research Institute of Russia) generates a mean climate dataset for that period, which is used to contrast with the gridded IPY data.
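DIVA's finite-element formulation is not reproduced here, but the underlying idea of gridding scattered observations with a prescribed correlation length and a noise-to-signal parameter can be illustrated with a small optimal-interpolation sketch; the Gaussian covariance, the 150 km length scale (three 50 km grid cells) and the synthetic observations below are assumptions for illustration only, not the DIVA algorithm itself.

```python
import numpy as np

# Illustrative optimal-interpolation sketch (not the DIVA tool): grid a scalar
# field from scattered observations with a Gaussian covariance of length L and a
# noise-to-signal variance ratio eps2.
def oi_grid(obs_xy, obs_val, grid_xy, L=150e3, eps2=0.1):
    d_oo = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    d_go = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    C_oo = np.exp(-(d_oo / L) ** 2)          # observation-observation covariance
    C_go = np.exp(-(d_go / L) ** 2)          # grid-observation covariance
    anomalies = obs_val - obs_val.mean()
    weights = np.linalg.solve(C_oo + eps2 * np.eye(len(obs_val)), anomalies)
    return obs_val.mean() + C_go @ weights

# Hypothetical example: 100 scattered observations on a 500 km x 500 km domain
rng = np.random.default_rng(0)
obs_xy = rng.uniform(0, 5e5, size=(100, 2))
obs_val = np.sin(obs_xy[:, 0] / 2e5) + 0.05 * rng.standard_normal(100)
gx, gy = np.meshgrid(np.linspace(0, 5e5, 11), np.linspace(0, 5e5, 11))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
field = oi_grid(obs_xy, obs_val, grid_xy).reshape(gx.shape)
```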
Water mass distribution maps
From the gridded T/S analyses for the 1950-1994 and IPY periods, water mass properties reveal qualitative differences between them. The use of density-related properties to distinguish water masses is less certain than chemical analysis [22,34]. Scarcity of widespread chemical tracer surveys precludes such an approach here, and analysis based on the more common T/S data is adopted. This work chooses to map Atlantic water (AW) and summer Pacific water (SPW) for both their simplicity of definition and importance in the freshwater (FW) and thermal budget of the AO. Characteristics used to identify AW and SPW are adapted from [25] and [35,36], respectively, and are described below.
The AW distinguishes an intermediate layer of warm water of Atlantic origin that has entered the Arctic Basin through deep coastal channels and bathymetric steering. Over-basin AW typically has S ≥ 34.8 PSU with T ≥ 0°C despite heat loss along the Eurasian shelf. SPW denotes relatively fresh waters with 31 PSU ≤ S ≤ 33 PSU and T ≥ -1.4°C entering the AO through the Bering Strait which have cooled after residence on the shallow Chukchi Shelf and include substantial meteoric FW [21,35]. These low-density waters form a subsurface layer in the western Arctic typically at depths between 50 and 100 m and often include a local temperature maximum [37,38].
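The threshold criteria above can be expressed directly in code; the sketch below is a simplified illustration and omits the additional layer-continuity and subsurface-maximum conditions applied later in the text.

```python
# Simplified water-mass labelling from the T/S thresholds described above.
def classify_water_mass(T, S):
    """Label a single (T [deg C], S [PSU]) pair as 'AW', 'SPW' or 'other'."""
    if S >= 34.8 and T >= 0.0:
        return "AW"          # Atlantic water
    if 31.0 <= S <= 33.0 and T >= -1.4:
        return "SPW"         # summer Pacific water
    return "other"

print(classify_water_mass(1.2, 34.9))   # -> AW
print(classify_water_mass(-1.0, 32.0))  # -> SPW
```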
In Figures 1-11, left-side plots show the identified field for the IPY dataset, while the right-side plot shows the corresponding anomaly field relative to the Russian 1950-1994 archive. We refer to each such pair singularly as a figure and distinguish between the field and its anomaly in context. Figure 1 maps the 34.8 PSU isohaline depth. Figure 2 shows the integrated FW content (FWC), in meters of freshwater, with respect to 34.8 PSU.
Figures 3-7 plot the AW core depth, core temperature, heat content, lower boundary depth, and upper boundary depth, respectively. AW here is defined as waters composing a continuous vertical region of positive temperature bounded by 0°C isotherms, which define herein the lower and upper AW boundary depths. The AW core depth and temperature are adopted to be the depth and value of the temperature maximum within the AW layer. Total heat content is calculated as the vertical integral of specific heat with respect to -1.8°C between the AW boundaries.
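The two integrals used here, FWC relative to 34.8 PSU and heat content relative to -1.8°C, can be written compactly for a single profile. The sketch below uses standard formulations with an assumed constant density and specific heat and an entirely hypothetical profile, so the constants and values are illustrative rather than those of the paper.

```python
import numpy as np

RHO = 1027.0    # seawater density, kg/m^3 (assumed constant here)
CP = 3985.0     # specific heat of seawater, J/(kg K) (approximate)

def freshwater_content(S, z, S_ref=34.8):
    """FWC in meters: integral of (S_ref - S)/S_ref over depths where S < S_ref."""
    frac = np.clip((S_ref - S) / S_ref, 0.0, None)
    return np.trapz(frac, z)

def heat_content(T, z, T_ref=-1.8):
    """Heat content in J/m^2: integral of rho*cp*(T - T_ref) over the layer."""
    return np.trapz(RHO * CP * (T - T_ref), z)

# Hypothetical profile on a 10 m grid down to 400 m (illustrative values only)
z = np.arange(0.0, 400.0, 10.0)
S = 32.0 + 3.0 * z / 400.0
T = -1.5 + 2.5 * np.exp(-((z - 275.0) / 80.0) ** 2)
print(freshwater_content(S, z), heat_content(T, z))
```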
Insufficient deep data near the Canadian Archipelago preclude a resolution of the AW lower boundary and consequently of the heat content in that area. Figures 8-11 show calculated fields for summer Pacific water, which exists only on the Pacific side of the Arctic. SPW is defined by a local temperature maximum occurring below the surface mixed layer within the salinity range 30.5-33.0 PSU [35]. Upper and lower SPW boundary depths are determined by the condition T ≥ -1.4°C together with the salinity restriction to that range. Figure 8 maps the depth of the maximum temperature found in SPW, and Figure 9 identifies these maxima. Figures 10 and 11 show the lower and upper boundary depths of SPW.
Changes inferred from T/S observations
In general, the vertical and spatial patterns of hydrographic parameters in the AO and adjacent North Atlantic had undergone considerable changes by IPY, although the large-scale distributions of the water masses align with the historic climatology. Readers unfamiliar with AO geography and its bathymetric features are encouraged to follow this discussion with an atlas, e.g., https://geology.com/articles/arctic-ocean-features/.
Atlantic waters
Elevated pan-Arctic heat content due to the extraordinary heat transported to the AO from the North Atlantic is a significant change evident during the IPY period. Advection of relatively warmer AW resulted in the formation of an anomalous hydrographic state over the entire deep Arctic Basin [17,38]. The temperatures within the core of the AW were observed to be 0.3-1.0°C higher than climatic values; the mean changes are ~0.65°C over the Eurasian Basin and ~0.25°C over the Canada and Makarov basins.
Of further note is the warm tongue of AW that appears to be topographically steered by the Lomonosov Ridge; Figure 4 shows a clear 0.5°C anomalous increase in core temperature extending from the Laptev Sea toward the Greenland Shelf. This feature resides at a depth of about 275 m, ~75 m surfaceward of the historic AW core depth per Figure 3. Over the Makarov Basin, the AW expanded ~50 m deeper into the column [39], while the AW core depth has moved 100-150 m surfaceward with an associated 0.5-1.0 GJ/m² increase in heat content. Similar changes, including the AW moving surfaceward and retaining more heat at depth, are present throughout most of the AO, indicating a stronger potential influence on ice-related processes [40].
By 2007, the intermediate AW layer had deepened and thickened in the Pacific sector [23], but the changes are heterogeneous over the central and Eurasian basins. In particular, the net AW layer appears to have thinned over the Amundsen Basin, which is likely a mass-balance response to the thickened layer observed on the Pacific side of the Lomonosov Ridge. Within the western side of Fram Strait, the AW layer has thickened by roughly 70 m, moving 20 m closer to the surface without a change in the core depth. Figure 2 shows another of the most drastic changes in the Arctic: the change in freshwater distribution. As a proxy for the AW-PW upper-ocean front in the central Arctic, the strong FW anomaly gradient illustrates the shift of the front from the Lomonosov Ridge to the Alpha-Mendeleev Ridge (AMR) system [22 and references therein, 41]. Further, the boundary marking the extent of present SPW in Figures 9-11 tracks very directly the local bathymetric minimum of the AMR. Estimates shortly after IPY show that FWC in the Eurasian domain decreased by nearly one-quarter, while that in the American domain increased by the same percentage [16,42]. The influx of PW through Bering Strait was near a record high in 2007, importing an anomalously large FW volume and thermal input [20].
Pacific water
The loss of FWC near the pole and in the western sector likely results from the cyclonic AOO regime moving more AW toward the eastern Amerasian Basin. Simultaneously, the wind-forced anticyclonic Beaufort Gyre (BG) stored fresher SPW in the Pacific sector, accumulating an average of 4 m of FWC on the Pacific side of the front. Much of this FW had been in place prior to 2007; the IPY FWC in the Beaufort Sea is nearly identical to that found for 2006 [21]. Carmack et al. also find that sea-ice freeze/melt accounts for a net loss of FWC in the Beaufort region, with riverine water and PW contributing roughly half of the regional FW [21]. Ge et al. find that the mean annual Yukon River outflow, the most significant meteoric source included in SPW, increased by 8% between 1977 and 2006 [43].
An increasing trend in Eurasian catchment outflow is also evident [14] and is related to changes in permafrost [44] and temporal changes in continental hydrological cycles [45]. Increased Siberian runoff suggests that the apparent decreases in FW volumes adjacent to the Laptev and East Siberian seas arise from changes in seasonal ice and the regional dominance of AW rather than from reduced input; however, these source changes alone do not explain the FW accumulation observed in the Beaufort Sea during IPY and beyond [46]. Data-conditioned modeling of the 2008 circulation [29] suggests that this accumulation may be supported by transport from the Lincoln Sea [47] and/or regions north of Greenland.
Changes in the organization of water masses have also affected the outflow of AO through Fram Strait, located between Greenland and Svalbard. The Transpolar Drift mode arising from the cyclonic AOO regime impedes PW from reaching the continental shelf north of Greenland. Consequently PW may only exit the AO via the Canadian Archipelago [19], which has been shown to be a significant but variable route for AO export [5,48,49].
Directly observed from ITP data
The gridded IPY data do not resolve a surface layer. Sea-surface temperature and salinity (SST and SSS, respectively) are temporally variable as they depend on the strongly seasonal Arctic diurnal effects. Additionally, SST/S in the AO depends seasonally on sea-ice-related processes such as meltwater strata, brine rejection, rapid wintertime heat loss through sea-ice leads, etc. Models and SST satellite data products often assume a surface freezing temperature (FT) of -1.8°C, which corresponds to a background salinity of ~32.86 PSU. At that T/S state, the FT sensitivity is ~0.1°C per -0.01 PSU, so that inaccuracies in the background salinity amplify errors in the associated freezing temperature. Figure 12 illustrates the inaccuracies of these assumptions by examining the relationship between near-surface temperatures observed by 2006-2009 ITPs and the FT calculated from the associated salinity. Observations are primarily over the Pacific sector and central Arctic. The thick diagonal line shows exact correspondence between observed T and FT. Colors indicate binned values of T + 1.8°C (T - FT) in winter (summer) in the left (right) plot, with dashed lines demarcating percentiles as labeled. In the winter months of November-April, all observations correspond to the freezing point, but only about 25% of measurements have T ≤ -1.64°C, the freezing temperature associated with ~30 PSU. In the summer months of May-October, temperatures clearly depart from freezing, but only ~25% of measurements differ from freezing by more than 0.05°C. In both summer and winter, the vertical structure of the plots demonstrates the inaccuracy of the -1.8°C at ~32.86 PSU assumption; surface waters in the western Arctic have salinities in the range 30-32 PSU.
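The salinity dependence of the freezing temperature quoted above can be reproduced with the EOS-80 (Millero) freezing-point formula; the sketch below is an assumed formulation rather than necessarily the one used for Figure 12, though it returns roughly -1.80°C at 32.86 PSU and -1.64°C at 30 PSU, consistent with the values in the text.

```python
# EOS-80 (Millero) freezing point of seawater; p_dbar = 0 for surface waters.
def freezing_temperature(S, p_dbar=0.0):
    """Freezing point (deg C) of seawater at salinity S (PSU) and pressure p (dbar)."""
    return (-0.0575 * S
            + 1.710523e-3 * S ** 1.5
            - 2.154996e-4 * S ** 2
            - 7.53e-4 * p_dbar)

for S in (30.0, 32.0, 32.86, 34.0):
    print(f"S = {S:5.2f} PSU -> FT = {freezing_temperature(S):+.3f} deg C")
```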
Quasi-stationary "climatological"circulation
Freshwater changes throughout the Arctic relate to changes in geostrophic current distributions. Over the basins, the strengthened FW gradient between the Pacific and Atlantic sectors led to very significant sea-surface height (SSH) changes, which in turn give rise to changes in the geostrophic currents [16]. The strengthening of geostrophic currents in the Pacific sector is suspected to be among the factors behind the reduction of multiyear ice over the Canadian Basin [50]. Other factors include the deepening of AW over the Canada Basin since 2004, enhancing the strength of the BG and its accumulation of freshwater [23]. A recent study demonstrates that atmospheric modulation of geostrophic boundary currents and SSH quantifiably relates to the strength of the Northern Hemisphere annular mode [51].
To analyze the quantitative difference of the mean circulation during the IPY period with respect to the climatological circulation, the IPY dataset was conditioned using the four-dimensional variational (4DVar) data assimilation (DA) approach [52,53] in two ways. To find a quasi-stationary solution, the process uses 4DVar optimization of an ocean model forced by the corresponding heat, salt, and momentum fluxes inferred from the NCEP/NCAR reanalysis and the regional Pan-Arctic Ice-Ocean Modeling and Assimilation System (PIOMAS). In the nonstationary reconstructions, all available T/S data were averaged within model grid bins, and these bin-averaged observations were assimilated through the conventional 4DVar DA approach using a semi-implicit ocean model (SIOM) with a resolution of 65 km; a framework of the algorithm is described in [52,54].
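For readers unfamiliar with the method, the strong-constraint 4DVar optimization referenced here can be summarized in its generic textbook form (this is not the specific SIOM/PIOMAS implementation): the assimilation seeks the initial state x_0 that minimizes

J(x_0) = 1/2 (x_0 − x_b)^T B^{-1} (x_0 − x_b) + 1/2 Σ_{i=0..N} [H_i(x_i) − y_i]^T R_i^{-1} [H_i(x_i) − y_i], subject to x_{i+1} = M_i(x_i),

where x_b is the background (first-guess) state, B and R_i are the background and observation error covariances, y_i are the bin-averaged T/S observations at time i, H_i maps the model state to observation space, and M_i is the ocean model advancing the state between observation times.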
The resulting quasi-stationary SSH maps and near-surface currents are shown in Figure 13. A comparison indicates an essential reorganization of the circulation in the AO evident during IPY. The most notable feature is the strong intensification and shift of the BG toward Alaska. IPY SSH patterns are characterized by a pronounced BG dome which attains a central height greater than 50 cm, while the typical climatological SSH is only about 40 cm. This difference results from intensified westward flow along the Alaskan and Chukchi Sea continental slope. There is also a clear re-centering of the BG resulting from the shift of the Transpolar Drift axis toward the Canada Basin; this agrees well with the recent analysis of freshwater content and circulation conducted by [55].
Anomalous 2008 circulation
The application of the more advanced 4DVar reconstruction of the nonstationary circulation for July-December 2008 indicates a stronger circulation than that directly detected from the in situ IPY dataset.
The SIOM-4DVar reconstructed bimonthly evolution of SSH and circulation at 250 m during July-December 2008 is shown in Figure 14. The SSH patterns are characterized by a pronounced BG dome which gets slightly stronger in November-December (Figure 14, right) attaining a 40 cm central elevation. Compared to the relatively smooth and symmetric SSH derived through optimal interpolation of observations (e.g., [16]), the DA-reconstructed SSH reveals finer features consistent with the observations. During September-October, the SSH pattern is characterized by a secondary SSH maximum at 74°N 140°W, which tends to erode by the end of the year but still persists as a tongue spreading toward Alaska along 140°W. This feature is seen in the AVISO anomalies averaged over the second half of 2008 [29].
Another prominent feature is a zonally spreading trough in the region between 72°N and 80°N from Severnaya Zemlya to the Bering Strait. The emergence of this depression could be one of the causes of the intensification of the Bering Strait transport due to the increase of the large-scale sea level difference between the Chukchi and Bering Seas. This is supported by the analysis of Woodgate et al. [20]. Local SSH minima develop in the area north of the Bering Strait (upper panels in Figure 14), the heights of which are estimated to be −11, −10, and −6 cm, respectively. This is consistent with the seasonal decline of the Bering Strait inflow from 1.1 Sv in July-August to 0.5 Sv in November-December 2008 [20].
The effect of the abovementioned SSH decrease on the transport pattern in the region of the AW inflow is of particular interest. During July-August 2008, the negative SSH anomaly is closely attached to the coastline, creating a positive cross-shelf SSH gradient and a westward geostrophic transport of −2.9 Sv along the shelf break (lower-left panels in Figure 14). The effect becomes less visible by the end of the year as the negative SSH anomaly detaches from the continental slope; the total transport relaxes to eastward values of 0.8 and 1.0 Sv, respectively, for the September-October and November-December periods. This identified flow reversal agrees well with moored velocity observations from the Nansen and Amundsen Basins Observational System (NABOS, http://nabos.iarc.uaf.edu/data), which are indicated by red arrows in Figure 14 but were not used to obtain the optimized solution.
The DA results immediately provide us with quantitative FWC estimates and permit identification of the regional FW. In particular, the total FWC within the volume bounded by [70.25, 80]°N × [140, 170]°W above 400 m depth was found to be about 20,700 km³, which is slightly less (~5%) than that found in the literature [update from 46]. A possible source of this difference is the smaller area of integration for the 4DVar solution and the offshore displacement of the BG observed in 2008.
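The FWC values quoted here follow the usual liquid freshwater definition, integrated per water column and then over area. The sketch below is a minimal illustration, not the authors' code: it assumes the conventional Arctic reference salinity of 34.8 (not stated in this excerpt) and uses a hypothetical salinity profile.

```python
import numpy as np

def freshwater_content_m(depth_m, salinity_psu, s_ref=34.8, max_depth_m=400.0):
    """Liquid freshwater content (meters of fresh water) relative to s_ref,
    integrated from the surface down to max_depth_m for one profile."""
    z = np.asarray(depth_m, dtype=float)
    s = np.asarray(salinity_psu, dtype=float)
    mask = z <= max_depth_m
    frac = np.clip((s_ref - s[mask]) / s_ref, 0.0, None)  # count only water fresher than s_ref
    return np.trapz(frac, z[mask])

# Hypothetical Canada Basin-like profile: fresher near the surface, saltier at depth
z = np.array([0, 25, 50, 100, 200, 300, 400], dtype=float)
s = np.array([28.5, 30.0, 31.5, 33.0, 34.2, 34.6, 34.8])
print(f"FWC = {freshwater_content_m(z, s):.1f} m")  # multiply by area to obtain km^3 totals
```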
To assess the origin of the FW accumulated in the BG, FW transports across the eastern, southern, and western boundaries were estimated to be 0.08, −0.005, and −0.075 Sv, respectively (positive oriented gyre-ward); the boundaries are shown in the top-right panel of Figure 14, where the eastern boundary abuts the figure boundary and the southern one intersects the Alaska coast. The calculated transports suggest that the observed changes in the BG FWC were generally caused by FW transport changes confined to the latitude band of 72-77°N at the eastern boundary of the model domain.
Summary
This work introduces an IPY snapshot ocean climatology and discusses freshwater and thermal changes in two principal water masses in order to establish, in perspective, subsurface changes over the central AO as well as the consequences of surface freshening. It focuses only on the ocean and necessarily neglects the continental shelves (where important water mass-forming processes occur [56] but enhanced mixing impedes analysis based on T/S), any resolvable changes in Arctic Bottom Water, and a direct analysis of sea ice, which requires an extensive discussion of the atmosphere and its variability [57]; these topics are beyond the scope of this presentation.
Changes in the AO are not monotonic, as they result from cyclic and quasi-cyclic changes in various superimposed, feedback-entangled geophysical components in addition to trends in their background values. Changes may arrive in short bursts or "pulses" and may undergo periods of relaxation toward long-term means. The intensive pan-Arctic IPY survey provides evidence of an AO undergoing significant changes and departure from the longer-term mean of the late twentieth century, responding to variations in source content (from the Atlantic, Pacific, and continental waters) and the resulting changes in freshwater and heat distribution; atmospheric forcing, induced SSH gradients, and their associated geostrophic responses; and the relative volume and means of exit of the various water masses present in the AO. During IPY, many of these components appeared to be establishing new records. In the decade following, 2011-2012 set records for associated components such as river outflow, Bering Strait inflow, sea-ice minimum, and Arctic cyclone strength, some of which may have since been surpassed by those of 2016-2017. From this perspective, conditions of the AO during IPY 2007-2008 show that the region is in transition toward a "new normal," and the gridded IPY dataset provides a useful reference state for establishing how far that transition has progressed.
A model-DA system was also applied to quantify the observed differences in the T/S distribution on both climatological and seasonal temporal scales. The reconstructed mean 2007-2009 AO circulation clearly identified large-scale shifts in the BG and in the axis of the Transpolar Drift. Both results are consistent with other qualitative analyses. Analysis of the reconstructed nonstationary circulation for July-December 2008 allowed quantification of several anomalous circulation features, including: a. A reversal of the total transport in the AW inflow region of −2.9 Sv in July-August, which later relaxed to an eastward transport of 0.8-1.0 Sv. This reversal of an along-slope current is confirmed by independent observations from NABOS moorings.
b. Formation of a prominent SSH trough extending from the eastern Laptev Sea to the Bering Strait. A similar and even stronger structure was obtained in the PIOMAS solution and is indirectly evidenced by two NABOS moorings located on the continental slope of the Laptev Sea.
c. The aforementioned SSH depression near the Chukchi Sea tends to increase the large-scale sea level difference between the Bering Sea and the AO. This contributes to the 25% increase in the Bering Strait transport at that time and agrees with the regional force balance suggesting an increased role of the pressure head between the Bering Sea and AO during 2007-2011 [20].
Effects of feeding schedule on growth, production and economics of pangasiid catfish (Pangasius hypophthalmus) and silver carp (Hypophthalmichthys molitrix) polyculture
An experiment was carried out to evaluate the effects of feeding schedule on growth, production and economics of pangasiid catfish (Pangasius hypophthalmus) and silver carp (Hypophthalmichthys molitrix) polyculture in nine earthen ponds for a period of 135 days. There were three treatments (T), each with three replications. Species composition (1:1) and stocking density (25,000 fish/ha) were the same in all treatments. A commercially available pelleted feed was given only to pangasiid catfish at the same feeding rate in all treatments, but the feeding frequency was different. The feeding rate was 10%, 8%, 7%, 6%, 5% and 4%, adjusted consecutively after each fortnightly sampling, and 3% for the last 4 weeks of the study period. Feeding frequency was once a day in T1, two times a day in T2 and three times a day in T3. The average weight gain of pangasiid catfish and silver carp in T3 (376.69 g and 81.02 g) was significantly higher (P<0.05) than those of T2 (330.25 g and 58.35 g) and T1 (261.76 g and 42.89 g). The survival rate was 95.2, 96.0 and 96.8% for pangasiid catfish and 83.2, 85.2 and 86.0% for silver carp in T1, T2 and T3, respectively. The net production of fish in T3 (5,430.64 kg/ha) was significantly higher (P<0.05) than those of T2 (4,584.70 kg/ha) and T1 (3,562.89 kg/ha). The significantly highest net return (Tk. 68,533.54/ha with a benefit-cost ratio of 1.36) was achieved in T3, followed by T2 (Tk. 40,080.56/ha with a benefit-cost ratio of 1.22) and T1 (Tk. 13,786.67/ha with a benefit-cost ratio of 1.08). The present research findings suggest that an increase of feeding frequency has a positive effect on the growth and production of pangasiid catfish and silver carp.
Introduction
In aquaculture, diet is often considered the single largest cost item and can represent over 50% of the operating cost in intensive aquaculture (El-Sayed, 1999). The general approach adopted to reduce diet cost has been to develop low-cost diets by replacing the costly fish meal components with cheaper plant protein sources (Jackson et al., 1982; Hossain and Jauncey, 1989; Webster et al., 1992). Apart from developing low-cost diets, different feeding management strategies and/or good husbandry methods can also lead to significant savings in diet cost. Information on the optimum feeding regimes/schedules of cultured fish is important for achieving efficient production and ensuring the best FCR (feed conversion ratio) and weight gain of the cultured organism. An important step in the feeding strategy is to determine the optimal frequency of feeding.
The pangasiid catfish (Pangasius hypophthalmus) is one of the fast-growing and popular fish species in some Asian countries. This exotic species gained much popularity in Bangladesh because of its rapid growth, easy culture system, high disease resistance and tolerance to a wide range of environmental change (Bardach et al., 1972; Stickney, 1979; Sarkar et al., 2007). Pangasiid catfish are cultured completely on supplemental feed. Commercial culture and production of pangasiid catfish have recently expanded dramatically, but profit is decreasing gradually due to a number of reasons, of which increased feed cost and improper management practices are important. Due to the use of large quantities of supplemental feed, pond water receives a high quantity of inorganic nutrients from the microbial decomposition of unused fish feed and metabolic wastes. These nutrients favour excessive production of phytoplankton in pond water that can support an additional number of planktivorous fishes without further feed or management cost. But in practice, they remain unutilized or under-utilized and form algal blooms, which in turn cause many unexpected problems such as declines in dissolved oxygen, reduced fish growth and off-flavour in pangasiid catfish flesh. Such problems in the monoculture of pangasiid catfish could be avoided by using a polyculture approach.
Silver carp is generally considered a planktivorous fish (Cremer and Smitherman, 1980; Spataru et al., 1983). This planktivorous species could be cultured with pangasiid catfish for the management of phytoplankton. Polyculture of pangasiid catfish and planktivorous silver carp can improve water quality through the grazing down of phytoplankton by the latter species and enhance the growth of the former species. It also helps to gain an extra crop of silver carp without incurring additional cost, making aquaculture more profitable for farmers (Sarkar et al., 2006; 2008). Though polyculture techniques of pangasiid catfish with carps are developing in Bangladesh, very little literature is available on the quantification of feeding regimes for pangasiid catfish (P. hypophthalmus) in either monoculture or polyculture systems. Therefore, the present study was undertaken with a view to developing a standard feeding schedule for pangasiid catfish in co-culture of pangasiid catfish and silver carp, maximizing fish growth and minimizing feed wastage and the cost of fish production.
Experimental site and pond facilities
The experiment was carried out for a period of 135 days, from 24 July to 5 December, in nine equal-sized (each 200 m², 1.6 m deep), rain-fed, rectangular experimental ponds situated in the Field Laboratory, Faculty of Fisheries, Bangladesh Agricultural University (BAU), Mymensingh, Bangladesh.
Pond preparation
The ponds were drained out completely and left exposed to sunlight for about 15 days. All ponds were treated with lime at the rate of 1 kg/decimal 14 days before stocking of fish fingerlings.
Collection of experimental fish
All the fingerlings, with mean initial length and weight of 130.73 cm and 9.87 g for pangasiid catfish and 13.15 cm and 19.25 g for silver carp, respectively, were procured from a local fry trader.
Experimental design and feeding
The experiment was carried out with three treatments, each with three replications. The fingerlings of pangasiid catfish and silver carp were stocked with the same species composition (1:1) and stocking density (25,000 fishes/hectare) in all treatments. The feeding rate was the same in all treatments but the frequency was different. The feed was supplied as a percentage of the body weight of pangasiid catfish only, and it was 10%, 8%, 7%, 6%, 5% and 4%, consecutively adjusted after each fortnightly sampling, and 3% for the last 4 weeks of the study period. Feeding frequency was once a day (morning at 9:00 a.m.) in T1, two times a day (morning at 9:00 a.m. and afternoon at 5:00 p.m.) in T2 and three times a day (morning at 9:00 a.m., midday at 1:00 p.m. and afternoon at 5:00 p.m.) in T3. A commercial pelleted feed produced by "Quality Fish Feed Ltd." having 28% protein and 7% lipid was used. The feed was thrown over the pond water by hand, at a particular site of the pond, regularly. About 20% of the total fish were sampled fortnightly with a seine net to monitor fish growth and to adjust feeding rates. The weight of fish during sampling was measured using a portable digital balance.
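As an illustration of this feeding scheme, the sketch below converts a sampled mean body weight into daily and per-meal rations. The pond population of 250 catfish follows from the stated stocking (12,500 catfish/ha on a 0.02-ha pond); the sampled mean weight is a hypothetical value, not a figure from the study.

```python
def daily_ration_kg(mean_weight_g, n_catfish, rate, meals_per_day):
    """Daily feed for one pond, based only on pangasiid catfish biomass,
    split equally among the day's meals."""
    total_kg = mean_weight_g * n_catfish * rate / 1000.0
    return total_kg, total_kg / meals_per_day

# Example: 250 catfish per 200 m^2 pond, sampled mean weight 120 g, 6% feeding rate, T3 frequency
total, per_meal = daily_ration_kg(mean_weight_g=120.0, n_catfish=250, rate=0.06, meals_per_day=3)
print(f"{total:.2f} kg/day, {per_meal:.2f} kg per meal")
```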
Water quality parameters
The water quality parameters such as temperature, dissolved oxygen (DO) and pH were recorded fortnightly. The temperature and dissolved oxygen of the ponds were determined with a DO meter (YSI, model 58, USA). Water pH was recorded with a pH meter (Jenway, model 3020, UK). Chlorophyll-a (µg/l) was measured at monthly intervals and was determined using a spectrophotometer after acetone extraction (Greenberg et al., 1992).
Statistical analysis
For the statistical analysis of the data, single-factor analysis of variance (ANOVA) of the mean values of growth, survival and yield was performed using a randomized block design (RBD). The mean values were compared by the DMRT test (Gomez and Gomez, 1984). Significance was assigned at the 0.05 level.
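A minimal sketch of the single-factor ANOVA step is given below; the replicate values are hypothetical placeholders, and the DMRT post-hoc comparison used in the paper is not implemented (it is not available in scipy), so only the overall treatment effect is tested.

```python
from scipy import stats

# Hypothetical net production (kg/ha) for the three replicate ponds of each treatment
t1 = [3495.0, 3560.2, 3633.5]
t2 = [4510.8, 4601.3, 4642.0]
t3 = [5388.2, 5440.9, 5462.8]

f_stat, p_value = stats.f_oneway(t1, t2, t3)  # single-factor ANOVA across treatments
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # p < 0.05 -> treatment means differ
```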
Economic analysis
An economic analysis was conducted to estimate the net profit from the different treatments. The analysis was based on local market prices for harvested fish and all other items. The cost of leasing ponds was not included in the total cost. The net return was calculated by deducting the gross cost from the gross return per hectare. The benefit-cost ratio was also calculated as the ratio of net benefit to gross cost.
Water quality parameters
Water quality parameters (mean ± SD) measured throughout the experimental period are presented in Table 1.
Growth and production performances
The growth performances of pangasiid catfish and silver carp in terms of initial weight, final weight, weight gain, specific growth rate, feed conversion ratio, survival rate and total production are shown in Table 2. Mean weight gains of pangasiid catfish and silver carp were 261.76 g and 42.89 g in T1, 330.25 g and 58.35 g in T2, and 376.69 g and 81.02 g in T3, respectively. There was a significant variation in mean weight gain of both species (P<0.05) among the treatments (Table 2). SGR (% per day) values of pangasiid catfish and silver carp were 2.46 and 0.86 in T1, 2.62 and 1.03 in T2, and 2.71 and 1.23 in T3, and there was a significant difference (P<0.05) among the treatments.
The average feed conversion ratio (for pangasiid catfish) was 2.40, 2.22 and 2.07 in T1, T2 and T3, respectively. The feed conversion ratio (FCR) was calculated only from the total net production of pangasiid catfish and the feed used in each treatment. The mean survival rate was 95.2, 96.0 and 96.8% for pangasiid catfish and 83.2, 85.2 and 86.0% for silver carp in T1, T2 and T3, respectively. The survival rate of pangasiid catfish did not show any significant variation among the treatments, but for silver carp the survival rate in T1 was significantly lower (P<0.05) than in T2 and T3.
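For reference, the growth indices reported here follow their standard definitions; the sketch below reproduces the T3 catfish SGR from the initial weight and weight gain quoted in the text, while the FCR function is left generic because total feed quantities are not given in this excerpt.

```python
import math

def sgr_percent_per_day(w_initial_g, w_final_g, days):
    """Specific growth rate, % per day."""
    return 100.0 * (math.log(w_final_g) - math.log(w_initial_g)) / days

def fcr(feed_given_kg, net_production_kg):
    """Feed conversion ratio: dry feed supplied per unit weight of fish produced."""
    return feed_given_kg / net_production_kg

# T3 pangasiid catfish: 9.87 g stocked, 376.69 g mean gain over 135 days
print(round(sgr_percent_per_day(9.87, 9.87 + 376.69, 135), 2))  # ~2.71 %/day, as in Table 2
```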
The gross production of fish in terms of kg/ha/135 days was higher (5,757.01 kg) in T3, followed by T2 (4,908.16 kg) and T1 (3,880.54 kg), and they were significantly (P<0.05) different (Table 2). A simple economic analysis of the culture operation showed that T3, with feeding three times a day, generated the maximum benefit, with a net return of Tk. 68,534/ha/135 days and a benefit-cost ratio (BCR) of 1.36, followed by Tk. 40,081/ha/135 days with a BCR value of 1.22 in T2, and Tk. 13,787/ha/135 days with a BCR value of 1.08 in T1 (Table 3). Feeding frequency had a significant effect on food consumption, growth and production of pangasiid catfish. By the end of the experiment, fish fed at higher feeding frequencies had gained significantly more weight and added more length than fish fed at lower feeding frequencies. Fish fed at higher frequencies consumed larger quantities of food than those fed less often, but individual meal size was smaller. This is consistent with studies conducted on other species (Ishiwata, 1969), where fish fed fewer meals per day tend to eat more per meal. Fish accomplish this by increasing stomach volume and becoming hyperphagic (Grayton and Beamish, 1977; Jobling, 1982; Ruohonen and Grove, 1996). However, although fish fed at higher frequencies consumed larger quantities of food, when the interval between meals is short, the food passes through the digestive tract more quickly, resulting in less effective digestion (Liu and Liao, 1999). Thus, determining the optimal feeding frequency is important.
The water quality parameters measured in the ponds of the different treatments were found to be more or less similar, and all of them were within the acceptable range for fish culture. The water temperature ranged from 21.4ºC to 33.8ºC because the study was conducted from July to December, covering part of the summer and part of the winter season. The mean values of pH were 7.43, 7.35 and 7.24 in ponds of T1, T2 and T3, respectively, which indicate good productive conditions. The neutral to slightly alkaline pH in the culture ponds was possibly due to local soil conditions and natural waters. Moreover, the initial lime treatment during pond preparation possibly helped in maintaining the carbon buffer system in the pond water.
The mean DO contents ranged from 3.54 to 5.57 mg/l. The fluctuation of DO values might be due to alterations in the rate of photosynthesis in the ponds and oxygen consumption by fish and other decomposer microorganisms. The lowest average value of DO was found in T1 (4.28 mg/l). It might be due to the higher organic content from the higher amount of unutilized feed. The highest average value of dissolved oxygen (4.55 mg/l) was found in T3, possibly due to less organic decomposition of supplied feed.
Chlorophyll-a concentration indicates the biological productivity of a water body. The mean chlorophyll-a values recorded were 196.92, 181.45 and 157.52 µg/l in T1, T2 and T3, respectively. The highest chlorophyll-a value was found in T1, which might be due to a higher concentration of phytoplankton resulting from the availability of nutrients from unused food particles and fish metabolic wastes. Khatrai (1984) also found a positive relationship between phytoplankton growth and chlorophyll-a content. The lower mean value was observed in T3, which might be due to the lower abundance of microalgae. Better utilization of supplied feed in this treatment might have resulted in a lower nutrient supply for microalgal growth in comparison to the other treatments.
The weight gain and % weight gain (376.69 g and 3816.48%) of pangasiid catfish in T3 were significantly higher than in T2 (330.25 g and 3346.03%) and T1 (261.76 g and 2652.04%). Again, for silver carp, T3 also showed significantly higher mean weight gain (81.02 g) and % weight gain (420.90%), followed by T2 (58.35 g and 303.13%) and T1 (42.89 g and 222.80%), respectively. The better weight gain attained in T3 may be due to proper utilization of both natural and supplementary feed by the fishes and also due to the good water quality conditions maintained through the proper feeding frequency of three times a day. It has been reported that the wastage of food particles enhances the nutrient concentration of the water, which helps to increase plankton and deteriorates water quality (Lin and Diana, 1995; Lin et al., 1990). The weight gains of silver carp obtained in the present study were more or less similar to the findings of Azad et al. (2004).
The fortnightly average specific growth rate (SGR %/day) of the fishes was found to increase more or less rapidly at the beginning of the experiment and then to slow down after October until the end of the experiment. The relatively slower SGR toward the end of the experiment might be due to the reduction of water temperature with the seasonal change. Mean SGR (% per day) values of pangasiid catfish in the present study were 2.46, 2.62 and 2.71 in T1, T2 and T3, respectively. These values are more or less similar to the findings of Azad et al. (2004), but slightly lower than the findings of Hung et al. (1998) and Azimuddin et al. (1999). This might be due to the lower temperature during the last two months of the study period. In the present study, the mean SGR (% per day) values of silver carp were 0.86, 1.03 and 1.23% in T1, T2 and T3, respectively, which were significantly (P<0.05) different from each other.
The mean survival rate of pangasiid catfish in the different treatments varied between 95.2 and 96.8%, which is more or less similar to the findings of Ali et al. (2005) but higher than that reported by Azad et al. (2004). The higher survival rate of pangasiid catfish might be due to the relatively larger size of the fingerlings (9.87 g) stocked. The mean survival rate of silver carp varied between 83.2 and 86.0%.
Pelleted feed was given only to pangasiid catfish. So, FCR was calculated only for pangasiid catfish.
The average FCR values were 2.40, 2.22 and 2.07 in T1, T2 and T3, respectively, which were significantly (P<0.05) different from each other. Kader et al. (2003) found an FCR value of 1.54 with commercial feed (Quality Fish Feed Ltd.) in pangasiid catfish monoculture. Azimuddin et al. (1999) found FCR values of 1.73 to 2.04 for P. sutchi. In the present study, the FCR of P. hypophthalmus is more or less satisfactory. Pathmasothy and Jin (1987) found similar to higher FCR values (2.27 to 3.66) using a comparatively high-protein diet (32% protein).
After 135 days, the gross production of fish in terms of kg/ha/135 days was higher (5,757.01 kg) in T3, followed by T2 (4,908.16 kg) and T1 (3,880.54 kg). The reason behind the highest production in T3 might be the proper utilization of the supplied feed. Kader et al. (2003) obtained a production of pangasiid catfish of 3,062.01 kg/ha in 70 days with commercial feed (Quality Fish Feed Ltd.), which was slightly lower than in the present study. The better production obtained in the present study might be due to the prolonged culture period and the productivity of the ponds. Ahmed et al. (1996) obtained a production of 339.39 kg/ha for P. pangasius. Their production is very low compared to the present one.
The economic analysis revealed that T3 could generate a maximum profit of Tk. 68,534/ha/135 days with a BCR value of 1.36, which was significantly higher than T2 (Tk. 40,081/ha/135 days with a BCR value of 1.22) and T1 (Tk. 13,787/ha/135 days with a BCR value of 1.08). Kader et al. (2003) obtained a net profit of Tk. 31,004/ha/70 days from monoculture of pangasiid catfish fed commercial feed (Quality Fish Feed Ltd.). Their profit is lower than the profit of this study. The higher profit found in this study might be due to the higher individual weight of fish resulting from rearing for a prolonged time compared to that study.
From the findings of the present study, it may be concluded that feeding three times a day is better than two times or one time a day for getting higher growth of fish, net income and optimum utilization of the given feed.
Table 2 . Growth performance, production (mean ± SD) and survival of pangasiid catfish and silver carp in different treatments
a Mean values with different superscripts in the same row were significantly different (P<0.05)
Outcomes and Effectiveness of Bilateral Percutaneous Transluminal Renal Artery Stenting in Patients with Critical Bilateral Renal Artery Stenosis
Background: The aim of this study was to assess the effects of percutaneous bilateral renal artery stenting in patients with atherosclerotic renal artery stenosis and in-hospital and 4 month outcome of the procedure, focusing on the changes in renal function and blood pressure.
Introduction
Percutaneous transluminal renal interventions, including angioplasty and stenting, are important methods for the treatment of atherosclerotic renal artery stenoses [1]. However, as the procedure became broadly applied, conflicting results emerged from the beginning. Although some patients showed major benefit after percutaneous renal interventions, others experienced further deterioration of renal function [2]. Today it is acknowledged that atherosclerotic renal artery stenosis is a complex clinical condition that ranges from asymptomatic disease to high-grade bilateral disease complicated by progressive renal failure, recurrent pulmonary edema, and severe hypertension. Current indications for renal interventions have been partly guided by the Angioplasty and STenting for Renal Artery Lesions (ASTRAL) study [3], which has shown that this therapy makes little impact upon major outcomes. Also, Chrysochou et al. [4] showed that subgroups including acute flash pulmonary edema and acute kidney injury might benefit from intervention. But these conditions are a non-evidence-based indication (Class I, Level of Evidence B) according to the American College of Cardiology/American Heart Association guidelines [5]. The aim of this study was to assess the effects of percutaneous bilateral renal artery stenting in patients with atherosclerotic renal artery stenosis and the in-hospital and 4-month outcome of the procedure, focusing on the changes in renal function and blood pressure.
Patients
Five consecutive patients (mean age: 64.8 ± 9.7 years, 1 woman) with bilateral renal artery stenoses underwent percutaneous interventions. Three patients were admitted with chest pain and drug-resistant hypertension [6], and two patients were admitted with hypertension and pulmonary edema. All patients were diagnosed with luminal narrowing ≥ 70% by renal Doppler ultrasonography or computed tomography before intervention. All subjects gave their consent for inclusion in the study. The investigation conforms with the principles outlined in the Declaration of Helsinki. All the patients were treated at the time of examination with a minimum of 3 antihypertensive drugs such as angiotensin receptor blockers, angiotensin-converting-enzyme inhibitors, nitrates, diuretics, alpha-blockers, beta-blockers and calcium channel blockers. Blood pressure was measured using a mercury sphygmomanometer with a cuff appropriate to the arm circumference (Korotkoff phase I for systolic blood pressure and V for diastolic blood pressure). Blood pressure measurements were performed twice for each subject and their mean was used for statistical analysis. Estimated glomerular filtration rate was calculated using the Cockcroft-Gault formula [7]. The patients were followed for 4 months. Baseline patient demographics and procedural data are presented in Table 1.
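For reference, the Cockcroft-Gault estimate of creatinine clearance used here can be written as a short function; the patient values in the example are hypothetical and are not taken from Table 1.

```python
def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    crcl = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return 0.85 * crcl if female else crcl

# Hypothetical patient: 65 years, 78 kg, serum creatinine 1.4 mg/dL
print(round(cockcroft_gault_crcl(65, 78, 1.4), 1))  # ~58 mL/min
```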
Percutaneous technique
Femoral arterial puncture was performed in all patients, and all procedures were performed through a 6-8 F sheath introducer, with a renal artery guiding catheter introduced via a 0.36-mm or 0.46-mm guide wire. The guide wire was passed through the stenosis and a balloon-expandable stent was placed via the guide wire (Fig. 1A-C). For treatment of ostial stenoses, the stent was positioned so that 1 to 2 mm protruded into the aortic lumen, ensuring complete coverage of the aortic plaque. An intervention was considered technically successful if the residual stenosis was < 30%. Antiplatelet therapy was started at least 1 day before intervention and routinely consisted of 75 mg of clopidogrel daily for 3 months and 100 mg of aspirin indefinitely. Immediately before the intervention, we administered a bolus dose of 5000 IU of heparin.
Statistical analysis
Statistics were obtained using the ready-to-use programme SPSS version 8.0. All values were expressed as mean ± standard deviation. The results for systolic blood pressure, diastolic blood pressure and glomerular filtration rate were assessed by the non-parametric Friedman test. The number of drugs used for hypertension was assessed by the non-parametric Wilcoxon signed ranks test. The significance level was set at a value of p < 0.05.
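A minimal sketch of the two non-parametric tests is shown below using scipy rather than SPSS; all numbers are hypothetical placeholders for the five patients, not the study data.

```python
from scipy import stats

# Hypothetical systolic BP (mm Hg) for 5 patients: baseline, post-procedure, 4-month follow-up
baseline = [182, 175, 190, 168, 178]
post_pro = [152, 148, 160, 145, 150]
month_4 = [144, 140, 151, 138, 142]

chi2, p = stats.friedmanchisquare(baseline, post_pro, month_4)  # repeated-measures comparison
print(f"Friedman chi2 = {chi2:.2f}, p = {p:.3f}")

# Number of antihypertensive drugs at baseline vs follow-up (paired comparison)
drugs_before = [4, 3, 5, 3, 4]
drugs_after = [2, 2, 3, 2, 3]
stat, p_w = stats.wilcoxon(drugs_before, drugs_after)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_w:.3f}")
```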
Results
A total of 5 patients with bilateral atherosclerotic renal artery stenosis underwent percutaneous transluminal renal angioplasty and 10 stents were placed (Table 1). There were no complications during the interventional procedures. The preprocedural, postprocedural and 4-month findings are presented in Table 1. Although systolic and diastolic blood pressures were significantly decreased at the follow-up period, glomerular filtration rates were not significantly changed as compared with baseline data (p = 0.009, p = 0.008, p = 1.00, respectively). Also, the number of oral antihypertensive medications was significantly decreased at the follow-up period (p = 0.03).
Discussion
Atherosclerotic renal artery stenosis may be associated with renovascular hypertension and increased cardiovascular morbidity and mortality [2]. Patients with bilateral critical atherosclerotic renal artery stenosis are at increased risk for hypertension and acute pulmonary edema [4]. Resistant hypertension, defined as blood pressure that remains above goal in spite of the concurrent use of three antihypertensive agents of different classes [6], and acute pulmonary edema are accepted as one of the few indications for consideration of renal artery revascularization [4,8,9]. However, this is a Class I, Level of Evidence B indication according to the American College of Cardiology/American Heart Association guidelines [5]. Atherosclerotic renal artery stenoses are usually located in the renal artery ostium, and many are extensions of calcified aortic plaque. Although these tight and calcified lesions tend to rebound to their original shape with balloon angioplasty alone [10], we used balloon-expandable stents, which provide the additional force needed to permanently disrupt the lesions, leading to a longer-lasting result [10]. This study showed that percutaneous transluminal bilateral renal artery stenting significantly reduced both systolic and diastolic blood pressure in the postprocedural period compared to baseline. Also, it demonstrated a significant improvement in blood pressure control and a reduction in the number of oral antihypertensive medications at the follow-up period, as in other studies [4,10].
Although serum creatinine levels may be altered by some factors such as body muscle mass and age, we used the glomerular filtration rate estimated by the Cockcroft-Gault formula [7], which is a more sensitive marker of renal function. Revascularization of the renal artery with stenting to preserve renal function is based on the assumption that ischemia contributes to renal insufficiency and that correction of the stenosis and restoration of renal perfusion will stabilize, as in our study, or improve renal function [10]. Considering that dialysis reduces life expectancy and quality of life, any stabilization of renal function should be regarded as a beneficial outcome.
Limitations of the study
These conclusions may not extend to the general population; therefore, the results of this study will need confirmation in larger studies.
Conclusions
In conclusion, the findings of our study indicate that bilateral renal artery stenting provides beneficial outcomes such as stabilization of renal function, significant improvement in blood pressure control and a reduction in the number of oral antihypertensive medications at follow-up.
Figure
Figure 1A-C. The guide wire was passed through the stenosis and a balloon-expandable stent was placed via the guide wire.
Table 1 .
Baseline patient demographics and procedural data.
renal artery stenosis and percutaneous renal intervention
18-qubit entanglement with photon's three degrees of freedom
A central theme in quantum information science is to coherently control an increasing number of quantum particles as well as their internal and external degrees of freedom (DoFs), meanwhile maintaining a high level of coherence. The ability to create and verify multiparticle entanglement with individual control and measurement of each qubit serves as an important benchmark for quantum technologies. To this end, genuine multipartite entanglement has been reported up to 14 trapped ions, 10 photons, and 10 superconducting qubits. Here, we experimentally demonstrate an 18-qubit Greenberger-Horne-Zeilinger (GHZ) entanglement by simultaneously exploiting three different DoFs of six photons, including their paths, polarization, and orbital angular momentum (OAM). We develop high-stability interferometers for reversible quantum logic operations between the photon's different DoFs with precision and efficiencies close to unity, enabling simultaneous readout of 262,144 outcome combinations of the 18-qubit state. A state fidelity of 0.708(16) is measured, confirming the genuine entanglement of all the 18 qubits.
Quantum information is encoded by different states in certain DoFs of a physical system. For example, the quantum information of a single photon can be encoded not only in its polarization 6,7 , but also in its time 8 , OAM 9 , and spatial modes 10 . The simultaneous entanglement with multiple DoFs-known as hyper-entanglement 11 -offers an efficient route to increasing the number of entangled qubits 12,13 , and enabled enhanced violations of local realism 14,15 , quantum super-dense coding 16 , simplified quantum logic gates 17 , and teleportation of multiple DoFs of a single photon 18 .
Previous experiments have demonstrated hyper-entangled states of two photons in the form of product states of Bell states 12 , and fully entangled GHZ states with up to five photons and two DoFs 13 . However, it remained a technological challenge for the multi-photon experiments to go beyond two DoFs. To this end, we develop methods that allow not only scalable creations of hyper-entanglement of multiple photons with three DoFs, but also reversible conversion and simultaneous measurement of multiple DoFs with near-unity precision and efficiency. With these new techniques, we are able to demonstrate and confirm 18-qubit maximal entanglement in GHZ state-the largest Schrödinger cat-like state so far-by manipulating the polarization, spatial modes, and OAM of six photons.
We start by producing polarization-entangled six-photon GHZ states 19,20. Three pairs of entangled photons are generated by beamlike type-II spontaneous parametric down-conversion (see Fig. 1a), where the signal-idler photon pairs are emitted as two separate circular beams, favorable for being collected into single-mode fiber 2. The geometry of the down-conversion crystal, where a half-wave plate is sandwiched between two 2-mm-thick β-barium borates, ensures that the obtained photon pairs are polarization entangled 2 in the form of a two-photon Bell state, where H (V) denotes the horizontal (vertical) polarization. The fidelities of the three pairs of entangled photons are measured to be on average 0.98 ± 0.01.
Next, we combine photons 2 and 4 on a polarization beam splitter 21 (PBS), and combine one of its outputs with photon 6 on another PBS (see Fig. 1a). By doing so, starting from the six-photon polarization-entangled GHZ state (Fig. 1a), we arrive at a hyper-entangled 18-qubit GHZ state spanning the polarization, spatial-mode, and OAM DoFs of the six photons. First, the spatial-mode qubit is measured using a closed or open Mach-Zehnder interferometer, with or without the second 50/50 beam splitter (see Fig. 1c). The open configuration is used to measure the (|0⟩, |1⟩) basis directly. The closed configuration, together with a small-angle prism that adjusts the phase between the two paths, is used to measure superposition bases; the two outputs (labelled as yellow circles in Fig. 1c) correspond to the two orthogonal projection outcomes. The polarization qubit is then analyzed: after the PBS, the transmitted or reflected spatial modes correspond to two orthogonal projection outcomes. The last step is the readout of the OAM, which, unlike the polarization, was known to be difficult to measure with high efficiency and two-channel output simultaneously 9,12,16,23,24. Our method here is to deterministically map the OAM qubit onto the polarization through two consecutive CNOT gates between the two DoFs that together form a quantum swap gate (see the inset of Fig. 1e). In the first CNOT gate, the OAM acts as the control qubit and the polarization acts as the target qubit; in our experiment, this is achieved using an interferometer which consists of two double-PBSs and two Dove prisms as shown in Fig. 1e (see Methods). In the second CNOT gate, the polarization acts as the control qubit and the OAM is the target qubit. After these two gates, the OAM information is coherently transferred to the polarization, which can be conveniently and efficiently read out. Thus, for each single photon carrying three DoFs, the measurement setup can give eight possible outcomes.
Finally, the OAM mode R is converted back to the fundamental Gaussian mode (denoted as G) for efficient coupling into single-mode fibers. This task, together with the second CNOT gate, is completed using a single element called a q-plate 25. It is an inhomogeneous anisotropic medium that couples the polarization with the OAM (see Methods). We develop an integrated design for the OAM-to-polarization converter (see Fig. 1h) such that the 24 interferometers used in our work achieve an average visibility of 99.6%, remaining stable for over 72 hours (see Fig. 1g). Using this method, the overall efficiency of the OAM-to-polarization converter is 92%.
The complete experimental setup for creating and measuring the 18-qubit GHZ state is shown in Fig. S1, which includes 30 single-photon interferometers in total. The outputs are detected by 48 single-photon detectors and a complete set of 262,144 combinations can be simultaneously recorded by a coincidence counting system.
To demonstrate the full entanglement among the three DoFs of the N-qubit GHZ state, we first simultaneously measure all the qubits in the basis (|0⟩ ± e^{iθ}|1⟩)/√2, which is sensitive to the phase change of all the N entangled qubits. We test such behavior with a single-photon polarization state (Fig. 2a), and compare it to three-DoF-encoded GHZ states with one photon (Fig. 2b), four photons (Fig. 2c), and six photons (Fig. 2d), where the phase θ ramps continuously from 0 to π. The data are fitted to sinusoidal fringes that show an N-times increase in the oscillatory frequencies for the N-qubit GHZ states, highlighting the potential of the hyper-entangled states for super-resolving phase measurements 26.
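The N-fold speed-up of the fringes follows directly from the structure of the GHZ state. The short numerical check below is our own illustration (not the authors' analysis code): it verifies that the joint expectation value equals cos(Nθ) for small N; the same scaling applies to the 18-qubit case, which is too large to simulate this way but follows the identical analytic form.

```python
import numpy as np

def ghz_fringe(theta, n_qubits):
    """Expectation value when every qubit of an N-qubit GHZ state is measured
    in the (|0> +/- e^{i*theta}|1>)/sqrt(2) basis."""
    a = np.array([[0.0, np.exp(-1j * theta)],
                  [np.exp(1j * theta), 0.0]])  # single-qubit observable for that basis
    obs = np.array([[1.0 + 0.0j]])
    for _ in range(n_qubits):
        obs = np.kron(obs, a)
    ghz = np.zeros(2 ** n_qubits, dtype=complex)
    ghz[0] = ghz[-1] = 1.0 / np.sqrt(2.0)        # (|0...0> + |1...1>)/sqrt(2)
    return float(np.real(ghz.conj() @ obs @ ghz))

thetas = np.linspace(0.0, np.pi, 13)
for n in (1, 3, 6):  # kept small for memory; the fringe frequency grows as N
    fringe = np.array([ghz_fringe(t, n) for t in thetas])
    print(n, np.allclose(fringe, np.cos(n * thetas)))  # True: fringes follow cos(N*theta)
```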
The coherence of the 18-qubit GHZ state, which is defined by the off-diagonal element of its density matrix and reflects the coherent superposition between the
The sacral autonomic outflow is parasympathetic: Langley got it right
A recent developmental study of gene expression by Espinosa-Medina, Brunet and colleagues sparked controversy by asserting a revised nomenclature for divisions of the autonomic motor system. Should we re-classify the sacral autonomic outflow as sympathetic, as now suggested, or does it rightly belong to the parasympathetic system, as defined by Langley nearly 100 years ago? Arguments for rejecting Espinosa-Medina, Brunet et al.’s scheme subsequently appeared in e-letters and brief reviews. A more recent commentary in this journal by Brunet and colleagues responded to these criticisms by labeling Langley’s scheme as a historical myth perpetuated by ignorance. In reaction to this heated exchange, I now examine both sides to the controversy, together with purported errors by the pioneers in the field. I then explain, once more, why the sacral outflow should remain known as parasympathetic, and outline suggestions for future experimentation to advance the understanding of cellular identity in the autonomic motor system.
Introduction
The principles drawn from autonomic neuroscience are essential for understanding human physiology and the pathology of disease [20]. Due to the fundamental importance of autonomic physiology and pharmacology, the proposed renaming of the sacral autonomic outflow as sympathetic [10] would, if correct, force significant change in the concepts that drive biomedical research and clinical practice. For this reason alone, the new scheme requires serious consideration. In addition, one cannot ignore that the disruptive new idea came from the highly respected laboratory of Professor John-Francois Brunet in Paris and that it appeared in a prestigious, highly cited journal. The Brunet group's radical conjecture triggered a wave of negative commentaries that enumerated strong factual arguments for rejecting change by maintaining the current definition for the sacral parasympathetic motor system [19,[24][25][26][27]33]. A subsequent commentary in this journal [9] dug further into the history of the field in order to support the claim of scientific myth making. In my view, the dispute does not arise from conflicting experimental observations or deception, but instead from different interpretations of the evidence and from different readings of the field's history.
Core arguments for the Brunet conjecture
The key arguments that support the Brunet conjecture rest on the differential expression of transcription factors in mice, primarily on embryonic day 13.5 [10]. By comparing presumptive preganglionic neurons in the dorsal motor nucleus of the vagus (nX) with presumptive preganglionic neurons in the thoracic and sacral spinal cord, they detected Phox2a, Tbx20, Tbx2 and Tbx3 in nX, but not in the thoracic or sacral cord. Conversely, Foxp1 was detected in the thoracic and sacral cord, but not in nX. Based on the co-segregation of thoracic and sacral traits from cranial traits, it was concluded that thoracic and sacral pools of preganglionic neurons share a common sympathetic identity. Alternatively, another interpretation is possible. Thoracic and sacral preganglionic neurons may simply share a common spinal identity [33]. The same approach was also applied to assess the phenotypic identity of ganglionic neurons using transcription factors [10]. Hmx2 and Hmx3 were detected in several cranial parasympathetic ganglia, but not in lumbar paravertebral sympathetic ganglia or in the pelvic ganglion. Conversely, the sympathetic ganglia and pelvic ganglion selectively express Islet1, Gata3 and Hand1. In addition, genetic deletion of Olig2, which disrupts formation of cranial parasympathetic ganglia [7,8], failed to alter the size of the pelvic ganglion or the formation of sympathetic ganglia. Although one can interpret these observations in support of the Brunet conjecture [10], it is also possible that they simply reflect the segmental origin of different ganglia, rather than phenotypic neuronal identity as sympathetic or parasympathetic.
Mythology versus insight
The Brunet group's recent commentary [9] re-asserts that neurons in the sacral autonomic outflow should be defined by their "genetic make-up and dependencies" rather than by widely used classical criteria [20]. Pointing to the early history of the field, the commentary argues that the original designation of the sacral autonomic outflow occurred "in a remarkably cursory fashion, with a brief justification in 1899". It goes on to point out errors in various anatomical schematics of the autonomic system, created between 1920 and 1949, as evidence for a mythology based on ignorance. The essay portrays the early literature as dogmatic and as motivated by a need to substantiate the dogma. No one would dispute that autonomic anatomy is complex, especially in the pelvic region, and that many studies overlook sexual dimorphism and that older literature and textbooks contain errors and oversimplifications. For a comprehensive contemporary overview of bladder control, see reviews by de Groat et al. [4,5]. However, questioning the motives of scientists in a much earlier era seems not only dubious, but unlikely to shed light on today's challenges. Moreover, it seems inappropriate to criticize the experimental approaches used before the advent of electrical recordings from nerves-this only became possible in the wake of World War I when vacuum tube technology led to the invention of electrical amplifiers and oscilloscopes [18]. Given the tools of the day-simple nerve stimulation and rudimentary pharmacology, coupled to simple observations of smooth muscle contractions, blood flow and glandular secretions-the accomplishments of Walter Gaskell and then John Langley are all the more remarkable. Did their data fully justify all their conclusions? No and certainly not by today's standards. Instead of labeling this as myth building, it is perhaps more useful to think of it as deep insight informed by 50 years of careful, systematic experimental observation. Looking back to Fin de siècle neuroscience, Langley's insight appears more akin to the imaginative ideas developed by Santiago Ramon y Cajal during his ground-breaking explorations of neuroanatomy. We can only hope that in 100 years, future neuroscientists will find something of enduring value in our early 21st century efforts, crude as they may be! Despite these cautions, I agree with the Brunet group that one should acknowledge the history of ideas as a prelude to incorporating genetic mechanisms into autonomic neuroscience.
Origins of modern nomenclature for the peripheral autonomic system
To recount the history of autonomic neuroscience, one must acknowledge John Newport Langley (1852-1925) [6,16]. Building on the work of Walter Gaskell and others, Langley introduced the concept of an autonomic system and the logic for dividing it into three divisions-sympathetic, parasympathetic and enteric. Apart from a brief visit to Heidelberg while a student, Langley spent his entire academic career in the physiological laboratories at the University of Cambridge, where he studied peripheral autonomic pathways and their effects upon target organs in amphibians, birds and mammals. In addition to exerting influence through his research, Langley edited and owned the Journal of Physiology from 1894 until his death. At the end of his career, Langley published an important monograph that sums up his life's work and speculates about the importance of phylogeny and ontogeny for understanding how the nervous system is organized and functions [31]. In agreement with the Brunet group [9], I reject Langley's archaic speculations on evolution and development. Instead, one should focus on the words and language in Langley's monograph concerning functional divisions of the autonomic system, most of which remains remarkably clear nearly 100 years after publication.
Langley opens by recognizing Jacques-Benigne Winslow [42], an influential Professor of Anatomy and Surgery at the University of Paris, whose textbook of human anatomy describes the vagus and splanchnic nerves as sympathetic. This usage signified the bringing of internal organs into harmony and extended to all autonomic nerves, a notion that Langley rejects.
"Sympathetic nerves have no special relation to sympathies." page 7 [31] Langley introduced the concept of an "autonomic" nervous system in order to distinguish nerves that control smooth muscles and glands from the somatic motor nerves that control striated muscles. In choosing the term autonomic, Langley also sought to find a better word than "vegetative" and "involuntary". He argued that vegetative implied a false relationship with plants and that involuntary was inadequate because people can initiate certain autonomic actions as a matter of will (e.g. changes in heart rate, tear production through crying). Despite this rationale, use of the term vegetative persists in some non-English speaking countries [23].
The decision to move beyond Winslow's earlier usage by dividing the autonomic system into three divisions was essential in order to capture its organization and function.
"…the chief objection to calling the whole autonomic system sympathetic is that it confuses instead of simplifying nomenclature:" page 7 [31].
Ironically, the same logic applies today to Brunet's conjecture. Reverting to a common name for the thoraco-lumbar and sacral autonomic outflows does not simplify discussion or understanding of the autonomic motor system because these elements display distinct features [25,33].
Gaps in the central outflow
Langley defined the sympathetic, parasympathetic and enteric systems using several criteria [31], beginning with their different central outflows. He noted that the enteric system operates exclusively within the gastro-intestinal tract and is relatively independent from central control. Today, all agree that the enteric system contains sensory neurons and interneurons in addition to motor neurons and that enteric circuits undergo inhibitory modulation by sympathetic motor pathways through prevertebral ganglia and splanchnic nerves and excitatory modulation by parasympathetic motor pathways through the vagus and sacral outflow [14,23].
Today's controversy, like most of Langley's monograph [31] focuses on the distinctions between the sympathetic and parasympathetic divisions. The monograph notes that gaps exist in the central outflow of autonomic nerves at the levels of the limb enlargements. Between the limb enlargements, he calls the system sympathetic. Rostral and caudal components become parasympathetic because they are located beside the sympathetic region. Although all agree that these gaps exist [9,10], Brunet's group speculates that they are an unimportant consequence of limb motorneuron development that exhausts the local segmental pools of motor progenitor cells. I agree it would be interesting to investigate this conjecture and related hypotheses concerning the mechanistic origin of segmental gaps in the central autonomic outflow.
Differences in targets and territories
The sympathetic and parasympathetic outflows differ in terms of their targets, the pathways of their peripheral nerves and their functional attributes. One can trace these concepts to Langley, together with the idea of functional opposition; they all remain deeply embedded as core principles of autonomic neuroscience.
"The facts that the sympathetic innervated the whole body, whilst the cranial and sacral outflows innervated parts only, and that the sympathetic had, in general, opposite functional effects from those of the other autonomic nerves, indicated that the sympathetic was distinct from the rest." page 8 [31].
The following examples support Langley's view, but are not explained by the Brunet conjecture.
• In general, the sympathetic system, but not the parasympathetic system (cranial and sacral), innervates the skin. An exception to this pattern has been reported in the lower lip of the cat [21,22].
• Thermoregulation is exclusively sympathetic and arises primarily through thoraco-lumbar control of the cutaneous circulation, piloerection, sweat glands and brown fat.
• Blood pressure control operates primarily through sympathetic regulation of the cardiovascular system and kidneys.
• Vagal parasympathetic inhibition of the heart opposes sympathetic excitation.
• Sacral parasympathetic activation promotes micturition, while sympathetic activation promotes urine retention [4,5].
• Sympathetic neurons in paravertebral chain ganglia are selectively innervated by thoraco-lumbar sympathetic preganglionic neurons, and not by sacral or suprasegmental parasympathetic preganglionic neurons. The axons of sacral preganglionic neurons never enter the paravertebral sympathetic chain.
• Sympathetic ganglia (paravertebral and prevertebral) are intimately associated with large arteries, while parasympathetic ganglia are often embedded within target organs (e.g., salivary glands, bladder).

Brunet's group takes issue with this interpretation [9], noting that the pelvic ganglion may not always be diffusely organized and that it often contains an apparent sympathetic component. They have a point, but are only partially correct. The anatomy of pelvic autonomic ganglia is complex and variable in different animals. For example, the pelvic ganglion in mice and rats tends to be discrete, but is broken into many mini-ganglia in the human [29].
Neurotransmitter phenotype distinguishes between sympathetic and parasympathetic neurons
The recent papers from Brunet's group [9,10] correctly argue that the pharmacology and transmitter status of autonomic neurons is more intricate than often portrayed in textbooks. They point to cholinergic sympathetic neurons as evidence that transmitter status cannot serve as a criterion to define sympathetic and parasympathetic neurons. Langley was aware of this anomaly, but not its full explanation.
"The only structures markedly influenced by sympathetic stimulation which are not influenced by adrenaline after nerve section are the sweat glands of the cat and some other mammals." page 29 [31] By the 1930s it became clear that cholinergic sympathetic mediation of sweating was an exception to the rule that postganglionic sympathetic neurons use norepinephrine as their transmitter [3,36]. The first step in understanding the developmental origin of cholinergic sympathetic neurons came through the discovery that environmental factors could switch the functional neurotransmitter status of rat sympathetic neurons in primary cell culture from noradrenergic to cholinergic [15,34,35]. Subsequent studies demonstrated that cholinergic sympathetic neurons innervate the periosteum as well as sweat glands [1]. Careful analysis showed that these neurons undergo a process of transdifferentiation in which they initially express a functional noradrenergic phenotype and then, under the influence of factors released by the sweat glands and periosteum, undergo a transition to a functional cholinergic phenotype [12]. This switch in transmitter status depends on signaling through the gp130 cytokine receptor [38]. It is important to note that this differs from cranial parasympathetic neurons, which sometimes express tyrosine hydroxylase, but do not synthesize detectable levels of norepinephrine [30,32]. Tyrosine hydroxylase has also been detected in 5% of parasympathetic paracervical pelvic ganglion neurons, but the functional transmitter status of these cells remains unknown [28]. Together, these observations suggest that the transdifferentiation of cholinergic sympathetic neurons differs from the genesis of cholinergic parasympathetic neurons.
Moving forward
The work from Professor Brunet's laboratory serves an important purpose by illustrating the power of developmental molecular genetics to illuminate important features of the autonomic motor system. It should motivate us to reexamine long-held beliefs. Although the interesting results from their experiments do not justify a reclassification of the sacral parasympathetic as sympathetic, they point to a path forward. For the time being, I conclude that Langley got it right concerning the three divisions of the autonomic motor system. Moving forward, transcriptomic methods now make it possible to identify patterns of gene expression that characterize distinctions between populations of adult autonomic neurons [13]. This approach should strengthen future efforts to understand in molecular genetic terms the functional organization of the autonomic outflow and its developmental origins. Another important issue regards the development of functional subclasses of autonomic neurons that innervate blood vessels, different types of glands, brown fat and other targets. Although such phenotypic specializations begin to appear during ganglionic development [17,[39][40][41], the underlying mechanisms remain poorly understood [2,37]. Coming to a deeper understanding of autonomic behavior will require solving the problem of neuronal identity in terms of multiple criteria based on molecular genetics, developmental origins, functional circuitry and neuronal activity [11]. Bringing together all these facets of autonomic neuroscience will provide the data required to test classical concepts and then build upon them. Basic and clinical autonomic neuroscientists should embrace such interactions and they should remain open to the possibility of change. If Langley were alive today, he might agree.
A Generalized Bootstrap Target for Value-Learning, Efficiently Combining Value and Feature Predictions
Estimating value functions is a core component of reinforcement learning algorithms. Temporal difference (TD) learning algorithms use bootstrapping, i.e. they update the value function toward a learning target using value estimates at subsequent time-steps. Alternatively, the value function can be updated toward a learning target constructed by separately predicting successor features (SF)--a policy-dependent model--and linearly combining them with instantaneous rewards. We focus on bootstrapping targets used when estimating value functions, and propose a new backup target, the $\eta$-return mixture, which implicitly combines value-predictive knowledge (used by TD methods) with (successor) feature-predictive knowledge--with a parameter $\eta$ capturing how much to rely on each. We illustrate that incorporating predictive knowledge through an $\eta\gamma$-discounted SF model makes more efficient use of sampled experience, compared to either extreme, i.e. bootstrapping entirely on the value function estimate, or bootstrapping on the product of separately estimated successor features and instantaneous reward models. We empirically show this approach leads to faster policy evaluation and better control performance, for tabular and nonlinear function approximations, indicating scalability and generality.
The fundamental goal of reinforcement learning (RL) is to maximize return, i.e. (temporally discounted) cumulative reward. Value functions provide an estimate of the expected return from a specific state (and action), and as such, they are a fundamental component of RL algorithms. Modern deep RL methods require numerous environment interactions to solve complex tasks, which can be expensive or impossible to obtain, particularly for tasks resembling the real-world. This makes it essential to develop data-efficient methods for learning accurate value functions.
The problem we address in this work is that of credit assignment, namely how to associate (distant) rewards to the states and actions that caused them. Value-based RL methods tackle this problem through temporal difference (TD) learning algorithms (Sutton 1988). TD algorithms rely on bootstrapping: using the value estimate at a subsequent timestep, together with the observed data (e.g. rewards), to construct the learning target (the return) for the current timestep. However, the value estimate in the backup target does not need to come from the current value function being learned. For instance, value can be estimated using successor features (the discounted cumulative features) linearly combined with an estimate of instantaneous rewards (Barreto et al. 2017). This approach can make use of the same TD methods (Sutton 1988) to estimate the successor features as the former does when learning the value function, requiring similar amounts of sampled experience. Moreover, the backup target and the value function can be completely distinct (e.g. if the successor features and learned value function are disjointly parameterized); they can share feature representations (e.g. when the value function and the successor features are both linear functions of the features); or partially share representations (e.g. through Polyak averaging). Since the value function is regressed toward the target, the method of computing the target influences the quality of the value function.
In this paper, we aim to improve credit assignment and data efficiency for value-based methods, by proposing a new method of constructing a learning target, which borrows properties from all aforementioned approaches of target construction. This η-return mixture uses a parameter η to combine an ηγ-discounted successor features model (ηγ-SF) with the current value function estimate to parameterize the learning target used during bootstrapping-with the η parameter controlling the combination of value-predictive and feature-predictive knowledge. We observe an intermediate value of η incorporates the benefits of both approaches in a complementary way, using sampled experience more efficiently.
Contributions In this paper we make three contributions: (i) We introduce the η-return mixture, a simple yet novel way of constructing a backup target for value learning, using an ηγ-discounted SF model to interpolate between a direct value estimate and the fully factorized estimate relying on SF and instantaneous rewards. (ii) We describe a new learning algorithm using the η-return mixture as the bootstrap target for value estimation. (iii) We provide empirical results showing more efficient use of experience with the η-return mixture as the backup target, in both prediction and control, for tabular and nonlinear approximation, when compared to baselines.
Reinforcement learning problem setup
A discounted Markov Decision Process (MDP) (Puterman 1994) is defined as the tuple (S, A, P, r), with state space S, action space A, reward function r : S × A → R, and transition probability function P : S × A → P(S) (with P(S) the set of probability distributions on S, and P(s′|s, a) the probability of transitioning to state s′ by choosing action a at state s). A policy π : S → P(A) maps states to distributions over actions; π(a|s) denotes the probability of choosing action a in state s. Let $S_t, A_t, R_t$ denote the random variables of state, action and reward at time t, respectively.
Policy evaluation implies estimating the value function $v_\pi$, defined as the expected discounted return,

$$v_\pi(s) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \gamma^{t} R_{t+1} \,\Big|\, S_0 = s\Big],$$

where γ ∈ [0, 1) is the discount factor. The learner's goal is to find a policy π which maximizes the value $v_\pi$. When the Markov chain induced by π is ergodic, we denote with $d_\pi$ the stationary distribution induced by policy π. We henceforth shorthand the expectation over the environment dynamics and the policy π with $\mathbb{E}_\pi[\cdot]$.
Value learning
Typically, $v_\pi$ is represented directly, using a linear parametrization over some state features $\phi(s) \in \mathbb{R}^d$, where d is the dimension of the representation space:

$$v_\theta(s) = \phi(s)^\top \theta, \qquad (3)$$

with $\theta \in \mathbb{R}^d$ learnable parameters, and $\phi(s)$ the features.¹ Learning $v_\pi$ with TD methods involves bootstrapping on a target, $U_t$, at each timestep t, and updating θ by regressing it towards the target:

$$\theta \leftarrow \theta + \alpha\,\big(U_t - v_\theta(S_t)\big)\,\nabla_\theta v_\theta(S_t), \qquad (4)$$

with learning rate α. The TD(0) algorithm (Sutton 1988) uses the one-step TD return as the value target:

$$U_t = R_{t+1} + \gamma\, v_\theta(S_{t+1}). \qquad (5)$$

The forward view of TD(λ) constructs the λ-return target, a geometrically weighted average over all possible multi-step returns (Sutton and Barto (2018), chapter 12.1):

$$G_t^{\lambda} = (1-\lambda)\sum_{n=1}^{\infty} \lambda^{n-1}\, G_{t:t+n}, \qquad G_{t:t+n} = \sum_{k=1}^{n} \gamma^{k-1} R_{t+k} + \gamma^{n}\, v_\theta(S_{t+n}), \qquad (6)$$

where λ ∈ [0, 1] controls the weight of value estimates from the distant future, interpolating between the one-step return (equation (5)) (λ = 0) and the Monte Carlo return (λ = 1). The λ-return can only be computed offline at the end of an episode, since it requires the entire future trajectory to calculate the multi-step returns.

¹ $\phi(s)$ can be a parameterized non-linear function jointly learned with θ, as is the case for many end-to-end deep reinforcement learning algorithms.
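To make these targets concrete, the following NumPy sketch implements the linear TD(0) update and the offline λ-return; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def td0_update(theta, phi_t, r_next, phi_next, gamma, alpha, terminal=False):
    """One semi-gradient TD(0) step for a linear value function v_theta(s) = phi(s) @ theta."""
    v_t = phi_t @ theta
    v_next = 0.0 if terminal else phi_next @ theta
    target = r_next + gamma * v_next                    # one-step TD return (equation (5))
    theta = theta + alpha * (target - v_t) * phi_t      # regress toward the target (equation (4))
    return theta

def lambda_return(rewards, values_next, gamma, lam):
    """Offline lambda-return for one episode via the backward recursion
    G_t = R_{t+1} + gamma * ((1 - lam) * V(S_{t+1}) + lam * G_{t+1});
    values_next[t] holds v_theta(S_{t+1}) (0 if S_{t+1} is terminal)."""
    G = values_next[-1]
    out = np.zeros(len(rewards))
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * ((1.0 - lam) * values_next[t] + lam * G)
        out[t] = G
    return out
```

Setting lam = 0 reproduces the one-step targets, while lam = 1 yields the Monte Carlo return; the backward loop makes explicit why the λ-return needs the full trajectory.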
Successor features (SF)
Previous work (Dayan 1993; Zhang et al. 2017; Barreto et al. 2017, 2018) has shown it can be useful to decouple the reward and transition information of the value function by factorizing it into immediate rewards and SF. The SF, $\psi_\pi : \mathbb{R}^d \to \mathbb{R}^d$, are defined as the expected cumulative discounted features under a policy π:

$$\psi_\pi(s) = \mathbb{E}_\pi\Big[\sum_{n=0}^{\infty} \gamma^{n}\, \phi(S_{t+n}) \,\Big|\, S_t = s\Big], \qquad (8)$$

and can be learned by TD learning algorithms, similar to the standard value function, with a linear parametrization $\psi_\Xi(s) = \Xi^\top \phi(s)$, where $\Xi \in \mathbb{R}^{d\times d}$ are (learnable) parameters updated toward the one-step SF target:

$$\Xi \leftarrow \Xi + \alpha\, \phi(S_t)\big(\phi(S_t) + \gamma\,\psi_\Xi(S_{t+1}) - \psi_\Xi(S_t)\big)^\top. \qquad (10)$$

An alternative approach to the direct representation of value (equation (3)) is to use a factorization of SF and instantaneous reward,

$$v_\psi(s) = \psi_\Xi(s)^\top w, \qquad (12)$$

where the instantaneous reward function $r_w(s) = \phi(s)^\top w$ (13) has (learnable) parameters $w \in \mathbb{R}^d$.
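As a concrete illustration of this factorized estimate, here is a small NumPy sketch: the SF are learned with the same one-step TD rule as the value function, the reward model is fit by regression, and the value is read out as their dot product. Names are illustrative; the convention that r_w(phi(S_t)) predicts R_{t+1} is an assumption consistent with the appendix.

```python
import numpy as np

def sf_td0_update(Xi, phi_t, phi_next, gamma, alpha, terminal=False):
    """TD(0) for successor features psi_Xi(s) = Xi.T @ phi(s), with Xi in R^{d x d}."""
    psi_t = Xi.T @ phi_t
    psi_next = np.zeros_like(psi_t) if terminal else Xi.T @ phi_next
    delta = phi_t + gamma * psi_next - psi_t          # vector-valued SF TD error
    return Xi + alpha * np.outer(phi_t, delta)

def reward_update(w, phi_t, r_next, alpha):
    """SGD regression of the instantaneous reward model r_w(s) = phi(s) @ w toward R_{t+1}."""
    return w + alpha * (r_next - phi_t @ w) * phi_t

def sf_value(Xi, w, phi):
    """Factorized value estimate v_psi(s) = psi(s) @ w (equation (12))."""
    return (Xi.T @ phi) @ w
```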
The η-return mixture
We take inspiration from the canonical λ-return (equation (6)) to write a similar quantity, equation (14), in which the sampled instantaneous rewards are replaced by the reward model $r_w$; a full derivation of this section is given in appendix B.
As both $v_\theta$ (equation (3)) and $r_w$ (equation (13)) are linear in features, we can express the geometric sums in equation (14) using ηγ-discounted SFs,

$$\psi^{\eta}(s) = \mathbb{E}_\pi\Big[\sum_{n=0}^{\infty} (\eta\gamma)^{n}\, \phi(S_{t+n}) \,\Big|\, S_t = s\Big]. \qquad (15)$$

We can separately estimate this SF model using equation (10). Further, we can use the SF model in the bootstrapping process by substituting equation (15) into equation (14). This yields a learning target, the η-return mixture $G_t^{\eta}$ (equation (16)), which uses predictive features ($\psi^{\eta}$) along with a mixture of value (θ) and reward (w) parameters. This target can be used to replace, e.g., the standard TD(0) backup target from equation (5). Despite its similarity to the standard λ-return, the η-return mixture does not assume access to a full episodic trajectory.
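The sketch below shows one way to assemble such a one-step bootstrap target in code. Since the display form of equation (16) is not reproduced above, the mixing used here, bootstrapping on (1 − η)·ψ^η(s′)ᵀθ + η·ψ^η(s′)ᵀw, is an assumption read off the appendix pseudo-code fragment; it recovers the TD(0) target at η = 0 and the SF-factorized estimate at η = 1, but the paper's exact parameterization may differ in detail. All names are illustrative.

```python
import numpy as np

def eta_return_target(r_next, phi_next, theta, Xi_eta, w, gamma, eta, terminal=False):
    """One-step bootstrap target mixing value-predictive (theta) and feature-predictive
    (successor-feature) knowledge. Xi_eta parameterizes the (eta*gamma)-discounted SF,
    psi_eta(s) = Xi_eta.T @ phi(s); eta controls the reliance on each kind of prediction."""
    if terminal:
        return r_next
    psi_eta_next = Xi_eta.T @ phi_next
    v_mix = (1.0 - eta) * (psi_eta_next @ theta) + eta * (psi_eta_next @ w)
    return r_next + gamma * v_mix
```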
Interpretation Consider learning using single-step transition tuple (S t , A t , R t+1 , S t+1 ). TD(0) propagates information locally from S t+1 to S t by constructing a bootstrapping target. Using the value function in the target (equation (5)) propagates only value information; bootstrapping using the product of estimated SF and instantaneous rewards (equation (12)) relies on separately learning the SF, which also uses TD(0), and thus propagates only feature information.
We can more effectively use the same single-step of experience if we simultaneously use the sampled information to predict both the value and the features, and update the value function using a mixture of both in the way specified in equation (16).
Fixed-point solution With accurate SF and instantaneous reward models, one-step value-learning with the η-return mixture as bootstrapping target has the same fixed-point solution as the standard TD(0) target, per the following.
Proposition 1. Assume the SF parameters Ξ have converged to their fixed-point solution and the instantaneous reward parameters w have achieved the optimal least-squares solution, with expectations taken over the stationary distribution $d_\pi$ for policy π, which we assume exists under mild conditions (Tsitsiklis and Van Roy 1997). Then, value learning using the η-return mixture as the target has the TD(0) fixed-point solution.

Proof. In Appendix C; the result follows from the linearity of the policy evaluation equations.
Furthermore, it has been shown that on-policy planning with linear models converges to the same fixed point as direct linear value estimation (Schoknecht 2002;Sutton et al. 2008). However, despite the fact that the fixed point solution is subject to the same bias as one-step TD methods, our method may still benefit from substantial learning efficiency while moving towards this solution. In fact, our finite sample empirical evaluation shows exactly this.
Interpolating between value and feature prediction with η. Similar to how the λ-return interpolates between the one-step TD and Monte-Carlo returns, the η-return mixture interpolates between bootstrapping on the "value-predictive" parameters of the value function and bootstrapping on the "feature-predictive" parameters of the SF.
When η = 0, the η-return mixture recovers the standard TD(0) learning target (equation (5)). At the opposite end of the spectrum, when η = 1, the η-return mixture relies on the full SF (equation (12)) and the instantaneous reward model, akin to using an implicit infinite-horizon model. Consequently, the η-return mixture is a simple generalization that spans the spectrum of learning target parameterizations using η ∈ [0, 1], with the traditional learning target and the SF factorization as extremes.
Compared to the standard learning target used in TD(0), the η-return mixture with an intermediate value of η (0 < η < 1) uses information more effectively than the extremes $G_t^{\eta=0}$ (equation (5)) and $G_t^{\eta=1}$, approximating the true value faster given the same amount of data (see figure 1 for an intuitive illustration).
Estimating the η-return mixture
There are different choices with respect to how the learning target is estimated, depending on (i) the form or elements used in building the target; (ii) the parametrization of the elements making up the target; (iii) the learning methods used to estimate the elements of the target.
Regarding (i), the form of the η-return mixture target requires access to SF, instantaneous rewards, and the value parameters themselves. Regarding (ii), we parameterize all these estimators as linear functions of features, and share feature parameters in cases where the feature representation is learned and not given (e.g. in the nonlinear control empirical experiments).
With respect to (iii), we can use any learning method for estimating the SF model $\psi^\eta_\Xi$ and the instantaneous reward model $r_w$. In this paper, we make the choice of using TD(0) to learn the SF model, and supervised regression for the reward model, since one-step methods are ubiquitous in contemporary RL and require the use of only single-step transitions (van Hasselt, Guez, and Silver 2015; Lillicrap et al. 2015; Wang et al. 2016; Schaul et al. 2015; Haarnoja et al. 2018). Likewise, we use the η-return mixture as a one-step bootstrap target (equation (16)) for estimating the value parameters θ (equation (4)). Although we have chosen to focus here on one-step learning targets for their simplicity and ease of use, these methods can be extended to multi-step targets (e.g. TD(n) or TD(λ)) analogously to the one-step target.
All components of the η-return mixture are now learnable with one-step transition tuples of the form $(S_t, A_t, R_{t+1}, S_{t+1})$, which makes these methods amenable to both the online setting and the i.i.d. setting. In the former, the algorithm is presented with an infinite sequence of states, actions and rewards. From an algorithmic perspective, we now describe a computationally congenial way for learning the value function online, from a single stream of experience, using our method. As mentioned, in the online setting, the agent has access to experience in the form of tuples $(S_t, A_t, R_{t+1}, S_{t+1})$ at each timestep t. The pseudo-code in algorithm 1 describes the online value estimation process, for the linear case, with given representations.

Figure 1 (panels B-D): (B) For η = 0.7 (center), the estimate built from the η-return mixture combines the parameters of the value function (θ) and the SF predictions ($\psi^\eta$) to propagate value information more quickly than either extreme. (C) The estimated value function for all states (columns) across learning episodes (rows); for η = 0.7, value information propagates faster than for η = 0 and η = 1. (D) Absolute value error for different η values over episodes; for η = 0.7, error reduction is faster.

Algorithm 1: Value prediction using a linear η-return mixture. Output: value function $v_\theta \approx v_\pi$ for a policy π. While sampling one-step experience tuples using π, apply, in turn, the instantaneous reward learning update, the SF learning update, and the value learning update toward the η-return mixture target.
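A minimal online prediction loop in the spirit of Algorithm 1 could look as follows (linear case, given tabular features). The environment interface (env.reset() / env.step()) and all names are illustrative, and the mixture target uses the assumed form discussed above.

```python
import numpy as np

def eta_value_prediction(env, features, d, gamma=0.9999, eta=0.7, alpha=1.0, episodes=20):
    """Online value prediction with the eta-return mixture as bootstrap target (sketch)."""
    theta = np.zeros(d)      # value parameters
    w = np.zeros(d)          # instantaneous reward parameters
    Xi = np.zeros((d, d))    # (eta*gamma)-discounted SF parameters
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            s_next, r, done = env.step()
            phi_t, phi_next = features[s], features[s_next]
            # SF TD(0) update with discount eta*gamma
            psi_next = np.zeros(d) if done else Xi.T @ phi_next
            Xi += alpha * np.outer(phi_t, phi_t + eta * gamma * psi_next - Xi.T @ phi_t)
            # instantaneous reward regression: r_w(phi_t) -> R_{t+1}
            w += alpha * (r - phi_t @ w) * phi_t
            # value update toward the eta-return mixture target
            if done:
                target = r
            else:
                psi_eta = Xi.T @ phi_next
                target = r + gamma * ((1 - eta) * (psi_eta @ theta) + eta * (psi_eta @ w))
            theta += alpha * (target - phi_t @ theta) * phi_t
            s = s_next
    return theta
```

With η = 0 the loop reduces to plain linear TD(0); with η = 1 the value target is driven entirely by the learned SF and reward models.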
Empirical studies
We start with two simple prediction examples to provide intuition about our approach, after which, we verify that our method scales by extending it to a more complex non-linear control setting.
Value prediction in a deterministic chain
Experiment setup: Consider the 16-state deterministic Markov reward process (MRP) with tabular features illustrated in figure 1-A. The agent starts in the left-most state (s 0 ), deterministically transitions right to the right-most absorbing state. The reward is 0 everywhere except for the final transition into the absorbing state, where it is +1. We apply algorithm 1 to estimate the value function in an online incremental setting. We use a discount factor γ = 0.9999 and learning rate α = 1.0. Results: Figure 1-B illustrates the result of combining the successor features model ψ Ξ , with the value parameters θ, and reward parameters w into a prediction of the η-return mixture for the starting state s 0 , v η (s 0 ), for different values of η. When completely relying on the canonical value bootstrap target (η = 0, recovering TD (0)), we have v η=0 = v θ , which corresponds to an unchanging feature representation.
In this setting, the value information (in θ) moves backward one state per episode. For the opposite end, when bootstrapping on the full successor features (η = 1), the instantaneous reward is learned immediately (parameter w) for the final state, while the successor features (parameter ψ) learns about one additional future state per episode. For both cases, we require ∼ 16 episodes for the information to propagate across the entire chain and for the value estimate of s 0 to improve (Figure 1-D). However, with an intermediate value of 0 < η < 1 (Figure 1-B, middle, η = 0.7 here), we are able to both propagate value information backward by bootstrapping on θ, as well as improve the predictive features (using ψ η ) to predict further in the forward direction. This results in an improved value estimate much earlier, as we can observe in figure 1-B middle, C middle, and D.
Interpretation: In an online prediction setting, using the ηreturn mixture (with an intermediate η: 0 < η < 1), in place of the standard TD(0) learning target, effectively combines both backward credit assignment by bootstrapping the value estimates, as well as forward feature prediction, to more quickly estimate the correct values.
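For completeness, here is a toy environment matching the chain described above (16 tabular states, reward +1 only on the final transition into the absorbing state) that exposes the reset/step interface assumed by the prediction loop sketched earlier; this is purely illustrative.

```python
import numpy as np

class DeterministicChain:
    """16-state deterministic MRP: start in the left-most state and move right each step;
    the reward is 0 everywhere except the final transition into the absorbing state (+1)."""
    def __init__(self, n_states=16):
        self.n = n_states
        self.s = 0

    def reset(self):
        self.s = 0
        return self.s

    def step(self):
        s_next = self.s + 1
        done = (s_next == self.n - 1)   # right-most state is absorbing
        reward = 1.0 if done else 0.0
        self.s = s_next
        return s_next, reward, done

# tabular (one-hot) features, gamma = 0.9999 and alpha = 1.0 as in the experiment
env, features = DeterministicChain(), np.eye(16)
```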
Value prediction in a random chain
Experiment setup: We now switch to a slightly harder setting, a stochastic 19-state chain prediction task, still with tabular features (Sutton and Barto 2018, Example 6.2). The agent starts in the centre (state 10) and randomly transitions left or right until reaching the absorbing states at either end (figure 2-A). The reward is 0 everywhere except upon transitioning into the right-most terminal state, when it is +1. Hyperparameters were chosen by sweeping over learning rates α ∈ {0.01, 0.1, 0.2, 0.3, 0.5} and mixing parameters η ∈ {0.0, 0.3, 0.5, 0.7, 0.9, 0.99, 1.0}. Figure 2-B,D illustrate value error averaged over the first 400 episodes.
Results: In figure 2-B, we observe that mixing with η ∈ [0, 1] results in a U-shape error curve, illustrating that an intermediate value of η is optimal. For each value of η, we plot the optimal learning rate α.
Value-based control in Mini-Atari
We hypothesize that efficient value prediction using the η-return can help in value-based control, so we extend our proposed algorithm to the control setting, simply by estimating the action-value function $q_\theta$ using the η-return mixture. We build on top of the deep Q network (DQN) architecture and simply replace the bootstrap target with an estimate of the η-return mixture starting from a state and action. Given a sampled transition $(S_t, A_t, R_{t+1}, S_{t+1})$, DQN encodes features $\phi_t = \phi(S_t)$, then estimates the action-values $q_\theta(\phi_t, A_t) \approx q(S_t, A_t)$ using the canonical bootstrap target, in which it relies on the next value estimate, $\max_{a'} q_\theta(\phi_{t+1}, a')$, with $\phi_{t+1} = \phi(S_{t+1})$. We use the same feature encoding φ(·) to track the successor features of the current policy, $\psi^\eta_t = \psi^\eta_\Xi(\phi_t) \approx \psi(\phi_t)$, and to estimate the instantaneous rewards $r_w(\phi_t)$. This allows us to construct the η-return mixture and use it in the learning target of Q-learning when updating the parameters θ, where $q^\eta$ is the value estimate of the η-return mixture used in the learning target and $\nabla_\theta q_\theta = \nabla_\theta q_\theta(\phi_t, A_t)$. We simultaneously estimate the feature representation and the action-values in an end-to-end fashion. See Appendix algorithm 2 for a complete description.
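A compact PyTorch-style sketch of how such a target could be assembled in a DQN-like agent: a shared encoder with a Q head, an SF head and a reward head, and a bootstrap target built from the next state's mixed estimate. The module layout, sizes and the exact mixing are illustrative assumptions (following the appendix pseudo-code fragment), not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class EtaQNet(nn.Module):
    """DQN-like network augmented with (eta*gamma)-discounted SF and reward-prediction heads."""
    def __init__(self, obs_dim, n_actions, d=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, d), nn.ReLU())
        self.q_head = nn.Linear(d, n_actions)   # action-value head
        self.sf_head = nn.Linear(d, d)          # SF head: psi_eta(phi)
        self.r_head = nn.Linear(d, 1)           # instantaneous reward head

def eta_q_target(net, r, obs_next, done, gamma, eta):
    """Bootstrap target R + gamma * max_a q_eta(s', a), with q_eta mixing the Q head and the
    reward head, both evaluated on the SF of the next state (assumed form); gradients are
    stopped through the target, as in standard DQN."""
    with torch.no_grad():
        phi_next = net.encoder(obs_next)
        psi_next = net.sf_head(phi_next)
        q_mix = (1.0 - eta) * net.q_head(psi_next) + eta * net.r_head(psi_next)
        return r + gamma * (1.0 - done) * q_mix.max(dim=1).values
```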
Experiment Set-up: We test our algorithm in the Mini-Atari (MinAtar, Young and Tian (2019), GNU General Public License v3.0) environment, which is a smaller version of the Arcade Learning Environment (Bellemare et al. 2013) with 5 games (asterix, breakout, freeway, seaquest, space invaders) played in the same way as their larger counterparts. Other than the architectural update to the bootstrap target, we make no other changes (e.g. to the policy, replay buffer, etc.). Unless otherwise stated, we use the same hyperparameters as the DQN version from Young and Tian (2019). Details on the environment, algorithms and hyperparameters can be found in appendix D.
Intermediate η improves nonlinear control. Figure 3-A illustrates a parameter study on the mixing parameter η after training for 5 million environment steps. We again observe the U-shaped performance curve as we interpolate across η, confirming the advantage of using an intermediate η value. Figure 3-B shows the learning curves of our proposed model that uses an intermediate value of η in comparison to the two baseline algorithms: bootstrapping entirely on the value parameters (η = 0, equivalent to vanilla DQN with a reward prediction auxiliary loss), and bootstrapping entirely on the full SF value (η = 1). The latter baseline is remarkably unstable, while the η-return mixture, with an intermediate η = 0.5, outperforms both in 4/5 games, and is competitive with η = 0 in freeway. The poor performance for higher η values in freeway is likely due to sparse reward, as the reward gradient used to shape the representation φ(·) is uninformative most of the time, leading to a collapse in representation (this is explicitly measured in appendix section E). This highlights a weakness of learning the feature encoding and SF simultaneously, since poor features result in poor SF, and thus poor value estimates. The use of auxiliary losses can help ameliorate this issue, although it is not explored here as we found the issue to only be significant for high values of η.
Parameter study: robustness to the learning rates of the SF and instantaneous reward models. Figure 4 shows parameter studies for an intermediate η that illustrate the sensitivity to the learning rates of the successor features and reward heads used in learning the value function. We vary the learning rates for these estimators while keeping the learning rates of the representation torso and the value function head fixed (at the same values used by Young and Tian (2019): α_θ = 2.5e-4). We observe that performance is not highly dependent on the SF and reward learning rates (figure 4, green), but a higher learning rate for the SF than the one used by the representation torso facilitates tracking the changes in the feature representations (φ) by the SF. This choice is important in freeway. For comparison, we also sweep over the value and encoder learning rates of a vanilla DQN (figure 4, blue), and see that it is sensitive to the learning rate, i.e. performance drops as learning rate settings deviate from the recommendation of Young and Tian (2019) (most prominently observed in asterix, seaquest and space invaders, and for high learning rates in breakout). Additionally, we also sweep over the learning rates of all parameters making up the η-return mixture used as target for the q-function: either keeping all learning rates the same (figure 4, brown) or setting the successor feature and reward learning rates to be 10× the encoder learning rates (figure 4, pink). Overall, we again observe that the agent is most sensitive to learning rates in the value head and encoder torso: performance decreases in all games other than breakout.

Figure 4: Parameter study on the learning rates of the SF and instantaneous reward model. The y-axis shows the average return over 10k evaluation steps using an ε-greedy policy with ε = 0.05, after stopping training at 5e6 steps. For our algorithm, shown here as the Deep η-Q algorithm (green), we sweep over the SF and reward learning rates while keeping the learning rates for the representation torso and the value function head fixed at 0.00025. For the vanilla DQN (blue), we vary the learning rates of the representation torso and the value function head. We also show the cumulative sensitivity to the parameters as we vary all the learning rates in our algorithm (brown). Error bars denote 95% confidence intervals and each setting is run using 3 independent seeds.
Related work
Successor features (SF, equation (8)) have been used in a range of prior work; our method adds to this repertoire by using the SF inside the learning target in bootstrapping methods. Forward model-based planning can facilitate efficient credit assignment. Among the algorithms that address this topic are Dyna-style methods, which use explicit models to generate fictitious experience that they then leverage to improve the value function (Schoknecht 2002; Sutton et al. 2008; Yao et al. 2009b). Closest to our method is the work by Yao et al. (2009a,b), which learns an explicit λ-model and uses it to generate fictitious experience for k-step updates to the value function. Our work is different in that the model we use is an implicit model, used to construct a learning target. Particularly, the SF here are akin to models for implicit planning, aiding in speeding up the value learning process within a single-task setup. Furthermore, we extend our method to learned non-linear feature representations and combine it with batch learning algorithms (DQN) in MinAtar.
Building state representation is fundamental for deep RL. The SF model is a type of general value function (Sutton et al. 2011), hypothesized to be a core component in building internal representations of autonomous agents (Sutton et al. 2011;White 2015;Schlegel et al. 2018). Our work ties to this topic, since we can view the partial SF model as a new learned representation of the value function used as learning target in the η-return mixture.
Discussion
In this work we propose a new, generalized learning target that combines the previous approaches, making more efficient use of the same experience. The approach we proposed uses an implicit model represented by the SF model, and can thus also be viewed as implicit planning with a multistep policy-dependent expectation model. The η-return mixture we proposed for the learning target can easily be used in place of the bootstrap target used in any value-based algorithm (e.g. TD(n), TD(λ)), as we have illustrated in this work for one-step returns used by TD(0). Empirically, we showed that this method, while using the same amount of sampled experience, is more effective, resulting in more efficient value function estimation and higher control performance.
Many potential directions of investigation have been left for future work. (i) The η-return mixture contains a successor feature estimate, which could also be further leveraged for exploration and transfer. (ii) Chelu, Precup, and Hasselt (2020) investigate the complementary properties of explicit forward and backward models and argue for the potential of optimally combining both "forward" and "backward" facing credit assignment schemes. Further, recent work introduces expected eligibility traces as implicit backward models, a kind of "predecessor features" (time-reversed successor features). Future work can explore the differences and commonalities between implicit models in the forward and backward direction using our proposed SF model and expected eligibility traces. The right balance between using backward credit assignment through the use of eligibility traces, and forward prediction through predictive representations, remains an open question with fundamental implications for learning efficiency. (iii) How to best use predictive representations to build an internal agent state is central to generalization and efficient credit assignment. Our work opens up many exciting new questions for investigation in this direction.
Appendices

A Broader impact statement
Our results are theoretical and provide fundamental insights into reinforcement learning algorithms by proposing and investigating a spectrum of bootstrapped learning targets, including one-step TD (which directly estimates the next-state value) and the successor features value estimate (which separately estimates future cumulative features and instantaneous rewards to be linearly combined) as special cases. We hope our work will contribute to improved understanding of RL and the goal of developing generally intelligent real-world systems. However, we do not focus on applications in this work, and substantial additional work will be required to apply our methods to real-world settings. Regarding training resources, each run for the tabular experiments (figures 1 and 2) take < 1 minute on CPU, and each Mini-Atari (figures 3 and 4) takes 7-10 hours on a single RTX8000 GPU on our internal cluster. We estimate the total training compute hours is 5.5-6k hours, for a carbon emission footprint of 673.92 kg CO2 as estimated by https://mlco2.github.io/impact/.
B Lambda return and the η-return mixture

B.1 Lambda return

Lemma 1. The infinite-horizon discounted lambda return can be written equivalently in the following two ways: as the mixture of n-step returns,

$$G_t^{\lambda} = (1-\lambda)\sum_{n=1}^{\infty} \lambda^{n-1}\, G_{t:t+n},$$

or in the unrolled per-step form,

$$G_t^{\lambda} = \sum_{n=0}^{\infty} (\gamma\lambda)^{n}\,\big[R_{t+n+1} + \gamma(1-\lambda)\, V_{t+n+1}\big].$$

Proof. We write $V_t = v_\theta(S_t) = \phi(S_t)^\top\theta$ for brevity. The equivalence follows by expanding the n-step returns, pulling out each column of rewards, and noting that the λ weights sum to one, $(1-\lambda)\sum_{k=0}^{\infty}\lambda^{k} = 1$.
B.2 The η-return mixture
For notational clarity, we henceforth write η in place of λ. We first replace the sampled instantaneous rewards in the λ-return (equation (23)) with the instantaneous reward function (equation (13)); the resulting expression (equation (30)) is equivalent to equation (14) in the main text. We then derive the η-return mixture by putting the random variables after t + 1 in expectation. This gives equation (16) in the main text, which we arrive at by factorizing out the SF, $\psi^{\eta}(S_{t+1}) = \mathbb{E}_\pi\big[\sum_{n=0}^{\infty}(\eta\gamma)^{n}\,\phi_{t+1+n}\big]$, to be estimated separately.
C Proof of proposition 1
We consider the setting of policy evaluation with linear function approximation. Given is a Markov Reward Process (MRP) $M_\pi = \langle S, r_\pi, P_\pi\rangle$ with state space S, reward function $r_\pi(s) = \mathbb{E}_\pi[R_{t+1}\,|\,S_t = s]$, and policy-dependent transition function $P_\pi(s'|s)$. Assuming a finite number of states, we can write the MRP in matrix form with transition matrix $P_\pi \in \mathbb{R}^{|S|\times|S|}$ and reward vector $R \in \mathbb{R}^{|S|}$, $R_i = \mathbb{E}_\pi[R_{t+1}\,|\,S_t = i]$, along with discount factor γ ∈ [0, 1). Let each state be described by a d-dimensional feature vector, and let $\Phi \in \mathbb{R}^{|S|\times d}$ be the feature matrix. We assume Φ has linearly independent columns.
We consider all learning to be done on-policy with one-step TD given single-step experience tuples (S t , R t+1 , S t+1 ). Let D ∈ R |S|×|S| denote a diagonal matrix whose diagonal is the stationary distribution of the MRP.
The remainder of this appendix is organized as follows: we first review pre-existing results on solving for the fixed point of regular linear TD(0) with direct value prediction, and for the factorized value estimate using successor features and an instantaneous reward model. Finally, we present our main result, solving for the fixed point of on-policy learning with the one-step η-return mixture target.
C.1 MF value fixed point
We first consider the traditional "model-free" value learning (we refer to this as the "MF value"), with a linear MF value function $v_\theta(s) = \phi(s)^\top\theta$. Given an experience tuple $(\phi_t, R_{t+1}, \phi_{t+1})$, doing TD(0) with the MF value uses the semi-gradient update $\theta \leftarrow \theta + \alpha\,(R_{t+1} + \gamma\,\phi_{t+1}^\top\theta - \phi_t^\top\theta)\,\phi_t$ with step-size parameter α ∈ (0, 1]. Linear TD belongs to a family of linear fixed-point methods; the expected update has the fixed-point solution, also referred to as the TD fixed point,

$$\theta_{TD} = \big(\Phi^\top D(\Phi - \gamma P_\pi\Phi)\big)^{-1}\Phi^\top D R.$$

Note the fixed point implies $\Phi^\top D R = \Phi^\top D(\Phi - \gamma P_\pi\Phi)\,\theta_{TD}$.

Discounting by η. We can write a similar system with an ηγ-discounted value function, with the analogous fixed point $\theta^{\eta}_{TD} = \big(\Phi^\top D(\Phi - \eta\gamma P_\pi\Phi)\big)^{-1}\Phi^\top D R$.
C.2 SF value fixed point
Here we write the fixed point for a linear successor feature (SF) parameterized value function.
SF fixed point. Given linear successor features (SFs) with linear parameters Ξ, $\psi_\Xi(s) = \Xi^\top\phi(s)$, and an experience tuple $(\phi_t, R_{t+1}, \phi_{t+1})$, doing TD(0) for SF learning updates Ξ toward the one-step SF target $\phi_t + \gamma\,\psi_\Xi(\phi_{t+1})$. Similar to value-learning with TD(0), SF learning with TD(0) corresponds to solving a linear fixed-point system, with solution

$$\Xi_{TD} = \big(\Phi^\top D(\Phi - \gamma P_\pi\Phi)\big)^{-1}\Phi^\top D\,\Phi.$$

Discounting by η. We similarly denote an ηγ-discounted SF, $\psi^{\eta}(s) = \mathbb{E}_\pi\big[\sum_{n=0}^{\infty}(\eta\gamma)^{n}\,\phi_{t+n}\mid S_t = s\big]$, with the analogous fixed point $\Xi^{\eta}_{TD}$ obtained by replacing γ with ηγ.

Reward regression solution. Given a linear reward function with parameters w estimating the instantaneous reward, $r_w(s) = \phi(s)^\top w$, and an experience tuple $(\phi_t, R_{t+1}, \phi_{t+1})$, reward learning follows a supervised (least-squares) update. The reward regression solution is $\hat w = \big(\Phi^\top D\,\Phi\big)^{-1}\Phi^\top D R$.

SF value. We construct the SF value estimate as a dot product, $v_\psi(s) = \psi_\pi(s)\cdot w$; written in matrix form, $v_\psi = \Phi\,\Xi\,\hat w$. At the parameters' respective fixed points, we recover the value (TD) fixed point from C.1, $\Xi_{TD}\,\hat w = \theta_{TD}$. Similarly, we have $\Xi^{\eta}_{TD}\,\hat w = \theta^{\eta}_{TD}$.
C.3 Linear η-return mixture value fixed point
We now consider doing value learning with the η-return mixture. Given an experience tuple $(\phi_t, R_{t+1}, \phi_{t+1})$, hyper-parameter η ∈ [0, 1], and some SF and reward parameters Ξ, w, the one-step η-return mixture induces an update on θ; written in matrix form, this iteration solves a linear fixed-point system analogous to those above, whose solution coincides with the TD fixed point when Ξ and w are at their own fixed points.

D.1 Value prediction in a deterministic chain

We set up the 16-state deterministic chain. As everything is deterministic, we set the learning rate α = 1.0 so new information can be learned right away. Thus the main point here is to see the speed of the best possible (one-step transition-based) credit propagation.
D.2 Value prediction in a random chain
We compare the algorithms in a prediction setting in the 19-state random walk chain with tabular features. We train in the online incremental setting: the agent receives a stream of episodic experiences $(S_1, R_1, S_2, R_2, \ldots)$ and updates its parameters immediately upon receiving the most recent one-step experience tuple (for example, $(S_{t-1}, R_t, S_t)$ at timestep t). Table 1 details the parameters tested. Figure 5 details the architecture design of a deep Q network (based on a DQN architecture) trained using an η-return mixture to do nonlinear action-value function approximation. The corresponding pseudo-code is detailed in algorithm 2.
Algorithm 2 (excerpt): copy the features with stop-gradient (sg), $\phi^{de}_k \leftarrow (\phi_k).sg()$; compute the successor-features TD learning loss; compute the reward supervised-learning loss; construct the η-return mixture, $q^{\eta}(s_{k+1}, a') = (1-\eta)\, q_\theta(\psi_{k+1}, a') + \eta\, r_w(\psi_{k+1})$.

Figure 5: Architecture for the η-return mixture augmented Deep Q Network. (A) Base architecture: we augment a DQN-like architecture (encoder and action-value head) with two additional heads for SF prediction and instantaneous reward prediction. (B) Training with the η-return mixture: given an experience tuple $(S_t, A_t, R_{t+1}, S_{t+1})$, we use $S_{t+1}$ to generate the next-step estimate $Q^{\eta}(S_{t+1}, \cdot)$ in constructing the target, and $S_t$ to generate the current-step predictions. Training is done by minimizing the (MSE) loss between the predictions and targets.

We use the MinAtar environment (Young and Tian 2019). We build our "deep η-Q" agent based on the DQN provided by Young and Tian (2019) in examples/dqn.py. We implement the same DQN architecture and replicate, to the best of our abilities, the same hyperparameters as Young and Tian (2019), which were built to mimic the architecture and training procedure of the original DQN, albeit miniaturized for the smaller Atari environments. Unlike the original DQN, training is done every frame, using the PyTorch implementation of the RMSprop optimizer. We report the architecture details in figure 5 and the training hyperparameters in table 2. Specifically, the main text results presented in figure 3 follow exactly the hyperparameters reported in table 2, along with evaluations for a parameter study over η ∈ {0.0, 0.4, 0.5, 0.7, 0.95, 1.0}. Each setting was conducted for 10 independent runs with seeds {2, 5, 8, 11, 14, 17, 20, 23, 26, 29}.
Main text figure 4 conducts a parameter study on the learning rates of the individual components of the deep η-Q agent: the value head and convolutional torso (α θ , these two components make up exactly the "vanilla" DQN), the successor feature head (α Ξ ), and the reward prediction head (α w ). We investigated learning rates {0.00025, 0.0005, 0.001, 0.0025, 0.005}, and also compare our deep η-Q agent (with intermediate η = 0.4) against a "vanilla" DQN agent.
Averaging of return is done during training by averaging over the episodic return (total undiscounted reward received in a single episode) of 10 episodes. The steps are "binned" into increments of length 1e4 to account for the fact that different runs will generate episodic returns at different environmental steps, making it difficult to compute confidence interval in a "per-step" way. That is, the logged steps (x-axis of training plots) are rounded to the nearest multiple of 1e4 for all runs.
E.1 Measuring representation collapse
For deep Q learning, we learn the feature representation φ(·) simultaneously with the successor features, action-values, and rewards (which are based on the learned feature layer). As our feature representation is shaped by back-propagated gradients from the action-value and reward heads (see figure 5 and algorithm 2), we measure the informativeness of the learned representation for different values of η (of the η-return mixture). Concretely, we measure the effective rank (srank) of the features learned after 5e6 training environment steps,

$$\mathrm{srank}_\delta(\Phi) = \min\Big\{k : \frac{\sum_{i=1}^{k}\sigma_i(\Phi)}{\sum_{i=1}^{d}\sigma_i(\Phi)} \ge 1-\delta\Big\},$$

with $\sigma_i(\Phi)$ being the i-th singular value of matrix Φ, in decreasing order. We set δ = 0.01, similar to prior work. Since we do not have access to the full feature matrix for MinAtar, we approximate Φ by sampling a large (n = 2048) minibatch of samples from the replay buffer and encoding them using the convolutional torso to obtain a matrix $\hat\Phi \in \mathbb{R}^{n\times d}$, n = 2048, d = 128. We measure the averaged srank for 8 sampled minibatches per run, though the standard deviation between the independently sampled minibatches is low. Figure 6 (bottom) reports the srank for the same models as figure 3 (we duplicate figure 3-A in the top row there). The maximum possible srank achievable is 128 (i.e. the feature dimension, d), with lower srank indicating that the learned feature representation is less informative. We observe that in general, srank is high (> 75) and similar for η's up to η = 0.7 (with the exception of freeway, to be discussed later). However, for high η's (η = {0.95, 1.0}), we observe a decrease in srank for 4/5 MinAtar games, with η = 1.0 having sranks that tend towards 0. In the case of freeway, srank decreases monotonically as we increase η. We hypothesize this is the result of sparse reward in freeway. Since the feature layer is shaped in part by reward gradients, sparse reward may push the features to be less informative, an issue that is worsened as we depend more on feature prediction rather than value prediction with higher η's.
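This measurement is straightforward to reproduce from the singular values of a sampled feature matrix; a small NumPy sketch (names illustrative):

```python
import numpy as np

def srank(phi_matrix, delta=0.01):
    """Effective rank: smallest k whose top-k singular values capture a (1 - delta)
    fraction of the total singular-value mass of the (n x d) feature matrix."""
    s = np.linalg.svd(phi_matrix, compute_uv=False)   # singular values, decreasing order
    cum = np.cumsum(s) / np.sum(s)
    return int(np.searchsorted(cum, 1.0 - delta) + 1)

# e.g. encode a 2048-sample minibatch into a (2048, 128) matrix Phi and call srank(Phi)
```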
Importantly, we observe that the evaluation performance is related to the feature srank. Specifically, in cases where the feature srank is similar, an intermediate η value out-performs "extreme" values of η (e.g. η = 0). However, higher η appears to suffer from representation collapse, which worsens performance, especially in sparse reward settings. The issue of learning a good representation for successor feature learning can be addressed using auxiliary objectives (such as image reconstruction or next-state prediction). We leave the interplay between the η-return mixture and additional feature-learning auxiliary tasks for future investigation.
Distinguishing between Warm and Stratiform Rain Using Polarimetric Radar Measurements
Modeled statistical differential reflectivity–reflectivity (i.e., ZDR–Ze) correspondences for no bright-band warm rain and stratiform bright-band rain are evaluated using measurements from an operational polarimetric weather radar and independent information about rain types from a vertically pointing profiler. It is shown that these relations generally fit observational data satisfactorily. Due to a relative abundance of smaller drops, ZDR values for warm rain are, on average, smaller than those for stratiform rain of the same reflectivity by a factor of about two (in the logarithmic scale). A ZDR–Ze relation, representing a mean of such relations for warm and stratiform rains, can be utilized to distinguish between warm and stratiform rain types using polarimetric radar measurements. When a mean offset of observational ZDR data is accounted for and reflectivities are greater than 16 dBZ, about 70% of stratiform rains and approximately similar amounts of warm rains are classified correctly using the mean ZDR–Ze relation when applied to averaged data. Since rain rate estimators for warm rain are quite different from other common rain types, identifying and treating warm rain as a separate precipitation category can lead to better quantitative precipitation estimations.
Introduction
Warm rain is formed mostly by the coalescence of cloud water droplets into rain drops, taking place primarily in the atmosphere at temperatures above 0 °C (i.e., the freezing-level temperature). Warm rain precipitation events have traditionally presented a challenge for weather radar-based quantitative precipitation estimation (QPE) [1]. Compared to stratiform rain drops, which typically originate from melting snowflakes that form and grow above the freezing level height, warm rain drops (for approximately the same rain rates) are, on average, much smaller and more numerous [3], with some relatively rare exceptions [2].
The atmospheric layer where snowflake melting takes place is usually manifested by the reflectivity enhancement, which is also known as the radar bright band, so stratiform rain is sometimes called bright-band (BB) rain, and warm rain is referred to as non-bright band (NBB) rain [3]. Reflectivity bright bands are usually accompanied by a drop in the correlation coefficient between horizontally and vertically polarized radar returns [4]. At higher radar frequencies, the attenuation of radar signals in liquid phase can enhance BB for nadir pointing measurements [5].
Due to differing drop size distribution (DSD) shapes, warm NBB and stratiform BB rains are characterized by distinctly different average relations between the equivalent radar reflectivity factor (hereafter just reflectivity, $Z_e$) and rain rate, R (i.e., $Z_e = aR^b$ relations) [3,6]. Such relations are often utilized for operational radar-based QPE using the multi-radar multi-sensor (MRMS) approach [7]. The coefficients a in the warm NBB rain $Z_e = aR^b$ relations are typically a factor of about 3 (on average) smaller than those for stratiform rain, while the exponents b can be close. This can lead to about 40% total rain accumulation underestimation when default stratiform BB rain or convective rain $Z_e$-R relations are applied to warm NBB rain [6].
While rain rate estimators that use polarimetric radar variables (e.g., specific differential phase, K DP ) can potentially provide more accurate QPE retrievals [8,9], applying these estimators for warm NBB rain is challenging, as smaller rain drops are more spherical than larger ones [10]; thus, dual-polarization signatures of such rain, especially at lower weather radar frequencies (e.g., at S-band) are often rather noisy [3]. As a result, reflectivity-based precipitation rate estimators are often used in practice. However, warm rain accumulations are climatologically important because the contributions of such rain to total annual liquid precipitation amounts are usually on the order of 20% on the US West coast [11]. Warm rain is more frequent over ocean [12,13].
For radar-based QPE improvements, it is important to differentiate between different rain types. Segregation procedures between stratiform and convective rain exist and are already incorporated in the MRMS approach [14]. Recently, mean statistical relations between differential reflectivity (i.e., $Z_{DR}$, which is defined as the logarithmic difference between horizontal and vertical polarization reflectivities) and reflectivity, $Z_e$, for warm NBB and stratiform BB rains were suggested [6]. These relations, which allow for statistical differentiation between these types of rain, were developed theoretically using observed DSDs. The objective of the current study was to evaluate these theoretically derived mean warm and stratiform rain $Z_{DR}$-$Z_e$ relations against operational measurements from an S-band (wavelength ≈ 10 cm) Weather Surveillance Radar-1988 Doppler (WSR-88D) unit when the warm-stratiform rain segregation is independently known. Of particular interest was assessing the efficacy of differentiating between these rain types using observational data, which have measurement uncertainties and, possibly, biases.
Data and Methods
The mean statistical S-band $Z_{DR}$-$Z_e$ relations for a low radar beam tilt (≈0.5°) found in [6] through modeling using observed DSDs and independent identification of rain types are power-law relations (1) and (2); for stratiform BB rain,

$$Z_{DR} = 6.8\cdot10^{-5}\, Z_e^{2.68}, \qquad (1)$$

with an analogous relation (2) for warm NBB rain, where $Z_{DR}$ is in decibels (dB) and $Z_e$ is in decibels relative to 1 mm⁶ m⁻³ (dBZ). These relations were derived using one-year long observations from the National Oceanic and Atmospheric Administration (NOAA). Relations (1) and (2) were developed in [6] using concurrent measurements from vertically pointing S-band profilers, which were utilized to independently segregate warm NBB and stratiform BB rain profiles, and calculations of $Z_e$ and $Z_{DR}$ using DSDs measured by quality-controlled Particle Size and Velocity (Parsivel) optical disdrometers [15], which were deployed next to the profilers. Reflectivity and differential reflectivity values (in linear units) were derived using the formulas

$$Z_{e(h,v)} = \frac{\lambda^4}{\pi^5 |K_w|^2}\,\big\langle \sigma_{h,v} \big\rangle, \qquad Z_{DR} = 10\log_{10}\!\big(Z_{e(h)}/Z_{e(v)}\big),$$

where λ is the radar wavelength, $|K_w|^2 \approx 0.93$, and $\sigma_h$ and $\sigma_v$ are backscatter cross-sections for horizontal and vertical polarizations, which are drop size and shape dependent; the angle brackets denote the summation over DSD bin sizes in a unit volume and averaging over drop orientations. Drop shapes were approximated by oblate spheroids with aspect ratios dependent on drop size [10]. It was assumed that the distribution of zenith angles of drop symmetry axes was Gaussian with a 0° mean value and a 10° standard deviation. Size-dependent drop fall velocities, which are needed to calculate drop concentrations from disdrometer bin counts, were adopted from [16], and the backscatter cross-sections were calculated using the T-matrix method [17]. It can be seen from relations (1) and (2) that, for the same values of reflectivity, differential reflectivity values for warm NBB rain are, on average, about a factor of 2 (in the logarithmic scale) smaller than those for stratiform rain. This is because the former rain type contains larger relative amounts of smaller, more spherical drops than the latter type. As shown in [6], the mean $Z_{DR}$-$Z_e$ relations for these rain types have relatively little sensitivity to the choice of existing drop shape models, which relate average rain drop oblateness to size.
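As an illustration of how such a mean relation could serve as a decision boundary between the two rain types, the short sketch below scores averaged (Ze, ZDR) pairs against a curve halfway between the stratiform relation (1) and a warm-rain curve. Because relation (2) is not reproduced above, the warm-rain coefficient here is a placeholder derived from the stated factor-of-two difference, and the ZDR offset parameter stands in for the mean observational bias discussed later; none of the names are from the paper.

```python
import numpy as np

def zdr_stratiform(ze_dbz):
    """Stratiform BB relation (1): ZDR (dB) as a function of Ze (dBZ)."""
    return 6.8e-5 * ze_dbz**2.68

def zdr_warm(ze_dbz):
    """Placeholder for the warm NBB relation (2): about a factor of two lower in dB."""
    return 0.5 * zdr_stratiform(ze_dbz)

def classify_rain(ze_dbz, zdr_db, zdr_offset=0.0):
    """Label averaged (Ze, ZDR) pairs: below the mean curve -> warm NBB, above -> stratiform BB."""
    mean_curve = 0.5 * (zdr_stratiform(ze_dbz) + zdr_warm(ze_dbz))
    return np.where(zdr_db - zdr_offset < mean_curve, "warm NBB", "stratiform BB")

# example: classify_rain(np.array([20.0, 30.0]), np.array([0.10, 0.80]))
```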
The operational weather radar data used in the current study came from the Portland, Oregon WSR-88D unit, which has the four-letter identifier KRTX. The KRTX radar is located at an altitude of about 0.5 km above mean sea level (MSL) at 45.715°N, 122.965°W. This radar is part of about 160 S-band polarimetric Doppler radars operated by the United States National Weather Service (NWS). These radars employ the simultaneous transmission and simultaneous reception of horizontally and vertically polarized waves. Level II WSR-88D reflectivity and differential reflectivity data, which are used in this study, are available from the Next-Generation Radar (NEXRAD) archive at https://www.ncdc.noaa.gov/nexradinv/.
The NOAA Physical Sciences Laboratory (PSL) operated a temporary site at Troutdale (TDE), Oregon (45.5535°N, 122.3864°W, altitude 0.012 km MSL) as part of the Hydrometeorology Testbed (HMT) deployment to study atmosphere dynamics in the Columbia River gorge. Measurements from a NOAA PSL vertically pointing S-band profiler (S-Prof) [18] deployed at that site were used to independently differentiate between various types of rain, and the operational KRTX measurements above the TDE site were used to collect observational $Z_{DR}$-$Z_e$ correspondence data.
The map locations of the KRTX and TDE sites are shown in Figure 1. The TDE site was located at a distance of about 48 km in the 111° azimuthal direction from the KRTX radar location. The data from the TDE observational site are available from the PSL HMT data archive at https://psl.noaa.gov/data/obs/datadisplay/archive/InactiveSites.html. A procedure to differentiate among different rain types using vertically pointing S-Prof data is based on analyzing the vertical profiles of radar moments and is described in [6]. Stratiform BB, warm NBB, and deep convective rain types are differentiated using this procedure. In general, rain type partitioning does not require absolute calibration of the vertically pointing radar.
In a precipitation measurement mode, WSR-88D units perform volume scans consisting of plan position indicator (PPI) measurements at different beam tilts, which generally range from 0.5° to 20° radar beam elevations. The WSR-88D beam width is about 1° (at a 3 dB level). Only the lowest beam tilt measurements from the KRTX radar were used in this study, because radar resolution volumes at higher beam tilts often were (at least partially) within regions of solid and/or melted precipitation. One volume scan takes about 6 min, which was the time interval between two consecutive lowest tilt KRTX measurements above the TDE site. Figure 2 shows an example of a precipitation event observed at the TDE site by the vertically pointing S-Prof radar. In this example, the stratiform bright-band rain over the TDE site was observed during the time interval from about 13:00 UTC on 9 March 2016 to 10:00 UTC on 10 March 2016. The precipitating cloud system was very deep between these times, with radar echo heights reaching 10 km MSL. The top of the radar bright-band is indicative of the freezing level height, which generally separates ice and melting hydrometeors. According to meteorological observations at the ground (not shown), the near-surface air temperatures throughout the event varied from about 7 to 14 °C with a warm front passage occurring approximately at 09:00 UTC on 10 March 2016. For this event, the KRTX polarimetric radar data were mostly collected from a layer of liquid precipitation below the layer of melting hydrometeors.
Identification of Different Rain Types
Warm NBB rain during the event shown in Figure 2 was observed during a time period between about 02:00 and 10:00 UTC on 9 March 2016. Although radar echoes for warm NBB rain could extend somewhat higher than the environmental freezing level due to atmospheric updrafts, the precipitation formation is still dominated by warm-rain processes.
Short periods of convective rain were also present during this rain event (mostly after 11:00 UTC on 10 March 2016), with the most significant one occurring at approximately 22:00 UTC on 10 March 2016. As with warm rain, deep convective rain does not exhibit the bright-band, but its radar returns with high reflectivity cores reach much further above the environmental freezing level, and the ice phase plays an important role in the convective precipitation processes.
The KRTX measurements for the lowest radar beam are sampled at a 0.25 km resolution. KRTX reflectivity and differential reflectivity values observed within a 1 km range from the TDE location and within 1° of the KRTX-TDE azimuthal direction were averaged in order to reduce measurement noise. To avoid ground clutter, the KRTX data were taken into consideration only when the copolar correlation coefficients between horizontally and vertically polarized radar echoes were greater than 0.9. The corresponding averages were used for further analysis. Estimates of the upper and lower KRTX radar beam edges are also shown in Figure 2. These estimates were calculated accounting for mean atmospheric refraction and Earth's sphericity [19]. To avoid contamination of the rain layer radar variables by melting hydrometeors, the stratiform BB rain time periods were considered only when the upper KRTX beam edge estimates were lower than the bright band bottom by at least 0.2 km.
Mean ZDR-Ze Correspondences for Different Rain Types
Precipitation events observed by the KRTX polarimetric operational radar over the TDE site during the January 2016-March 2017 period, when the independent information on rain-type partitioning from the S-Prof measurements was available, were further analyzed. To mitigate possible partial KRTX beam-filling effects, precipitation occurrences with a spatial coverage over 10 km and continuously lasting over 1 hour were considered. Figure 3 shows cumulative frequency scatter plots of observed KRTX differential reflectivity-reflectivity correspondences for stratiform BB and warm NBB rains. The best fit mean ZDR-Ze relations found previously in [6] for these two rain types through modeling are also shown.
It can be seen from Figure 3 that the reflectivities and differential reflectivities observed by the KRTX radar, on average, align well with the previously modeled mean theoretical ZDR-Ze correspondences for stratiform BB and warm NBB types of rain. This is an important result, especially given the fact that the theoretical relations were obtained using DSDs observed during the HMT Southeastern (HMT-SE) United States deployment, but the radar variables were observed by the radar near the U.S. West Coast. This fact indicates that a potential polarimetric radar-based differentiation between these rain types could have a rather general applicability.
Figure 3. Cumulative frequency scatter plots of observed KRTX ZDR-Ze correspondences for stratiform BB and warm NBB rains, together with the mean relations (1) and (2) found through modeling in [6].
According to the results of theoretical modeling in [6], a ZDR-Ze relation that for the most part separates warm NBB from stratiform BB rain types can be given as Equation (5), which approximately corresponds to the mean of Equations (1) and (2). Most (≈80%) of stratiform rains exhibit theoretical differential reflectivities greater than those expressed by Equation (5), while most of the warm NBB rains are characterized by ZDR values which are smaller than that. As with the modeled radar variables, observational data from the operational KRTX radar also show that, on average, for a given reflectivity value, differential reflectivities of stratiform BB rains are greater by a factor of approximately 2 compared to warm NBB rains (when ZDR is in the logarithmic scale). However, radar measurement data are often quite noisy (especially differential reflectivity data at small ZDR values). As can be seen from Figure 3, the observational data scatter of ZDR-Ze correspondences is rather substantial. This scatter is most significant for warm NBB rain at lower reflectivity values. On average, warm NBB rain reflectivities are smaller than those for stratiform BB rain. It is instructive to estimate the effectiveness of relation Equation (5) for differentiation between warm and stratiform rain using observational WSR-88D data. Differential reflectivity measurements can be especially noisy for small ZDR values and subject to biases, as they are especially difficult to calibrate in the absolute sense for radars which, like WSR-88D systems, do not have an option of pointing the radar beam vertically [20]. A general noisiness of differential reflectivity measurements is evident from Figure 3, since there are some negative values, whereas positive ZDR values are expected in rain measurements.
To assess an average ZDR bias in KRTX measurements, the mean differential reflectivity values for low radar reflectivities, which are typical for drizzle-like rain (i.e., 5 dBZ < Ze < 10 dBZ), were calculated. Since drizzle drops are practically spherical, their ZDR values are expected to be around 0 dB. Modeling results with observational DSDs and realistic rain drop shape models in [6] also indicate that mean ZDR values are less than 0.1 dB when reflectivities are less than about 15 dBZ. However, for low reflectivity KRTX measurements, a mean differential reflectivity value was found to be approximately 0.3 dB. Then, this value was assumed to be a mean ZDR offset (bias) for the KRTX radar dataset considered in this study. This is consistent with a few tenths of 1 dB positive ZDR offset, which is also present between the mean relations Equations (1) and (2) and the observational data in Figure 3.
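As a rough illustration of this offset check, the sketch below (not part of the original processing chain; it assumes matched NumPy arrays of Ze and ZDR samples) averages the measured differential reflectivity over the drizzle-like 5-10 dBZ reflectivity range, where ZDR should be close to 0 dB.

```python
# Hedged sketch of the drizzle-based ZDR offset estimate described above.
# `ze_dbz` and `zdr_db` are assumed to be matched 1-D arrays of radar samples.
import numpy as np

def estimate_zdr_offset(ze_dbz: np.ndarray, zdr_db: np.ndarray) -> float:
    """Mean ZDR for 5 dBZ < Ze < 10 dBZ, taken as the radar's ZDR bias."""
    drizzle = (ze_dbz > 5.0) & (ze_dbz < 10.0)
    return float(np.mean(zdr_db[drizzle]))

# For the dataset discussed in the text, such an estimate came out near 0.3 dB,
# which is then subtracted from the observed ZDR values before further analysis.
```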
After accounting for the mean 0.3 dB differential reflectivity offset found for low reflectivity drizzle-like rains, the mean residual biases of observational ZDR values versus those predicted by modeling, as given by Equations (1) and (2), are 0.05 dB and 0.12 dB for stratiform BB and warm NBB rains, respectively. Corresponding standard deviations are 0.38 dB (for stratiform BB rains) and 0.4 dB (for warm NBB rains).
It is of particular interest to assess from WSR-88D observations (i.e., those shown as scatter plots in Figure 3) how well the relation Equation (5) can segregate stratiform BB and warm NBB rains. The analysis shows that Equation (5), where Ze and ZDR data are from the KRTX measurements, segregates the warm NBB and stratiform BB rain types (as independently inferred from the S-Prof measurements) with an overall effectiveness of about 65% when observed reflectivity values are greater than 16 dBZ. About 67% of observed stratiform BB rains exhibited ZDR values that are greater than the one prescribed by Equation (5), while about 63% of detected warm NBB rain had differential reflectivities smaller than that. Without accounting for the differential reflectivity offset, the above percentages of the correct rain type differentiation using observed radar variables diminish by approximately 5 percentage points.
A decrease in the overall effectiveness of segregating warm NBB and stratiform BB rain types using polarimetric radar observations compared to the results of theoretical modeling (i.e., 65% vs. 80%) can be attributed, in part, to uncertainties and noisiness of radar measurements. One way to increase the effectiveness of rain type identifications is to perform some additional averaging of reflectivity and differential reflectivity measurements. Applying the segregation approach suggested here to additionally averaged data shows that averaging nine neighboring Ze-ZDR data pairs increases the correct identification of the rain type (as determined from the profiler measurements) to about 70%. While some additional averaging improves the effectiveness of the BB and NBB rain type segregation, too much averaging can produce occurrences when measurements from both rain types are present in the same sample of Ze-ZDR data.
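A minimal sketch of how such a segregation could be coded is given below. The separating curve of Equation (5) is not reproduced in this text, so it enters only as a user-supplied function (an assumption of the sketch), while the 0.3 dB bias correction, the 16 dBZ reflectivity floor, and the nine-sample averaging follow the procedure outlined above.

```python
# Hedged sketch of the ZDR-Ze based BB/NBB rain-type segregation.
# `zdr_threshold` must implement the separating relation (Equation (5)) from [6];
# it is not reproduced here and must be supplied by the reader.
from typing import Callable
import numpy as np

def classify_rain(ze_dbz: np.ndarray, zdr_db: np.ndarray,
                  zdr_threshold: Callable[[np.ndarray], np.ndarray],
                  zdr_offset: float = 0.3, n_avg: int = 9) -> np.ndarray:
    """Label each averaged Ze-ZDR pair as stratiform 'BB' or warm 'NBB' rain."""
    zdr = zdr_db - zdr_offset                          # remove the measurement bias
    n = (len(ze_dbz) // n_avg) * n_avg                 # average neighboring samples
    ze_m = ze_dbz[:n].reshape(-1, n_avg).mean(axis=1)
    zdr_m = zdr[:n].reshape(-1, n_avg).mean(axis=1)
    labels = np.where(zdr_m > zdr_threshold(ze_m), "BB", "NBB")
    return np.where(ze_m > 16.0, labels, "undetermined")   # rule used above 16 dBZ
```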
Differences in Ze-R Estimators for Warm and Stratiform Rains
A Parsivel disdrometer was added to the TDE observational site instrumentation suite at a later stage of the deployment. Drop-size distribution data from disdrometer measurements were used for calculating Ze-R estimators, which are characteristic for the observed warm NBB and stratiform BB rains. Figure 4a shows an example of vertically pointing S-Prof measurements during a TDE precipitation event observed on 9 March 2017 at a time when disdrometer measurements were available.
A clear separation of the warm NBB (i.e., approximately between 11:00 and 15:00 UTC) and stratiform BB (i.e., after 15:00 UTC) rain types is obvious from the data in Figure 4a. For this observation event, Ze-R estimators corresponding to these rain types are shown in Figure 4b. They were derived using disdrometer DSD-based modeling of radar reflectivity and rain rate.
For comparisons, Figure 4b also shows mean Ze-R estimators for warm and stratiform rain types obtained in [6] using DSD data from the EWN Southeastern United States observational site. It can be seen from Figure 4b that differences in Ze-R estimators for warm and stratiform rains are more significant than those between relations for the same rain type but from different observational sites. Underestimation of warm rain rates could be as much as factors of 2 and 3 (for reflectivities of 20 dBZ and 35 dBZ, respectively) if a stratiform rain estimator is used for QPE. Thus, identifying warm NBB rain as a separate precipitation category and applying rain rate estimators which are appropriate for this category could lead to improvements in radar-based QPE retrievals.
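The sensitivity to the choice of estimator can be illustrated with a generic power-law Ze = a R^b sketch such as the one below. The coefficient values are hypothetical placeholders, not the estimators derived in [6] or from the TDE disdrometer; they only illustrate how applying a stratiform-type estimator to warm rain produces systematically lower rain rates.

```python
# Illustrative sketch only: rain rate from reflectivity via Ze = a * R**b.
# The (a, b) pairs below are hypothetical and do NOT reproduce the estimators
# discussed in the text; they just show the mechanics of the comparison.
def rain_rate_mm_per_h(ze_dbz: float, a: float, b: float) -> float:
    ze_linear = 10.0 ** (ze_dbz / 10.0)        # dBZ -> mm^6 m^-3
    return (ze_linear / a) ** (1.0 / b)

a_strat, b_strat = 300.0, 1.5                  # hypothetical stratiform estimator
a_warm, b_warm = 140.0, 1.3                    # hypothetical warm-rain estimator
for ze in (20.0, 35.0):
    ratio = rain_rate_mm_per_h(ze, a_warm, b_warm) / rain_rate_mm_per_h(ze, a_strat, b_strat)
    print(f"Ze = {ze:.0f} dBZ: warm-rain estimate is {ratio:.1f}x the stratiform estimate")
```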
Discussion and Conclusions
Correspondences between radar reflectivity and differential reflectivity values observed in liquid precipitation can effectively be used to differentiate between warm no bright-band and stratiform bright-band rain. The mean S-band Z DR -Z e relations for warm and stratiform rain, which were previously found using data through theoretical modeling with observed DSDs and independent information on rain type partitioning, were found to be generally applicable to measurements from the operational NWS polarimetric WSR-88D KRTX system. For both modeling and observational data, differential reflectivity values for warm NBB rain are, on average, by a factor of about 2 smaller (if expressed in decibels) than those for stratiform BB rain (for the same value of reflectivity).
The Z DR -Z e relation Equation (5), which approximately represents an average of the theoretically found relations for warm NBB and stratiform BB rains, can be used to segregate these rain types. The segregation method suggests that if the observed differential reflectivity (for a given observed Z e value) is larger/smaller than that found from the relation Equation (5), then the observed rain type is identified as stratiform BB/warm NBB rain. Convective rain periods in radar observations could be identified [14] prior to applying the method suggested here.
Statistical evaluations of the BB-NBB rain differentiation method were performed on the independent data set consisting of measurements by the operational weather radar when rain types were known from profiler measurements. These evaluations revealed that the suggested method correctly identifies rain type for about 70% of observed Z DR -Z e pairs if a differential reflectivity measurement bias in operational weather radar measurements is approximately accounted for and an averaging of reflectivity and differential reflectivity data is performed. This ≈70% effectiveness estimate provides a probabilistic measure to a "binary" decision rule for distinguishing between NBB and BB rain types based on the relation shown in Equation (5). Due to measurement uncertainty of polarimetric radar variables, this effectiveness estimate is smaller than the about 80% efficacy previously found for modeling data.
The fact that ZDR-Ze relations, which were found through modeling using DSDs observed during a one-year long deployment in the Southeastern United States [6], fit well, on average, the observational data from an operational weather radar on the West Coast of the United States points to a general robustness of the warm-stratiform rain separation method based on the mean correspondence between reflectivity and differential reflectivity. It also indicates that this method could have a broader potential utility for enhancing radar-based QPE by adding a warm-stratiform rain differentiation in addition to currently existing procedures to identify convective rain. The method suggested here should be used for rain observations with lowest radar elevation beam measurements that are sufficiently below the environmental freezing level to avoid contaminations by melting and ice hydrometeors. Higher elevation beam measurements are often prone to such contaminations.
Ze-R estimators for warm NBB rain suggest significantly higher rain rates for the same reflectivity values compared to stratiform BB rains. The variability between warm rain estimators obtained from different DSD datasets is generally smaller than the differences between mean warm and stratiform rain estimators. Differentiating warm rain as a separate rain type and applying appropriate rain rate estimators could lead to better radar-based QPE.
Funding: This research was funded by the NOAA Physical Sciences Laboratory "Columbia river regional project".
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data used in this study are available from the NEXRAD and NOAA PSL archives.
|
2021-02-12T20:19:10.049Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "b741a96ebde1b5ba32fec9c6e161fae55f7a3bbe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/13/2/214/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b741a96ebde1b5ba32fec9c6e161fae55f7a3bbe",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
7249018
|
pes2o/s2orc
|
v3-fos-license
|
ASSESSMENT OF MEDICAL COURSES IN BRAZIL USING STUDENT-COMPLETED QUESTIONNAIRES. IS IT RELIABLE?
Introduction: Debates about the quality of medical education have become more evident in the recent past, and as a result several different assessment methods have been refined for that purpose. The use of questionnaires filled out by medical students to assess the quality of lectures is one of the most common methods employed in our milieu. However, the reliability of this investigation method has not yet been systematically tested. The authors present the reliability of a specific form applied to fourth grade medical students during the clinical psychiatry course. Method: Eighty-one fourth grade medical students were instructed to complete a form immediately after each clinical psychiatry lecture. Thirty-four students (42%) failed to turn in the forms after the final lecture. These students were given an identical form to assess the lectures in a retrospective fashion. The grades given by both groups of students for each performed lecture and the number of students who graded an unperformed lecture were compared. Statistical significance for both groups was determined by means of the chi-square test (p < 0.05). Results: Eighteen out of the 34 students who filled out the forms retrospectively (53%) rated the unperformed lecture, whereas only 5 out of the 47 students who filled out the forms during the course (11%) did so. This is statistically significant (p < 0.05). There was no statistical difference for the grades given to the lectures that were actually performed. Discussion: The authors concluded that the low reliability of the retrospective evaluation warrants a continuous assessment method during the course.
A growing interest in education quality in Brazil has become evident since the Ministry of Education implemented the national test a few years ago. The national test aims at assessing the quality of high school and college education in this country. Important issues such as how to improve the education quality as well as the usefulness of the national test have been examined. The public believes that these assessment methods are highly necessary to improve the quality of education. However, the outcome of a test applied to the students has a questionable value in the assessment of the education rendered to learners. Some authors advocate that the entire education process should be assessed as opposed to appraising its final outcome only. A few large-scale endeavors like the CINAEM - "Comissão Interinstitucional Nacional de Avaliação das Escolas Médicas" have been made to date. On the other hand, smaller endeavors evaluating the quality of lectures in a given discipline within an educational institution are a current practice.
The literature lists several investigations in which the quality of the education is assessed by means of direct inquiries to the students. Teaching methods 11,12, skills and attitude of faculty members 3,4,6,8 and teaching settings 1,5, in addition to other issues 2,7,9,10, have been appraised in these investigations. A number of those investigations report on qualitative evaluations. However, other investigations employ research instruments for which reliability and validity of the outcomes are the main concerns 1,3,4,6,7,9,12. These research instruments are too specific (instruments for assessing learning environment 1,5, a measure of medical instructional quality in ambulatory settings 7, a measure of the faculty staff attitude toward students' creativity 6, etc...) and therefore, none of them is adequate for general use.
The most commonly used method in our milieu consists of forms with questions that are answered by the students at the end of a specific course of study. These questions gather the students' opinions about the quality of the lectures rendered to them. However, despite the widespread use of these questionnaires, it is not really known how thoughtfully the students answer the questions, thus jeopardizing the original purpose. Additionally, there are no investigations reporting on the reliability of these methods.
The authors report on the reliability of a specific form applied to medical students during a clinical psychiatry course.
MATERIALS AND METHODS
Clinical psychiatry is a discipline taught to fourth grade medical students over 7 weeks. Tutoring is rendered twice a week, one morning from 8 to 12 AM and one afternoon from 2 to 6 PM. Three theoretical lectures a week are given, and the remaining 2-hour period is used for seeing in-patients on the psychiatric floor.
Eighty-one subjects participated in this investigation during the first semester of 1995. Appraisal of the lectures was performed by means of a form handed out to the students during the opening lecture. Students were asked to fill out the assessment form soon after each performed lecture.
Forms that were filled out were handed back to one of the authors soon after the last lecture before the final test.The students who happened to be without a personal copy were asked to fill out a supplementary copy at that time.
The forms (Fig. 1) portrayed each lecture by the title and the lecturer's name (in Figure 1 the lecturer's name is not mentioned). There were 5 boxes reading very good, good, regular, bad, and no grade. It was said to the students that the no grade box should be used whenever one of the following was the case: the specific lecture did not occur, the student did not attend the lecture, or the student did not want to issue an opinion about the given lecture.
The lecture entitled "Normal Emotional Development in Childhood and Adolescence" was canceled.
The data collected from forms that were filled out prospectively were compared to the forms filled out retrospectively after the last lecture. Individual lecture grading and the number of respondents in each group who rated the unperformed lecture were analyzed.
RESULTS
A total of 81 students participated in the study. Forty-seven students (58%) delivered the filled out forms as initially indicated. Thirty-four students (42%) filled out a new copy of the forms retrospectively.
Only 5 students who filled out the forms during the course (11%) rated the unperformed lecture, whereas 18 students (53%) who filled out the forms retrospectively did so (Fig. 2). This is statistically significant (p < 0.05) according to the chi-square test. There was no statistical difference for the grades given to the lectures that were actually performed.
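For readers who want to reproduce the comparison, a short sketch of the test on the reported counts (5 of 47 prospective vs. 18 of 34 retrospective respondents rating the unperformed lecture) is given below; it assumes SciPy is available and is not part of the original analysis.

```python
# Chi-square test on the 2x2 table implied by the results reported above.
from scipy.stats import chi2_contingency

table = [[5, 47 - 5],     # prospective: rated / did not rate the unperformed lecture
         [18, 34 - 18]]   # retrospective
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")   # p < 0.05, as reported
```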
DISCUSSION
There are several methods for assessing the quality of teaching. One of the most effective methods is when the tutor discusses with the students every step of the teaching-learning process. This process is not feasible in a teaching setting where more than one faculty member teaches or when there are a large number of participant students. A second alternative is to ask the students to record in writing their opinion about the course. This usually results in an undesirable number of blank sheets of paper and just a few sharp points about the course itself. Therefore, the use of structured questionnaires for the purpose of assessing teaching quality seems reasonable.
However, a complete questionnaire-based evaluation given by the students does not necessarily correlate with effectiveness. Most of these questionnaires have never been tested for reliability.
Taking into consideration that the method is devised to evaluate the outcome of a course, a grade given to an unperformed lecture can be considered a "false-positive". It is well known that a high rate of false-positives reflects low specificity. Therefore, the testing method in question (if its false-negative rate is low) should only be used as a screening method to indicate which subjects should undergo further testing.
Twenty-three out of 81 students (29%) rated 1 unperformed lecture (a 29% "false-positive" rate). Yet, to make this matter more complex, there is not a more specific test to apply. Whether it was because of lack of attention, motivation, or any other reason, it is possible these 23 students answered the remaining questions in an unheeding fashion, thus jeopardizing the results of the method.
It is also possible to extend this hypothesis to any course evaluation where a similar method is used. It is likely that reliability affects the validity of the assessment method, and if the method aims at reflecting the real quality of the course delivered to the students, the assessing method has to be improved to a higher degree of reliability and validity. On the other hand, there is a statistically significant difference (p < 0.05) between the prospective and retrospective data. The prospective respondents produced a 10% rate of wrong answers whereas the retrospective respondents displayed a more than 50% rate.
Taking into consideration the significance of nearly one-third false-positives, one should question the reliability of the method in reflecting the actual students' opinion. The fact that more than 50% of the students who filled out forms retrospectively rated an unperformed lecture shows a significant problem with retrospective evaluations.
On the other hand, a small percentage of students who completed the questionnaire prospectively (11%) also rated the unperformed lecture, showing that the prospective evaluations are also not 100% reliable. Nevertheless, the results from prospective evaluations were significantly better than those for the retrospective evaluations.
It is clear to us that this type of evaluation, with this type of form, must be performed in a prospective fashion rather than retrospectively to avoid the risk of results that do not reflect the actual situation.
CONCLUSION
The quest for pedagogical improvement aiming at increasing the efficacy of the learning process calls for a constant evaluation of the teaching methods; therefore, the impressions of the students should definitely be taken into account. However, presented with the opportunity to bring forward their impressions, a considerable number of students do not become involved in this process, denying the tutors access to valuable information for improvement.
Structured questionnaires for assessing a course of study are a viable solution for providing the necessary information. However, if a large percentage of the respondents, either by means of lack of interest or attention, produce low quality information, the final outcome of the assessment effort will be compromised.
This study shows that prospective evaluations are better in quality than the evaluations performed in a retrospective fashion. Therefore, we suggest that an assessment system should be continuously administered during the courses to avoid retrospective data collection. Whenever possible, individual lectures should be assessed by attending students immediately after termination of the lecture, assuring that only the individuals qualified to give an opinion will participate.
Introduction: Discussions about teaching quality have become increasingly frequent in our setting, and a variety of assessment methods have been investigated. The use of questionnaires completed by students to evaluate the quality of the lectures delivered is among the methods most often used in our setting; however, their reliability has not been tested. The authors present a reliability assessment of one of these questionnaires, which was developed for a clinical psychiatry course taught during the fourth year of undergraduate medical training.
|
2017-09-18T14:48:36.963Z
|
2000-04-01T00:00:00.000
|
{
"year": 2000,
"sha1": "a71ab67afedd0efba6919e4edcee883b174e00b0",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/rhc/a/7m45pgDDK3yhhTcrhVmXJck/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a71ab67afedd0efba6919e4edcee883b174e00b0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
204746000
|
pes2o/s2orc
|
v3-fos-license
|
A Fast Charging Balancing Circuit for LiFePO4 Battery
In this paper, a fast charging balancing circuit for LiFePO4 battery is proposed to address the voltage imbalance problem of a lithium battery string. During the lithium battery string charging process, the occurrence of voltage imbalance will activate the fast balancing mechanism. The proposed balancing circuit is composed of a bi-directional converter and the switch network. The purpose of the bi-directional converter is that the energy can be delivered to the lowest voltage cell in charging mode. On the other hand, the energy stored in the magnetizing inductors of the transformer can be charged back to the higher voltage cell in recycling mode. This novel scheme includes the following features: (1) The odd-numbered and even-numbered cells in the string with the maximum differential voltage will be chosen for the balancing process directly. In this topology, there is no need to store and deliver the energy through any intermediate or extra storing components. That is, the energy loss can be saved to improve the efficiency, and the fast balancing technique can be achieved. (2) There is only one converter to complete the energy transfer for the voltage balancing process. This concept makes the circuit structure much simpler. (3) The structure has bi-directional power flow and good electrical isolation features. (4) A single chip controller is applied to measure the voltage of each cell to achieve the fast balancing process effectively. At the end of the paper, the practical test of the proposed balancing method on a LiFePO4 battery pack (28.8 V/2.5 Ah) is verified and implemented by the experimental results.
Introduction
In recent years, lithium batteries and related techniques have been developed and are widely used. The battery industries have the experience and capability for mass production of battery packs and modules. Battery packs and modules are composed of several battery cells for high power energy storage applications. However, there is a critical issue about the imbalance of electric charge [1][2][3][4][5] within high power battery strings. The issue is caused by the characteristics [6], depth of discharge [7], and aging problem of each cell [8,9]. Based on the reasons above, when the battery string is being charged or discharged, the imbalance among the cells of the battery string becomes more serious. In addition, as the number of charging-discharging cycles of the battery string increases, the internal resistance and the capacity of each cell will vary, shortening the life cycle of battery strings.
In order to increase the efficiency and extend the lifetime of battery strings, a battery management system (BMS) [10][11][12][13][14][15][16] is a key feature which is utilized to monitor the parameters of the battery. Also, the BMS plays an important role in the management and protection of the battery. The main functions of a BMS are the monitoring, protection, and balancing parts [17,18]. The monitoring is to sense the relative key parameters from the battery packs, like voltage, current, and temperature. The protection is to avoid abnormal operating conditions, such as over-charge and over-discharge.
The Proposed Balancing Circuit
As shown in Figure 1, the proposed balancing method utilizes a bi-directional converter [28,29] to balance the voltage of each cell in a battery string. This method helps to deliver the energy from the higher-voltage battery to the lower-voltage battery directly. In other words, there is no extra energy loss during the delivering process, and this technique can also shorten the balancing time effectively. In this study, the forward converter has a good isolation feature and a simple structure for the bi-directional function. The strategy is to balance the voltage between the odd-numbered battery and the even-numbered battery which exhibit the maximum differential voltage in the battery string. The switch network shown in the blue dotted block is formed by several couples of MOSFETs in back-to-back connection, and each connection becomes a bi-directional switch set with two MOSFETs which is connected with the cell. Hence, this connection can provide a bi-directional path to deliver the energy. In addition, each of the bi-directional switch sets can prevent other currents from flowing through the cell during the balancing process.
The Operation Mode Analysis
The operation mode can be divided into two different modes. The first one is to balance the voltage from an odd-numbered battery to an even-numbered one. The other mode is to balance the voltage from an even-numbered battery to an odd-numbered one. The operation principle of these two modes will be discussed in detail below. In the following analysis, each battery denoted from VBat1 to VBat8 can be represented as a battery cell in the green dotted block in Figure 1.
The Operation Mode of Balancing Process from the Odd-Numbered Battery to the Even-Numbered Battery
The following analysis is stated for balancing from VBat1 to VBat8. The theoretical waveforms are shown in Figure 2.
Mode I-Charging Mode (t0 < t < t1)
As shown in Figure 3, the bi-directional switch sets Sa, Sodd_0, Sodd_1, Seven_7, and Seven_8 are all turned on. The others are all turned off. In this mode, the current from VBat1 charges Lma through π filter I. In the meantime, the energy from the primary winding Npa can be transferred to the secondary winding Nsa; thus, Db is turned on by the forward bias. The current iNsa begins to charge VBat8 through π filter II. The red dotted current path shows the charging condition.
Mode II-Recycling Mode (t1 < t < t2)
As shown in Figure 4, the bi-directional switch set Sa turns off, but Sodd_0, Sodd_1, Seven_7, and Seven_8 all remain turned on. The rest of the switch sets are turned off. In this interval, the energy stored in Lma and the current iLma remain continuous. Thus, iLma (shown with the green dotted current) will flow through the primary winding Npa to induce a current iNsb from Nsb. The induced current iNsb also flows through π filter I and charges back to VBat1. During this mode, the energy stored in the magnetizing inductor can be released and also recycled to the battery (VBat1) effectively. At the right side of the converter, the energy of VBat8 is provided by π filter II.
The Operation Mode of Balancing Process from the Even-Numbered Battery to the Odd-Numbered Battery
The following analysis is stated for balancing from VBat8 to VBat1, and Figure 5 shows the theoretical waveforms during the balancing process.
Mode III-Charging Mode (t2 < t < t3)
As shown in Figure 6, the bi-directional switch sets Sb, Sodd_0, Sodd_1, Seven_7, and Seven_8 are all turned on. The others are all turned off. The current from VBat8 charges Lmb through π filter II and also delivers the energy from the primary winding Npb to the secondary winding Nsb. In this transferring state, Da is turned on, and iNsb starts to charge VBat1 through π filter I. The charging path is shown with the red dotted current.
Mode IV-Recycling Mode (t3 < t < t4)
As shown in Figure 7, the bi-directional switch set Sb turns off, but Sodd_0, Sodd_1, Seven_7, and Seven_8 all remain turned on. The rest of the switch sets are turned off. In this interval, the energy stored in Lmb and the current iLmb remain continuous. Thus, iLmb (shown with the green dotted current) will flow through the primary winding Npb to induce a current iNsa from Nsa. The induced current iNsa also flows through π filter II and charges back to VBat8. During this mode, the energy stored in the magnetizing inductor can be released and also recycled to the battery (VBat8) effectively. At the left side of the converter, the energy of VBat1 is provided by π filter I. In this recycling mode, the energy is saved during this interval.
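The switching pattern of the four modes for this particular VBat1/VBat8 example can be summarized as in the short sketch below; it is a summary of the text above, not firmware from the paper.

```python
# Switch sets that are on in each operating mode of the VBat1 <-> VBat8 example.
# In every mode the four bi-directional switch sets forming the loop stay on;
# only Sa (odd -> even) or Sb (even -> odd) toggles between charging and recycling.
LOOP_SWITCHES = ("Sodd_0", "Sodd_1", "Seven_7", "Seven_8")

MODES = {
    "I (charging, VBat1 -> VBat8)":   LOOP_SWITCHES + ("Sa",),
    "II (recycling back to VBat1)":   LOOP_SWITCHES,            # Sa turned off
    "III (charging, VBat8 -> VBat1)": LOOP_SWITCHES + ("Sb",),
    "IV (recycling back to VBat8)":   LOOP_SWITCHES,            # Sb turned off
}

for mode, switches_on in MODES.items():
    print(f"Mode {mode}: on = {', '.join(switches_on)}")
```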
Design Consideration and Specification of Cell
In this part, the design consideration is discussed in detail. Table 1 lists the key experimental parameters (switching frequency, duty cycle, turns ratio, capacitance, and inductance) used in this study. In addition, Table 2 gives the specification of the LiFePO4 battery. These battery parameters help to design the charger and the related components. At first, the turns ratio (NS/NP) of the transformer has to be determined by using the nominal voltage of the cell. In order to simplify the derivation, all the switches are assumed to be ideal. Besides, since this application is operated at low voltage, the power switches and the diodes can be selected with low power ratings to decrease the cost of the converter.
Table 2. Specification of the LiFePO4 battery (model number ANR26650M1B).
The turns ratio can be derived from the voltage of the charging behavior. Referring to Figure 8, Vin is fed into π filter I, and the voltage across the primary side Npa is also Vin (assuming the filters and the switches are ideal). At the secondary side, the voltage VNsa across Nsa is induced by Npa. During the charging state from Vin to Vo, the condition (VNsa − Vo) > VD has to be satisfied to turn on the diode Db. Therefore, the charging path can be established, as shown by the red dotted current. Based on (1), the voltage across the diode has to be higher than the cut-in bias VD for turning on the diode. That is, the input voltage Vin is definitely higher than Vo. After the battery balancing process ends, Vin will approach the charging voltage of the battery and also become identical to Vo. In the steady state, Vin = Vo. Thus, Equation (1) can be rewritten as shown in (2).
In order to obtain the turns ratio from (3), the voltage Vin is 3.3 V and the forward bias voltage VD is 0.45 V, respectively. Based on these parameters, which are substituted into (4), the derived turns ratio is 1.14.
In this study, the actual turns ratio is chosen to be 1.2 in the experiment.
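Under the stated assumptions (ideal filters and switches, steady state Vin = Vo = 3.3 V, diode cut-in voltage VD = 0.45 V, and an induced secondary voltage assumed to be (Ns/Np)·Vin), the minimum turns ratio follows from requiring the secondary voltage to exceed Vo by VD, as in the short sketch below.

```python
# Minimum transformer turns ratio so that the output diode stays forward biased:
# (Ns/Np) * Vin - Vo > VD, evaluated at the steady state Vin = Vo.
def minimum_turns_ratio(v_in: float, v_diode: float) -> float:
    return (v_in + v_diode) / v_in

n_min = minimum_turns_ratio(3.3, 0.45)
print(f"minimum Ns/Np = {n_min:.2f}")   # ~1.14; rounded up to 1.2 in this design
```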
Fast Battery Balancing Control Strategy and the Algorithm
The main digital controller utilized in this proposed structure is the dsPIC33EP128GM304 from Microchip Technology. The first step of the procedure is to sense the voltage of each battery by the voltage detector circuit and to send the information to the processor through the A/D converters. After the analysis by the processor, the battery cells in the string which need to be activated for balancing will be chosen by the processor. In other words, the related switches (S1, S2, Sodd_0~Sodd_7, and Seven_1~Seven_8) around the imbalanced battery cells will be turned on or off for the balancing process. Figure 9 shows the structure of the proposed fast battery charging balancing circuit.
When the battery string is being charged, the digital processor utilizes the A/D converters and the voltage detection circuit to detect the battery voltages from VBat1 to VBat8. After detecting the actual battery voltages, the average voltage Vavg can be calculated by the processor, and the formula is shown in (5).
Besides, the start-up voltage Vbalance_start for the balancing process is shown in (6), and ΔV is the threshold voltage, which can be determined by the user's demand.
If any of the battery voltages is higher than the preset Vbalance_start, the proposed balancing mechanism will be activated. Once the battery balancing mechanism is enabled, the odd-numbered and even-numbered batteries which exhibit the maximum differential voltage will be selected. During the balancing procedure, the digital processor keeps detecting the battery voltages and refreshing the average voltage. The balancing process is finished when the highest battery voltage VH ≤ Vavg or the lowest battery voltage VL ≥ Vavg. Figure 10 shows the flow chart of the dynamic battery charging balancing strategy.
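A compact sketch of this decision flow is given below. Since Equations (5) and (6) are not reproduced in the text, Vavg is taken here as the arithmetic mean of the eight cell voltages and Vbalance_start as Vavg + ΔV; both are assumptions of the sketch, and cell indices are zero-based.

```python
# Hedged sketch of the dynamic balancing strategy (cf. Figure 10).
from typing import List, Optional, Tuple

def select_balancing_pair(v_cells: List[float], d_v: float) -> Optional[Tuple[int, int]]:
    """Return (higher, lower) cell indices to balance, or None if not needed."""
    v_avg = sum(v_cells) / len(v_cells)           # assumed form of Equation (5)
    v_balance_start = v_avg + d_v                 # assumed form of Equation (6)
    if max(v_cells) < v_balance_start:
        return None                               # no cell exceeds the start-up voltage
    odd_cells = range(0, len(v_cells), 2)         # VBat1, VBat3, ... (0-based indices)
    even_cells = range(1, len(v_cells), 2)        # VBat2, VBat4, ...
    diff, i, j = max((abs(v_cells[a] - v_cells[b]), a, b)
                     for a in odd_cells for b in even_cells)
    return (i, j) if v_cells[i] > v_cells[j] else (j, i)

def pair_balanced(v_cells: List[float], hi: int, lo: int) -> bool:
    """Balancing of the selected pair ends when VH <= Vavg or VL >= Vavg."""
    v_avg = sum(v_cells) / len(v_cells)
    return v_cells[hi] <= v_avg or v_cells[lo] >= v_avg
```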
In the balancing process, the voltage of each cell is detected and measured continuously. In order to sense the voltage of each cell precisely, and to avoid the loading effect between the cell and the input of the analog-to-digital converter (ADC), a detection circuit for the battery voltage is adopted. As shown in Figure 9, the detection of the battery voltage is composed of a differential-voltage operational amplifier (OPA) and a low-pass filter (LPF) to achieve the voltage measurement of a cell. The function of the LPF is to filter the high-frequency noise at the output of the OPA. Then, the output of the LPF connects to the ADC input of the MCU. In fact, the charging voltage of a cell, VBat, is 3.6 V, but the maximum input voltage of the ADC is 3 V; therefore, the proportion of the resistors around the OPA needs to be considered (R1 = R3, R2 = R4) for the full-scale voltage, as shown in (7). In this circuit, the LPF has no attenuation at low frequency; thus, V1 = VO. In the meanwhile, the voltage of the cell can also be measured by the voltage recorder. The detection of battery voltage circuit is shown in Figure 11.
Figure 11. The detection of battery voltage circuit (differential-voltage operational amplifier and low-pass filter, with outputs to the ADC and to the voltage recorder).
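The front-end scaling constraint can be checked with the small sketch below. Equation (7) is not reproduced in the text, so a standard difference-amplifier gain of R2/R1 (with R1 = R3 and R2 = R4) is assumed; the resistor values in the comment are hypothetical examples.

```python
# Gain budget for the cell-voltage sensing front end (a sketch, assumed
# difference-amplifier gain of R2/R1 with R1 = R3 and R2 = R4).
V_CELL_MAX = 3.6   # LiFePO4 cell charging voltage [V]
V_ADC_MAX = 3.0    # maximum ADC input voltage [V]

max_gain = V_ADC_MAX / V_CELL_MAX               # R2/R1 must not exceed this ratio
print(f"required R2/R1 <= {max_gain:.3f}")      # ~0.833 (e.g., hypothetical R1 = 12 kOhm, R2 = 10 kOhm)
```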
Experimental and Simulation Results
In order to verify the proposed battery charging balancing circuit against the theoretical derivation, Figures 12-16 show the triggering waveforms for building up the balancing loop and the related waveforms when VBat1 charges VBat8. In the opposite direction, Figures 17-21 provide the waveforms when VBat8 charges VBat1. These waveforms are measured and simulated to prove that the balancing process is feasible and implemented.
Waveforms for VBat1 Charges to VBat8
In this section, the simulation results are shown to compare with the experiments. Figure 11 shows the gate signals for turning on Sodd_0, Sodd_1, Seven_7, and Seven_8.
Experimental and Simulation Results
In order to verify the proposed battery charging balancing circuit with the theoretical derivation, Figures 12-16 show the triggering waveforms for building up the balancing loop and the related waveforms when V Bat1 charges to V Bat8 . In the opposite, Figures 17-21 provide the waveforms when V Bat8 charges to V Bat1 . These waveforms are measured and simulated to prove that the balancing process is feasible and implemented.
Waveforms for V Bat1 Charges to V Bat8
In this section, the simulation results are shown to compare with the experiments. Figure 11 shows the gate signals for turning on S odd_0 , S odd_1 , S even_7 , and S even_8 .
When the balancing loop is kept turned on, S a also turns on (V GSa : high) at the same time, and the converter operates in charging mode while V Bat1 charges V Bat8 . When S a turns off (V GSa : low), the converter enters recycling mode, in which the remaining energy stored in the magnetizing inductor from the previous stage is charged back to V Bat1 . The corresponding experiments and simulations are shown in Figures 13-16.
As shown in Figures 15 and 16, when the converter is operated in charging mode, i Npa starts to increase, charging the magnetizing inductor and delivering energy to the lower-voltage cell. Once the converter enters recycling mode, i Nsb is induced by the magnetizing inductor, so the stored energy is charged back to the higher-voltage cell through the filter. In these two figures, the direction of i Npa is opposite to that of i Nsb , which confirms that the recycling mode operates as intended.
Waveforms for V Bat8 Charges to V Bat1
The operation principle is almost the same as in the previous section; the only difference is that the charging direction is reversed (V Bat8 charges to V Bat1 ). The simulation results are compared with the experiments. Figure 17 shows the gate signals for turning on S odd_0 , S odd_1 , S even_7 , and S even_8 to build up the charging path. In addition, Figures 18-21 present the related waveforms when V Bat8 charges to V Bat1 .
Summing up the measurements and simulations above, the compared results prove that the proposed fast charging and balancing circuit is feasible.
Before the balancing process starts, each of the battery cells was discharged for the test and experiment. The open-loop voltage of each cell is listed in Table 3 from V Bat1 to V Bat8 individually. When the cells are charged with a 1C current, ΔV is set to 0.03 V during balancing intervals I to IV. Once the voltage reaches 3.5 V, ΔV is reduced from 0.03 V to 0.02 V during balancing intervals V to VII. As the battery voltage rises from 3.5 V to 3.6 V at interval V, the idea behind choosing a smaller ΔV is to make the balancing process more precise. During the experiments, the voltage and the voltage curve of each cell were measured and recorded by a voltage recorder (model: midi LOGGER GL800, manufacturer: GRAPHTEC). From Figure 22, each of the cells goes through 7 balancing intervals, from interval I to VII. The balancing time and the energy loss of the proposed converter from I to VII are summarized in Table 4. As shown in Table 4, the balancing time becomes shorter as each cell approaches the charging voltage of 3.6 V. Also, the energy loss of the converter decreases as the balancing process approaches its end. In Table 5, the measurements and experimental results are listed. The total balancing time is the sum of the time from interval I to VII. Compared with the conventional balancing method, the proposed concept shortens the balancing time effectively. Besides, after the balancing process, the maximum differential voltage among the balanced cells is 0.018 V. As mentioned above, ΔV is determined by the user's demand and the desired balancing performance. If the user requires a lower differential voltage among the cells in the battery string, ΔV has to be set lower to achieve better balancing performance; however, this takes more time for the balancing process. Table 6 gives a comparison of different values of ΔV. Conversely, a higher ΔV results in worse balancing performance even though the balancing time is shorter.
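As a small illustration of the ΔV schedule described above, the sketch below picks the balancing threshold from the measured cell voltage. The 0.03 V / 0.02 V values and the 3.5 V switch point follow the text, while the function names and structure are assumptions for illustration only.

```python
# Hedged sketch of the voltage-dependent balancing threshold (ΔV) schedule.
# Threshold values and the 3.5 V switch point follow the text; everything
# else (names, structure) is an illustrative assumption.
def delta_v_threshold(v_cell):
    """Return the balancing threshold ΔV for the current cell voltage."""
    return 0.03 if v_cell < 3.5 else 0.02

def needs_balancing(voltages):
    """The string needs balancing when the spread exceeds ΔV at the present voltage."""
    spread = max(voltages) - min(voltages)
    return spread > delta_v_threshold(max(voltages))

print(needs_balancing([3.42, 3.44, 3.47, 3.40]))  # True: spread 0.07 V > 0.03 V
```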
Conclusions
The paper proposes a fast charging balancing circuit for LiFePO 4 batteries. The main concept is to provide a fast voltage balancing strategy for each cell within a single battery string. In this study, a novel bi-directional forward converter is utilized and connected with the network of bi-directional balancing switches to form a bi-directional battery balancing circuit.
The feature of the proposed scheme is to directly balance the maximum differential voltage between the odd-numbered battery and the even-numbered battery in the battery string. In order to verify the proposed structure, a fast battery charging circuit for a string of 8 cells (3.6 V/2.5 Ah each) is implemented. In addition, a merit of this circuit is that it also avoids overcharging during the balancing process.
According to the experimental and simulation results in the previous section, a faster charging balancing circuit for a battery string is achieved. This scheme also improves the voltage precision of each cell after the balancing procedure. Future research will continue to improve the balancing algorithms for arbitrary cells in the battery string. Moreover, the proposed approach can save more time during the balancing process. At present, industrial applications, electric vehicle batteries, and higher-capacity batteries are usually composed of many cells connected in series and parallel; therefore, the characteristics and aging of each cell have to be considered with care. In the near future, a battery management system can be added to the proposed balancing circuit to obtain a high-precision and faster battery equalizer for each cell.
Conflicts of Interest:
The authors declare no conflict of interest.
|
2019-10-16T00:15:12.300Z
|
2019-10-10T00:00:00.000
|
{
"year": 2019,
"sha1": "c5b189634d7b7c59d25df6a554a9aaadebe6d7bc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/8/10/1144/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c5b189634d7b7c59d25df6a554a9aaadebe6d7bc",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
15840436
|
pes2o/s2orc
|
v3-fos-license
|
Carbon accretion in unthinned and thinned young-growth forest stands of the Alaskan perhumid coastal temperate rainforest
Background Accounting for carbon gains and losses in young-growth forests is a key part of carbon assessments. A common silvicultural practice in young forests is thinning to increase the growth rate of residual trees. However, the effect of thinning on total stand carbon stock in these stands is uncertain. In this study we used data from 284 long-term growth and yield plots to quantify the carbon stock in unthinned and thinned young-growth conifer stands in the Alaskan coastal temperate rainforest. We estimated carbon stocks and carbon accretion rates for three thinning treatments (basal area removal of 47, 60, and 73 %) and a no-thin treatment across a range of productivity classes and ages. We also accounted for the carbon content in dead trees to quantify the influence of both thinning and natural mortality in unthinned stands. Results The total tree carbon stock in naturally-regenerating unthinned young-growth forests estimated as the asymptote of the accretion curve was 484 (±26) Mg C ha−1 for live and dead trees and 398 (±20) Mg C ha−1 for live trees only. The total tree carbon stock was reduced by 16, 26, and 39 % at stand age 40 y across the increasing range of basal area removal. Modeled linear carbon accretion rates of stands 40 years after treatment were not markedly different with increasing intensity of basal area removal, from reference stand values of 4.45 Mg C ha−1 year−1 to treatment stand values of 5.01, 4.83, and 4.68 Mg C ha−1 year−1, respectively. However, the carbon stock reduction in thinned stands compared to the stock of carbon in the unthinned plots was maintained over the entire 100 year period of observation. Conclusions Thinning treatments in regenerating forest stands reduce forest carbon stocks, while carbon accretion rates recovered and were similar to unthinned stands. However, the fact that the reduction of carbon stocks in thinned stands persisted for a century indicates that the unthinned treatment option is the optimal choice for short-term carbon sequestration. Other ecologically beneficial results of thinning may override the loss of carbon due to treatment. Our model estimates can be used to calculate regional carbon losses, alleviating uncertainty in calculating the carbon cost of the treatments. Electronic supplementary material The online version of this article (doi:10.1186/s13021-015-0035-4) contains supplementary material, which is available to authorized users.
Background
Forests play a key role in the global carbon cycle, containing an estimated 861 Pg C and providing a sink of 1.1 Pg C year −1 [1]. Forests are critical sinks for atmospheric greenhouse gases [2], and carbon fluxes occur across many carbon pools in forests, including live biomass, soils, and woody debris [3,4]. The terrestrial carbon stock is generally stable over time scales of decades and can only slowly alter the total terrestrial carbon balance through gains or losses [4]. Disturbances that alter forest stands can provide dramatic departures from this characteristic pattern. An example is removal of carbon due to clearcut harvesting of forests, leading to a large loss of terrestrial carbon. The increase in biomass, or carbon accretion, as stands regenerate and grow after harvest is unknown in many forests. Thinning is a common silvicultural practice for increasing growth of individual trees and maintaining or increasing wildlife habitat. However, the influence of thinning on the carbon balance in young forests is uncertain in southeast Alaska. Carbon fluxes need to be evaluated across a range of management options to understand and estimate the short and long-term impacts of silvicultural treatments on carbon pools.
Estimates of carbon flux in young-growth stands are needed to address land management planning goals and regional, national [5] and international carbon accounting protocols [6]. Mandates to understand the potential for forests to mitigate increasing concentrations of atmospheric CO 2 require accurate accounting of forest carbon fluxes. The USDA Forest Service, for example, has prioritized understanding carbon dynamics in forests as part of an overall strategy to protect the long-term health of forests [7]. Necessary information about carbon cycling is particularly lacking in the perhumid coastal temperate rainforests (PCTR) of the northeast Pacific coastal margin [8] (Fig. 1).
Widespread commercial forest harvest has occurred across southeast Alaska for over 50 years. However, there is no estimate of the potential carbon sequestration across the ~452,000 ha [9] of young-growth forests in the region. Natural regeneration in PCTR forests is generally vigorous and leads to rapid and nearly complete occupation of space by conifer seedlings and saplings [10]. Densely-stocked stands can produce wood products similar to thinned stands [11], but the loss of light and density of overstory trees degrades the wildlife habitat [12,13]. A common management intervention to alleviate the high stand density is thinning [14]. Felling of a portion of the stand basal area across a specific or variable [15] spacing can be applied to achieve maximum individual tree growth. However, thinning also alters the carbon accretion trajectory of the stand [4]. When left on site, the carbon content of thinned trees, and any trees that die naturally, can be accounted for by estimating decomposition rates. The impact of stand thinning and subsequent loss of biomass via decomposition are key components in calculating a carbon sequestration rate for use in land management planning.
Fig. 1 Locations of the 68 Farr and 12 Taylor installations in southeast Alaska (a). Each CSDS ("Farr") installation consists of four plots: a control plot, a low-intensity thinned plot, a medium-intensity thinned plot, and a heavily-thinned plot. The Taylor installations consist of unthinned plots only and are generally in older stands. For a full description of the plots and thinning treatments see [19]. Data from the CSDS study arranged by age of stand at time of plot establishment (b). Numbers on the Y axis refer to installation number, with Farr plots <100 and Taylor plots ≥100. Productivity classes are the tertiles of the observed range for these sites as reported in [19]. Each symbol in an installation represents a measurement.
Quantifying the effects of young-growth forest management on carbon storage is challenging. Allometric equations linked to direct tree measurements can be used to estimate aboveground biomass production [16][17][18] across stand age, and this can be converted to carbon accretion. Estimation of the long-term differences between forests with varying management treatments requires remeasurement of the same plots over decades. Long-term plots provide an excellent source of information on biomass accretion over time where plots have been maintained and re-measured.
Experimental plots maintained by the USDA Forest Service Pacific Northwest Research Station [19] offer an opportunity to estimate carbon change over time with varying levels of thinning. This dataset includes 284 plots across 68 installation sites, remeasured over several decades and spanning stand ages up to 161 years. The temporal and geographic breadth of these experimental plots provides an excellent foundation for investigating carbon standing stocks and carbon accretion rates in young-growth forests of the PCTR. In addition, the plot system allows analysis of the effects of forest thinning on carbon storage through the combination of allometric equations and repeated tree measurements over decades. We designed this study to address the critical need for an improved understanding of carbon storage in young-growth forests of the PCTR and to quantify the effects of thinning on carbon gain or loss. We hypothesized that while thinning may increase carbon accretion in individual trees, across whole stands thinning will have a neutral to negative impact on the sequestration of carbon, depending on the intensity of thinning.
Methods overview
We utilized data from two long-term silvicultural datasets of young-growth forests of southeast Alaska to estimate total tree carbon stock and accretion rate. One set of plots was started in the 1920s and was not thinned ("Taylor plots", 12 of 284). The other plot system included unthinned controls and thinning treatments applied at three intensities in a randomized block design ("Farr plots", 272 of 284). Plot measurements included both live and dead trees, so estimates for both pools were calculated to account for the loss of dead-tree carbon decomposing over time in both unthinned and treated forest stands. A new allometric model for small diameter trees was developed to fill an information gap in the determination of carbon in small trees.
Results
Live and dead tree carbon pools in naturally-regenerating young-growth stands
Live-tree carbon increased in unthinned young-growth stands across the stand age gradient and reached an asymptote of 398 (±20) Mg C ha −1 based on a best fit, non-linear mixed effect model (NLME) (Fig. 2a). The estimated asymptotic maximum carbon stock in the stands increased to 484 (±26) Mg C ha −1 with the inclusion of dead-tree carbon (Table 1). Dead trees in unthinned plots typically represent suppression mortality as tree density decreases through time. However, these mean carbon stock estimates for the measured plots have a great deal of uncertainty. A prediction interval was derived by considering observed variability within- and among-plots, in addition to the parameter uncertainty around the asymptote described above. The 90 % prediction intervals for the asymptotic carbon stock ranged from 145 to 653 Mg C ha −1 for the live-tree carbon model and 161-808 Mg C ha −1 for the model including both live- and dead-tree carbon.
We plotted carbon accretion as the change in the carbon pool over time in plots with only live tree carbon and calculated a peak at age 34.7 years (±0.5, bootstrap SE; Fig. 2b). The carbon accretion peaked at 39.3 years (±0.5, bootstrap SE; Fig. 2c) for the model with both live and dead tree carbon. These carbon accretion rates varied dramatically across the chronosequence of measurements in the sampled stands (Fig. 2b, c). The high variability makes it difficult to estimate quantities with any reasonable level of precision directly from accretion data. While carbon accretion was more variable than carbon stock estimates, carbon accretion can also be estimated as the derivative of carbon stocks over time. The general shape of the data cloud suggests that accretion rates peak at 39 years and then decrease, tapering off at about 100 years. The shape of the accretion curves (Fig. 2d, e) derived directly from the fitted model for the total carbon stock (Fig. 2a) indicates that accretion peaks in young stands between 35 and 40 years and then tapers off as the stands age. The estimated weighted average carbon accretion rate based on the fitted model to total carbon [45] was 3.53 Mg C ha −1 year −1 .
Fig. 2 Dashed lines are NLME best-fit models for all tree carbon (live and dead trees) and for live tree carbon. Note that the plots with low carbon stock values are all located in the same sites, which occurred on the lowest productivity areas that were sampled. Observed carbon accretion rates across stand age for (b) all carbon (live and dead trees) and (c) live trees only. Implied carbon accretion (derivative of the NLME model) as a function of stand age for (d) all carbon (live and dead trees) and (e) live trees only.
Influence of thinning on carbon accretion in young-growth stands
There was a systematic decrease in the total stand carbon correlated with increasing intensity of thinning (Fig. 3).
The portion of the carbon stock data in untreated young-growth stands that is nearly linear (20-100 years) was used as a basis for comparison between treated stands. The estimated average carbon pool in the unthinned control plots (Farr plots only, see methods) was greater than the estimated average carbon pool in any of the three thinning treatments at 40 years (Table 2); estimated average carbon pools at a given age consistently decreased with thinning intensity of treatments from low to high (Fig. 3; Table 2). The slope of the linear model fit to the data describes the stand-scale accretion rate. This accretion rate systematically decreased with thinning when decomposition of cut trees and any trees that died naturally is included, and the total carbon stock was reduced by 16, 26, and 39 % across the low to high intensity thinning treatments at 40 years (Tables 2, 3). However, no major pattern between accretion rate and thinning intensity was noted with only live trees (Table 2; Fig. 4). We note that several plots, both control and treatment, displayed particularly low accretion rates and these plots were generally all located on one set of sites (Figs. 3, 4). The residual model error in the linear models fit was similar across treatments for live trees using all plots (Tables 2, 3). This was also the case in models for live trees, cut trees, and natural mortality using plots for which cut tree data were available (221 of 284). This residual model error describes variability in carbon stocks within a plot over time after accounting for the effects of stand age and treatment. The standard error among plots for the accretion rate increased somewhat predictably across the three treatments, suggesting that at more intensive levels of management it might be more difficult to predict accretion rates for an individual plot. Control plots were intermediate in their across-plot variability. We also note that residuals for both the live-tree and live-tree plus cut and natural dead tree models showed no trends over stand age, indicating that the linear model accurately described the underlying effect of stand age on carbon stocks, but residuals did show a somewhat increasing trend over chronological time, indicating a potential increase in variability of carbon stocks in recent years.
Simulation of stand carbon dynamics immediately after thinning
We simulated a hypothetical carbon accretion scenario under different thinning intensities, all of which occur when stands reach 20 years of age, based on our fitted statistical models (Fig. 5). The simulated carbon stock at the plot scale accumulates at a rapidly accelerating pace in all plots until the stands are subjected to a simulated thinning at age 20. This thinning leads to the rapid drop in the carbon stock of thinned stands, as we only accounted for the carbon in the remaining live trees in the stands for the simulation. Stands in all four treatments begin accumulating carbon again after the thinning treatment is applied according to the linear models. We expect that increased growth rates of individual trees lead to a more rapid rate of carbon accretion after thinning on a per-tree basis. Note, however, that while individual trees may accrete carbon at a more rapid rate after thinning due to increased growth rates, there are many fewer trees accreting carbon in a thinned plot. Overall, at the plot scale, there is an initial loss of carbon in the thinned stands and a similar accretion rate to the control stands ( Table 2). The simulated carbon stock in all thinning treatments remains lower than that of unthinned plots up to 100 years (Fig. 5).
Carbon balance in unthinned forest stands
The rate and location of terrestrial carbon sinks are critical to understanding the global carbon balance. Young-growth forests sequester carbon in biomass, but at widely varying rates and over different timeframes. The calculation of total carbon stock and estimated accretion rates across the age gradient of the naturally regenerating young-growth forests of southeast Alaska fills a critical information gap for this region. The loss of live carbon after thinning in naturally-regenerating stands must be considered in calculating carbon sequestration estimates for young-growth forests. Thinning treatments are applied to achieve many ecosystem services in addition to carbon sequestration goals; therefore, our quantitative estimates of the loss of carbon after thinning enable evaluation of the carbon cost of a range of management actions for young-growth stand improvement. Model calibration is essential for obtaining accurate carbon balance estimates across large regions [20]. Forest carbon models need to consider the entire range of stand types and ages to accurately portray the balance of carbon stock across the landscape [21]. Mature forest stands (>200 years) can accumulate carbon at an estimated 2.4 Mg C ha −1 year −1 [22]. The carbon stock in young-growth stands is particularly critical in these estimates as these stands are generally the most active zones of carbon change on the landscape due to rapid biomass accumulation and carbon storage in trees [23]. The estimated mean accretion rate of 3.53 Mg C ha −1 year −1 over 150 years in our study area confirms the strong net gain in carbon in young-growth stands in the Alaskan PCTR. This rate is higher than the 40-year mean of 2.71 Mg C ha −1 year −1 estimated in young-growth stands in the PCTR of British Columbia [24]. Frustratingly, the uncertainty in determining the response of an individual stand is high, which limits the usefulness of model predictions for site-specific estimates of carbon stock, often needed to evaluate specific management scenarios. Our models are most appropriately applied across an entire population of stands for regional and national carbon assessments. Site-specific descriptors (e.g., site productivity) that might help stratify the data and provide more accurate predictions of carbon pools will need to be applied in order to help refine our predictions of carbon accretion rates in particular locations.
Carbon balance in thinned young-growth stands
Maximizing the carbon stored in forests is a key goal of climate change mitigation programs [25]. The majority of the young-growth forest in the Alaskan PCTR result from harvest that occurred from 1960 to 1990 [9]. Thinning young-growth stands in the PCTR is a common management strategy to improve stand structure and wood production [14] and to improve wildlife forage production [13,26]. Renewable energy recommendations for the Alaskan PCTR highlight the potential for wood energy projects using this stock of young-growth forest [27]. However, the usual management scenario for these young-growth stands is thinning at 15-20 years [13] and nearly half of the 25-50 year old stands have been precommercially thinned [9,14]. Therefore, recognizing the tradeoff between thinning for stand improvement, biomass energy, and carbon sequestration in young-growth forest stands is important for making land management decisions. A key finding in our study is that thinning persistently reduces the carbon stock in young growth stands. The rate of carbon accretion in thinned stands is higher than control plots after the initial carbon loss; but, the gap created by the initial carbon loss is maintained and the total stock of carbon in thinned stands does not equal the stock of carbon in the control plots over the entire 100 year period of observation. This is consistent with the observation that the reduction in total stand carbon stock may not change the net ecosystem exchange between pre-and post-thinning [28]. The maintenance of tree growth would explain the similarity in the trajectory of carbon accretion among the treatments after the initial period of disturbance. Reduced carbon stocks due to thinning have been recognized in other forests [4,29,30], but is not often included in forest carbon accounting or management actions due to the lack of adequate stand response data. Our quantification of the reduction in carbon stock across a range of thinning treatments allows estimates of the effects of thinning on regional carbon stocks. The systematic variation in the carbon stock related to thinning intensity may offer a mitigation measure for achieving benefits for wildlife, wood quality, or understory abundance and diversity in managed stands. The enhanced growth of understory plants after thinning represents a tradeoff of energy from trees to forest floor and a reduction in overstory carbon compared to unthinned stands. Benefits of thinning young growth need to be balanced with the desire to maximize carbon storage in forests. For example, the less intensive thinning treatments maintain more carbon, but still provide a benefit for other desired conditions in a stand. As demonstrated by our comparison, the unthinned option provides the greatest carbon accretion of all of the thinning prescription options.
Limitations of analysis and information gaps
The carbon values provided in our study will be critical for estimating the carbon stock in the pool of young-growth forest in southeast Alaska, but there is still considerable uncertainty in the range of carbon accretion values among the stands in our analysis. Therefore, site-specific projects will need an improved model that is able to better relate local conditions to carbon flux values.
Fig. 5 When the stand reaches 20 years, we assume the plot was treated by thinning, and compare the carbon trajectories predicted by the four different treatment models. At 20 years, the treated plots immediately lose a large quantity of live carbon due to removal of woody material that was felled during the thinning from the live carbon pool. Ribbons cover the 50 % prediction interval, but do not include random effect variance among plots. The discontinuity in the control plots occurs because two separate models were used in this hypothetical scenario; the jump is due to the structural uncertainty in the models.
Factors that influence the variability in forest productivity among the sites or the response to thinning were included as random effects, but not specifically as predictive variables. Possible interactions with temperature [28], geology [31], soil saturation [32], nutrients [33] or other site-specific factors may play a role in site productivity. This uncertainty might be addressed by obtaining further information on the site factors that may influence the productivity of the plots such as soil, hydrology, or climate variables.
Potential alternate trajectories in the carbon accretion of thinned stands may arise that lead to different conclusions related to unthinned stands. We applied the same allometric equations to both unthinned and thinned stands in our analysis. It is possible that tree growth forms differ by thinning treatment and so biomass allocation would change in thinned stands. We are not aware of any existing allometric models for thinned stands of the PCTR. Therefore, we rely on the literature from other regions to support our conclusions and highlight that thinning has been found to primarily impact the biomass of the bole [34] and crown [35] of the thinned trees. Thinned stands can shift biomass accumulation from branch to leaf, but measured changes in bole biomass have been demonstrated to be small [36] unless very heavily thinned [37]. These observations provide some confidence that the total biomass calculated by our approach will not substantially change, but may be redistributed within the tree after thinning.
The residual trees left after thinning grow at an accelerated rate, but these trees are generally left in a condition where they do not maximize the growing space for many years. Thinning goals such as increased individual tree growth and allocation of energy to the forest floor for plant diversity lead to lower overstory biomass accumulation in thinned plots. While the growth rate for individual trees is greater in these plots, the amount of biomass accumulation that would be required by the individual residual trees to match the loss in biomass of similar unthinned stands would be physiologically difficult to attain. The difference is illustrated in our evaluation of the stands at 40 years in Tables 2 and 3. There could be cases where a light thinning leaves a higher density than other thinning treatments, in which case, the thinned stand may accumulate biomass similar to unthinned stands due to the additional growth of the residual trees. However, this scenario is unlikely to be applied under most operational applications.
Conclusions
Knowledge of the stock and rate of carbon accretion greatly enhances the understanding of carbon dynamics in the coastal forests of Alaska. The loss of carbon due to thinning can be used in the evaluation of management scenarios that address young-growth stand improvement. Regional carbon budgets will also be improved with estimates that include the carbon pool in younggrowth stands of the PCTR.
Source of data
This study used data from the Cooperative Stand Density Study (CSDS; Fig. 1), comprised of two long-term silvicultural field studies, previously compiled and published [19,38,39], and an earlier study implemented by Ray Taylor ("Taylor plots"; Fig. 1). Most data (272 of 284 plots) were from a study of thinning treatments on even-aged young-growth (<100 years) stands begun in 1974 and with remeasurements continuing until 2003 ("Farr plots"). The remaining 12 plots ("Taylor plots") were located in older even-aged stands initiated by windthrow or early timber harvest in the late 19th century. The original intent of the studies was to measure sites that represented commercially harvested forests. Both the harvested landscapes and the plots in this study are weighted towards higher productivity classes. The Taylor plots were first measured in the late 1920s, with remeasurement occurring periodically through 2003. The Farr plots were established to examine growth and yield and how regenerating forest stands were impacted by light (mean 47.7 % BA removal), medium (mean 60.9 % BA removal), and heavy (mean 73.5 % BA removal) thinning at varying stand ages across varying productivity classes (Fig. 1). A complete description of thinning prescriptions is available in [19]. Most of these stands initiated following clear-cut harvest, with a smaller number of the older stands initiated by windthrow. All plots were dominated by western hemlock (Tsuga heterophylla (Raf.) Sarg) and Sitka spruce (Picea sitchensis (Bong.) Carr), with small amounts of western redcedar (Thuja plicata) and red alder (Alnus rubra). Stand age at thinning treatment ranged from 10 to 93 years (Fig. 2b). In general, the four treatments (control, light, medium, and heavy thinning) were applied in a randomized block design across 62 installations. Plot age, productivity class, and remeasurement dates are shown in Fig. 1b.
Estimating biomass of live trees
Tree species and diameter at breast height (DBH) were recorded for each tree in the original study and at each remeasurement interval (roughly 2-5 years). A subset of trees (7308 of the 27562) was measured for height during each remeasurement using a clinometer and tape or laser. Tree heights were estimated from diameter and height relationships for the remaining trees (Additional file 1: Appendix A). DBH and height were used to estimate carbon using allometric equations of the form
B = b 0 + b 1 · d² · h,
where d is the diameter at breast height (DBH, in meters), h is the height above breast height (m), and B is the dry biomass (kg) for all the aboveground and belowground components of the tree [17]. The constant b 0 is the biomass of a tree at breast height and b 1 is related to the tree's density. The constants b 0 and b 1 are species-specific. We separately accounted for red alder (Alnus rubra Bong.), Shore pine (Pinus contorta var. contorta Douglas ex. Loudon), western redcedar (Thuja plicata Donn ex D. Don), Sitka spruce (Picea sitchensis) and western hemlock (Tsuga heterophylla). Calculations for any other species were done with the western hemlock equations from [17]. Note that Sitka spruce and western hemlock account for more than 98 % of all tree measurements.
The equations developed by Standish et al. [19] had a minimum tree diameter of 3.1 to 5.3 cm, and due to the large intercept terms, did not accurately estimate the biomass of small trees. The presence of many small diameter trees in our database required the development of new equations. We developed allometric biomass equations for small trees by sampling 60 small diameter Sitka spruce and western hemlock and calculating the total biomass based on whole tree harvest and weighing (Additional file 2: Appendix B). These empirical biomass relationships for small diameter trees were based on Sitka spruce and western hemlock trees (<7.5 cm dbh) sampled in three locations arrayed across the geographic region of the database (Additional file 2: Appendix B). The dbh threshold for using our empirical biomass estimates for small trees versus the constants from Standish et al. [19], suitable for larger trees, was defined by the intersection of our local parameterization curve and the Standish parameterization under the assumed height-diameter relationship (Additional file 2: Appendix B). Because the height-diameter relationship and allometric parameterizations were species-specific, the diameter threshold that determined which biomass equation to apply was also unique to each species.
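A minimal sketch of how such per-tree biomass estimates could be computed is given below. The coefficient values, the small-tree threshold, and the function names are placeholders for illustration only, not the species-specific parameterizations used in the study.

```python
# Hedged sketch of per-tree dry biomass from DBH and height using an
# allometric form B = b0 + b1 * d^2 * h for larger trees and a separate
# small-tree parameterization below a diameter threshold.
# All numeric values here are illustrative placeholders.
LARGE_TREE_COEFFS = {"hemlock": (5.0, 280.0), "spruce": (4.0, 260.0)}   # (b0, b1), assumed
SMALL_TREE_COEFFS = {"hemlock": (0.1, 150.0), "spruce": (0.1, 140.0)}   # assumed
DBH_THRESHOLD_M = 0.075   # assumed switch point (m), roughly the 7.5 cm sampling limit

def tree_biomass_kg(species, dbh_m, height_above_bh_m):
    """Dry biomass (kg) of one tree from diameter (m) and height above breast height (m)."""
    coeffs = LARGE_TREE_COEFFS if dbh_m >= DBH_THRESHOLD_M else SMALL_TREE_COEFFS
    b0, b1 = coeffs.get(species, coeffs["hemlock"])   # other species default to hemlock
    return b0 + b1 * dbh_m ** 2 * height_above_bh_m

print(round(tree_biomass_kg("spruce", 0.30, 22.0), 1))  # 4.0 + 260*0.09*22 ≈ 518.8 kg
```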
Estimating biomass of dead trees
Dead trees, both those cut during thinning and left on site and those that died from natural mortality, are often ignored in estimates of forest carbon pools and fluxes. In our analysis all cut trees were considered to be left on site to decompose. Cut trees were recorded in 164 of 215 treatment plots. Most plots missing cut tree data were reported in [19] as lacking pre-thinning data. The exceptions are the 16 treatment plots of installation 62 ("Staney Creek"), for which no explanation of the missing cut tree data is given. In all cases, analysis that considered the effect of management on dead trees was based on the 164 plots for which cut tree data were available.
We estimated carbon content of dead trees using the following deterministic relationship previously parameterized for the region in a study of the decomposition rate of thinning slash [40]: where B 0 is the estimated biomass at the time of death in kilograms, and t is time since tree death in years. This equation was used for both trees that were cut at the beginning of the study during initial thinning and left on site as well as for trees that died of natural causes, typically from suppression, at some point during the study's duration. For the latter case, we assumed the tree died and began decomposition at the midpoint between the date on which the last live measurement was taken and the date on which it was marked as dead.
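The decay relationship itself is not reproduced in this extraction. As a hedged illustration only, the sketch below uses a single-exponential decay of the form B(t) = B0·exp(−k·t), with a placeholder decay constant, to show how dead-tree biomass would be discounted over time since death.

```python
# Hedged sketch of dead-tree biomass loss over time since death.
# The exponential form and the decay constant k are illustrative assumptions;
# the study used a regional parameterization from its reference [40].
import math

K_DECAY_PER_YEAR = 0.03   # assumed decay constant (1/yr), placeholder only

def dead_tree_biomass_kg(b0_kg, years_since_death, k=K_DECAY_PER_YEAR):
    """Remaining dry biomass of a dead tree t years after death."""
    return b0_kg * math.exp(-k * years_since_death)

# A tree assumed to die midway between two remeasurements, as in the study:
last_live, first_dead = 1994.0, 1999.0
death_year = (last_live + first_dead) / 2.0
print(round(dead_tree_biomass_kg(500.0, 2003.0 - death_year), 1))  # biomass remaining in 2003
```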
Estimating carbon at the plot level from individual tree biomass
We assumed that carbon made up 48 % of the dry biomass [41] of an individual tree for both live and dead trees and that the root to aboveground biomass ratio was 0.2 [17]. Carbon estimates over all trees within a plot were aggregated into a single estimate of megagrams of carbon per hectare.
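A short sketch of the plot-level aggregation described here, using the stated 48 % carbon fraction and 0.2 root-to-aboveground ratio; the example tree list and plot area are invented for illustration.

```python
# Hedged sketch of aggregating per-tree biomass to plot-level carbon (Mg C/ha).
# The 0.48 carbon fraction and 0.2 root:aboveground ratio follow the text;
# the example tree list and plot area are illustrative assumptions.
CARBON_FRACTION = 0.48
ROOT_TO_ABOVEGROUND = 0.2

def plot_carbon_mg_per_ha(aboveground_biomass_kg, plot_area_ha):
    """Total tree carbon density for a plot, including an assumed root component."""
    total_biomass_kg = sum(b * (1.0 + ROOT_TO_ABOVEGROUND) for b in aboveground_biomass_kg)
    carbon_kg = total_biomass_kg * CARBON_FRACTION
    return carbon_kg / 1000.0 / plot_area_ha   # kg -> Mg, then per hectare

print(round(plot_carbon_mg_per_ha([520.0, 310.0, 95.0, 12.0], plot_area_ha=0.1), 2))  # ≈ 5.4
```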
Ingrowth
Due to irregular inclusion of ingrowth measurements, our analysis of carbon estimates did not account for biomass additions due to ingrowth of new trees. We evaluated the potential impact of excluding ingrowth in our carbon estimates for plots with available ingrowth measurements. In 95 % of the measurements, the contribution of ingrowth was <5 % of total plot carbon. However, the error from excluding ingrowth likely increases with stand age as these forests begin to reach the understory re-initiation phase [42].
How does carbon accretion change with stand age in naturally-regenerating forests?
To understand basic underlying carbon dynamics of young growth stands in the PCTR, we first evaluated naturally-regenerating plots. By combining data from the Farr control plots and the Taylor plots, none of which were thinned, we had a very long chronosequence of naturally-regenerating plots (Fig. 1b), measured between 1926 and 2000 that were 10 to 170 years of age. We fit an asymptotic nonlinear equation to relate carbon content to stand age [43]: where TC is total carbon in a stand (Mg ha −1 ) and stand age (age) is measured in years. We used non-linear mixed effects models to account for correlation among repeated measures within plots, thereby allowing the stand index to implicitly enter the model as a random feature of each plot. The random effect was placed on the asymptotic amount of carbon in the plot, consistent with the idea that the random effect reflects differences in site productivity index. Models were fit using the nlme package in R [44].
The model was first fit using estimates of carbon from live trees only. We then fit the model again to estimate carbon based on both live and dead trees. We estimated the weighted average rate of carbon accretion using the equation of [45], which weights the instantaneous rate of accretion so that the steeper portion of the curve is most influential when accounting for overall carbon. Estimates of parameter uncertainty were derived using parametric bootstrapping.
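The study fit this model as a non-linear mixed-effects model in R (nlme) with a random asymptote per plot. As a simplified, hedged illustration, the Python sketch below fits only a fixed-effects asymptotic curve and assumes a monomolecular form, which is an assumption, not necessarily the equation used in the paper; the example data are invented.

```python
# Hedged sketch: fitting an asymptotic carbon-vs-age curve and deriving accretion.
# Assumed form: TC = A * (1 - exp(-k * age)); no plot-level random effects here.
import numpy as np
from scipy.optimize import curve_fit

def asymptotic_tc(age, A, k):
    return A * (1.0 - np.exp(-k * age))

# Invented example data (stand age in years, total carbon in Mg C/ha).
age = np.array([15, 25, 40, 60, 80, 110, 150], dtype=float)
tc = np.array([60, 140, 250, 330, 380, 420, 440], dtype=float)

(A_hat, k_hat), _ = curve_fit(asymptotic_tc, age, tc, p0=(450.0, 0.02))
accretion_rate = A_hat * k_hat * np.exp(-k_hat * age)   # derivative dTC/d(age)
print(round(A_hat, 1), round(k_hat, 3))
```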
How does carbon accretion change with thinning?
We examined the impact of the three thinning treatments on carbon accretion using the 272 Farr plots. We did not include data from the Taylor plots in this analysis as there were no equivalent examples of older thinned plots. Carbon dynamics in the first 10 years after thinning were nonlinear due to the rapidly decelerating pace of decomposition of cut trees. These early data describe a different ecological process than data from >10 years post-thinning and were therefore excluded from our model. We excluded the first 10 years of measurements from control plots in the same blocks to balance the design. Within this age range of approximately 20-100 year-old stands, the carbon stock increased linearly among all four treatments. Therefore, we fit a linear mixed effects model to this data set. A random effect was placed on both the intercept and the slope, which was supported by a likelihood ratio test, P < 0.001. These slopes describe the estimated average carbon accretion rate for stands within each treatment.
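The study fit this model in R; as a hedged, simplified illustration of the same structure (treatment-specific slopes over stand age, with a random intercept and slope per plot), a Python sketch using statsmodels is shown below. The formula, column names, and data frame are assumptions for illustration only.

```python
# Hedged sketch of a linear mixed-effects model with random intercept and slope
# per plot, and treatment-specific fixed effects on the age slope.
# Column names and the data frame are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to have columns: carbon (Mg C/ha), age (years),
# treatment (control/light/medium/heavy), plot (plot identifier).
def fit_accretion_model(df: pd.DataFrame):
    model = smf.mixedlm(
        "carbon ~ age * treatment",   # treatment-specific intercepts and age slopes
        data=df,
        groups=df["plot"],
        re_formula="~age",            # random intercept and random slope on age
    )
    return model.fit()

# result = fit_accretion_model(df)
# print(result.summary())
```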
|
2018-04-03T02:26:41.076Z
|
2015-10-20T00:00:00.000
|
{
"year": 2015,
"sha1": "ee49e76430ec0aa9cf2eb9ff0db3e515ab12f50f",
"oa_license": "CCBY",
"oa_url": "https://cbmjournal.biomedcentral.com/track/pdf/10.1186/s13021-015-0035-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee49e76430ec0aa9cf2eb9ff0db3e515ab12f50f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
]
}
|
203009950
|
pes2o/s2orc
|
v3-fos-license
|
Structural and electrical properties of ceramic Li-ion conductors based on Li1.3Al0.3Ti1.7(PO4)3-LiF
The work presents the investigations of Li1.3Al0.3Ti1.7(PO4)3-xLiF Li-ion conducting ceramics with 0<x<0.3 by means of X-ray diffractometry (XRD), 7Li, 19F, 27Al and 31P Magic Angle Spinning Nuclear Magnetic Resonance (MAS NMR) spectroscopy, thermogravimetry (TG), scanning electron microscopy (SEM), impedance spectroscopy (IS) and density method. It has been shown that the total ionic conductivity of both as-prepared and ceramic Li1.3Al0.3Ti1.7(PO4)3 is low due to a grain boundary phase exhibiting high electrical resistance. This phase consists mainly of berlinite crystalline phase as well as some amorphous phase containing Al3+ ions. The electrically resistant phases of the grain boundary decompose during sintering with LiF additive. The processes leading to microstructure changes and their effect on the ionic properties of the materials are discussed in the frame of the brick layer model (BLM). The highest total ionic conductivity at room temperature was measured for LATP-0.1LiF ceramic sintered at 800 °C and was equal to σtot = 1.1 × 10−4 S cm−1.
The apparent density of the composites was determined using the Archimedes method with isobutanol as an immersion liquid. We estimated the accuracy of the used method as ca. 1%. The microstructure was investigated by means of SEM employing a Raith eLINE plus. For the SEM imaging, always freshly fractured pellets were used.
For electric measurements, both bases of the as-formed pellets were first polished and then covered with graphite electrodes. Impedance investigations were carried out employing a Solartron 1260 frequency analyzer in a frequency range of 10 −1 -10 7 Hz. Impedance data were collected in the temperature range from 30 to 100°C, during both heating and cooling runs.
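As a hedged illustration of how a total conductivity value such as the one quoted in the abstract can be obtained from such measurements, the sketch below converts a fitted total resistance and the pellet geometry into σtot; the numerical values are placeholders, not measured data.

```python
# Hedged sketch: total ionic conductivity from impedance data,
# sigma_tot = thickness / (R_total * area); all numbers are placeholders.
import math

def total_conductivity_s_per_cm(r_total_ohm, thickness_cm, diameter_cm):
    """Total conductivity of a pellet from its fitted total resistance and geometry."""
    area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
    return thickness_cm / (r_total_ohm * area_cm2)

# Example with assumed pellet geometry and an assumed fitted resistance:
print(f"{total_conductivity_s_per_cm(r_total_ohm=1.5e3, thickness_cm=0.12, diameter_cm=1.0):.2e}")
```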
MAS NMR spectra were acquired on a 400 MHz Bruker Avance II NMR spectrometer equipped with a 4 mm HXY probe used in double resonance mode. The NMR spectra were acquired at room temperature (RT) at the MAS frequency of 10 kHz. Single-pulse experiments were performed with pulse lengths of 1.0 µs for 7 Li, 1.0 µs for 27 Al and 2.5 µs for 31 P. Radiofrequency (rf) nutation frequencies were equal to 115 kHz for 7 Li, 60 kHz for 27 Al and 50 kHz for 31 P. The 7 Li, 27 Al and 31 P NMR spectra result from the averaging of 512, 128, 8 transients using relaxation delays of 0.3, 1.0 or 20 s. The 19 F NMR spectra were acquired at room temperature and a MAS frequency of 30 kHz using single-pulse experiments with a pulse length of 2.8 μs and an rf nutation frequency of 89 kHz. The 19 F NMR spectrum resulted from averaging 144 transients with a relaxation delay of 10 s. The 7 Li, 27 Al and 31 P chemical shifts were referenced to 1 mol.L −1 LiCl, 1 mol.L −1 AlCl3 and 85 wt% H3PO4 aqueous solutions. The 19 F chemical shifts were referenced to CFCl3 using the resonance of CaF2 (−108.6 ppm) as secondary reference. The NMR spectra were simulated using dmfit software [32]. The 31 P and 27 Al NMR spectra were simulated assuming Gaussian lineshapes and considering only the isotropic shifts, whereas the 7 Li NMR spectra were simulated assuming Lorentzian lineshapes for all bands and considering isotropic chemical shift and quadrupolar interaction.
Fig. 1 presents the X-ray diffraction patterns for the as-prepared, polycrystalline LATP as well as the LATP-0.3LiF composite before and after sintering at 900°C. The position and relative intensity of the main XRD reflections for the as-prepared LATP powder correspond to the NASICON-type compounds with R-3c symmetry group. Besides them, some additional weak diffraction lines located at 2θ angles 20.6° and 26.3° can be observed. They were ascribed to the berlinite aluminophosphate phase (denoted AlPO4). In the case of the non-sintered composite, the XRD pattern is similar, the only additional reflections being weak peaks at 38.6° and 45.0°, assigned to the LiF additive. After sintering at 900°C, some new weak reflections are observed at the following 2θ angles: 22.2, 26.7, 27.7, 31.5 and 39.3°. They were assigned to LiTiPO5 and Li4P2O7 phases. Furthermore, the diffraction peaks attributed to the LiF phase disappeared. To determine the temperatures at which the above phase transformations occur, HTXRD was performed. During annealing, above 300°C, the intensity of the LiF diffraction peaks started to decrease with increasing temperature and finally at about 500°C the peaks completely vanished. The next changes were observed when the temperature reached 700°C. The diffraction peaks related to the AlPO4 phase faded out and simultaneously those attributed to LiTiPO5 and Li4P2O7 appeared. The results suggested that at high temperatures the AlPO4 phase reacted with the LATP and resulted in the formation of new compounds, including LiTiPO5 and Li4P2O7 crystalline phases.
Thermal gravimetric analysis (TGA)
The thermal gravimetric (TG) results of the studied composites are presented in Fig. 2.
Between RT and about 500°C, for all the investigated materials with different LiF molar ratios, mass loss occurs, whereas above 500°C there is no significant mass loss. The mass loss from RT to about 200°C stems from the evaporation of the moisture and residual ethanol adsorbed on the surface of the grains. At higher temperatures, the decomposition of LiF and the associated release of the fluorinated compound produces additional mass loss.
Figure 1. XRD patterns of the as-prepared LATP as well as the LATP-0.3LiF composite before and after sintering at 900°C.
SEM and density
Figure 3A shows the SEM image of the ceramic LATP material, which is formed of two kinds of grains: small grains with a diameter of ca. 1.5 μm as well as bigger ones with a size of a few micrometers. In addition, some voids, microcracks and grain boundaries are visible. The microstructure of the composite containing 0.2 mol of LiF and sintered at 700°C is different (Fig. 3B). First of all, this sample is made mainly of grains with a size of ca. 1.5 μm. Additionally, the concentration of pores is higher than in the pristine material and, above all, the LATP grains seem to adhere better to each other. When the composite is sintered at 800°C, the grains become bigger and more densely packed (Fig. 3C). No large voids are observed.
After sintering at 900°C (Fig. 3D), the grain size exceeds 5 μm. The grains are well matched to each other; however, some microcracks are observed. In summary, the SEM investigations show that LiF acts as a ceramic densification agent. The LATP grains in the ceramic composite become bigger and more densely packed as the sintering temperature increases.
The results of the density measurements for the LATP and the studied composites are listed in Table 1. The apparent densities of the composites vary in the narrow range of 2.71-2.79 g•cm -3 . They are slightly larger than those of the pristine material sintered at the same temperature and increase slightly with the sintering temperature. The obtained data show a negligible influence of LiF on the density values. The relative density values were calculated with respect to the theoretical density of LATP (2.946 g•cm -3 ) [11,21].
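As a minimal worked illustration of how the relative densities in Table 1 can be obtained from the apparent densities quoted above (the apparent-density value used in the example below is only illustrative; 2.946 g·cm−3 is the theoretical LATP density cited in the text):

```python
# Relative density of a sintered pellet with respect to the
# theoretical (crystallographic) density of LATP.
THEORETICAL_DENSITY = 2.946  # g/cm^3, value quoted in the text

def relative_density(apparent_density_g_cm3: float) -> float:
    """Return the relative density in percent."""
    return 100.0 * apparent_density_g_cm3 / THEORETICAL_DENSITY

# Example: an apparent density of 2.79 g/cm^3 (upper end of the measured
# 2.71-2.79 g/cm^3 range) corresponds to roughly 94.7 % relative density.
print(f"{relative_density(2.79):.1f} %")
```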
MAS NMR
The MAS NMR investigations were focused on the as-prepared LATP powder, the LATP ceramic sintered at 900°C, the LATP-0.3LiF composite non-sintered and sintered at 800°C.
19 F MAS NMR
The 19 F MAS NMR spectrum (Fig. 4) of the non-sintered LATP-0.3LiF sample is dominated by a signal with a centerband at −204 ppm assigned to LiF [33]. An additional weak signal with a centerband at −165 ppm is also observed. This peak may result from the formation of Al-F bonds in LATP-0.3LiF [34]. Conversely after sintering, no 19 F NMR signal is detected. These results are consistent with the HTXRD and TG data and indicate that LiF decomposes during sintering with an associated release of fluorine atoms from the material.
27 Al MAS NMR
The 27 Al MAS NMR spectra of the as-prepared LATP show, in addition to the resonance of the NASICON phase, lines at 13 and 40 ppm. These resonances were assigned to some amorphous or strongly disordered phase containing aluminum, which was not detected by XRD [12,23]. The MAS NMR spectrum of the LATP after sintering at 900°C shows the same lines. However, the sintering modifies their relative integrated intensities (see Table S1). The sintering enhances the relative intensity of the NASICON signal, whereas the relative intensity of the signal resonating at 13 ppm is reduced.
Conversely, the relative integrated intensity of the line at 40 ppm remains practically unchanged. An additional weak peak at 31 ppm is observed; it was assigned to the berlinite phase. The spectrum of the LATP-0.3LiF composite sintered at 800°C is markedly different: only the peak of the NASICON structure is visible. This observation clearly indicates that the annealed LATP-0.3LiF is practically free from foreign phases containing Al 3+ ions.
In summary, the 27 Al MAS NMR investigations indicate that the as-prepared material contains, besides the LATP and berlinite phases observed by XRD, some amorphous or highly disordered phase or phases containing aluminum ions, which were not detected by XRD. Annealing at high temperature promotes chemical processes causing the destruction of these impurity phases, accompanied by the transfer of aluminum ions into the crystal lattice of LATP. However, sintering without LiF assistance does not fully destroy the amorphous phase and the berlinite. The total decomposition of all foreign phases containing Al 3+ is achieved only after the sintering of LATP with LiF.

31 P MAS NMR

For the LATP-0.3LiF composite sintered at 800°C, we also noticed a significant decrease of the relative integrated intensity of the P(OTi)4 peak, accompanied by an increase of the P(OTi)1(OAl)3 signal and the appearance of a P(OAl)4 peak. This conversion is more efficient than for the sintering of LATP alone.
Furthermore, after the sintering, additional peaks at −3.6, −5.9, −9.8 and −23.6 ppm are observed (Fig. 6). The narrow peaks at −3.6, −5.9 and −9.8 ppm are ascribed to 31 P nuclei in some crystalline phosphates, including the LiTiPO5 and/or Li4P2O7 phases, which were observed by XRD [36]. The broad peak at −23.6 ppm is ascribed to P atoms linked to three titanium atoms via oxygen bridges in amorphous titanium phosphate (TiPO4) [37]. These phases probably result from the decomposition of the aluminophosphate phases. The released aluminum ions diffuse into the bulk of the grains, where they substitute titanium ions in the crystal structure.
In turn, the released titanium ions diffuse outside the grain and react with PO4 groups and lithium ions from decomposed LiF. The resulting phases deposit on the grain surface.
In summary, in the as-prepared LATP, only P(OTi)4-x(OAl)x (x = 0, 1 and 2) coordinations are observed and the P(OTi)4 signal dominates. After sintering of either LATP or LATP-LiF, the relative amount of the P(OTi)4 sites decreases, while P(OTi)1(OAl)3 sites are formed. Additionally, in the case of the sintered LATP-LiF material, P(OAl)4 coordination is detected. The relative amounts of the P(OTi)4-x(OAl)x (x = 1 and 2) sites remain unchanged during the sintering (see Table S2). As a result, the amount of Al 3+ ions in the sintered LATP is higher than in the as-prepared one. Furthermore, the concentration of Al 3+ ions in the LATP phase is the highest after sintering in the presence of LiF. This result is consistent with the 27 Al MAS NMR observations, which indicate the decomposition of the foreign phases containing aluminum ions and the diffusion of the released Al 3+ ions into the crystal structure of LATP, where they replace Ti 4+ ions in the crystal lattice.

7 Li MAS NMR

The 7 Li NMR spectra reveal two lithium sites; their isotropic chemical shifts and quadrupolar coupling constants (CQ) are listed in Table S3. After sintering, the CQ value of both 7 Li sites decreases. Such a decrease may be related to the replacement of Ti 4+ ions by Al 3+ ones during sintering. Furthermore, sintering broadens the 7 Li NMR signals of both sites. This broadening is consistent with an accelerated 7 Li relaxation after the sintering.
Furthermore, the integrated intensity of the Li3 site increases, which indicates a preferential occupation of the Li3 site. The sintering of the LATP-0.3LiF composite produces a similar modification of the 7 Li spectrum. Hence, the presence of the LiF additive does not affect the mobility of Li + ions or the occupation rates of the Li sites in LATP.
Impedance spectroscopy
The results of the impedance spectroscopy investigations for LATP sintered at 900°C and LATP-0.2LiF annealed at 800°C are shown in Fig. 8 in the Nyquist plot representation.
The data have been collected for the materials kept at 30°C. The geometries of the samples used in this investigation were similar. The selected impedance plots exhibit shapes typical of the studied materials. The Nyquist plot for the ceramic LATP forms a single, almost regular, large semicircle followed by a spur. Besides that, another, small semicircle can be observed at high frequencies. The plot for LATP-0.2LiF is representative of the sintered LATP-LiF composite family. It also consists of two semicircles; however, the low-frequency one is much smaller than that of the LATP ceramic.
The electrical properties of the LATP and the derived composites can be modeled via an electrical equivalent circuit approach. An impedance plot that consists of two separate semicircles is typical of materials in which ion transport occurs in two different media. In the case of the studied materials, the ions move through the bulk (grains) and through the intergrain (grain boundary) regions of the sample, respectively. The determined value of the total ionic conductivity of the ceramic LATP sintered at 900°C is ca. 4.7 × 10 −5 S•cm −1 at 30°C. Note that such a value is too low for LIB applications. As seen in Table 2, for the materials containing 0.1 or 0.2 mol of LiF and sintered at 800°C, the measured total ionic conductivity is enhanced up to 1.1 × 10 −4 S•cm −1 . Temperature-dependent impedance spectroscopy shows that, for all materials under study, the total conductivity exhibits an Arrhenius dependence (Fig. 9). The estimated values of the activation energy of the total conductivity for the various sintering temperatures and LiF contents are reported in Table 2.
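A minimal sketch of how an activation energy can be extracted from temperature-dependent conductivity data, assuming the simple Arrhenius form σ = σ0·exp(−Ea/kBT); the temperature and conductivity values below are illustrative placeholders, not values from Table 2:

```python
import numpy as np

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

# Illustrative (temperature in K, total conductivity in S/cm) data points.
T = np.array([303.0, 323.0, 343.0, 363.0])
sigma = np.array([1.1e-4, 2.5e-4, 5.2e-4, 1.0e-3])

# ln(sigma) = ln(sigma0) - Ea / (kB * T)  ->  linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
activation_energy_eV = -slope * KB_EV

print(f"Ea ~ {activation_energy_eV:.2f} eV, sigma0 ~ {np.exp(intercept):.2e} S/cm")
```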
The conductivity of the grain boundaries, σgb, also varies with the sintering temperature and is likewise highest for the materials sintered at 800°C.

Figure 9 Arrhenius plots of the total electric conductivity of the LATP-LiF composites sintered at 900°C.

In the brick-layer model, α = d/D, and σgr and σgb denote the true conductivities of the grain and grain-boundary phases, respectively, rather than the apparent ones determined by means of impedance spectroscopy. When σgr >> σgb, i.e. the conductivity of the grains is much higher than that of the grain boundaries, formula (1) can be simplified to formula (2): the total ion conductivity is then proportional to σgb, and the proportionality factor (in brackets in Eq. (2)) depends only on the geometry of the cubes modelling the microstructure. Such a situation is encountered for the as-prepared and sintered LATP as well as for the non-sintered LATP-LiF composites, for which σgr > 10σgb.
These results are consistent with XRD and NMR data. These techniques demonstrated that the as-prepared LATP and non-sintered composite contain not only LATP but also poorly ion conductive phases, such as berlinite and some amorphous phase containing Al 3+ ions.
These foreign phases embedded in the LATP grains form a highly resistant medium, which is detected and identified by the means of impedance spectroscopy method as a grain boundary.
Sintering without LiF addition causes densification of the material and decomposition of the amorphous phase, but leaves the AlPO4 phase unaltered. Therefore, the total ionic conductivity of the sintered LATP ceramic remains low, even though it is much higher than that of the as-prepared, non-sintered LATP pellets.
Hence, the BLM predicts that the total electric conductivity of the ceramic composed of large grains is proportional to the conductivity of a grain. The proportionality factor depends on both the geometry of the microstructure (parameter α) and conductivity ratio σgr/σgb. It increases when the real conductivity of grain boundary increases. If ασgr/σgb << 1, i.e. the grains are large and the real conductivities of grains and grain boundary are comparable, the total conductivity of the ceramic approaches the conductivity of the grains.
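Because Eqs. (1)-(3) are not reproduced in this text, the sketch below uses one common brick-layer expression, σt = σgr / (1 + α·σgr/σgb) with α = d/D, which is consistent with the two limiting cases discussed above (σt ≈ σgb/α when σgr >> σgb, and σt ≈ σgr when α·σgr/σgb << 1). This particular form is our assumption for illustration and is not necessarily the exact equation used by the authors.

```python
def total_conductivity(sigma_gr: float, sigma_gb: float, alpha: float) -> float:
    """Brick-layer estimate of the total conductivity of a ceramic.

    sigma_gr, sigma_gb: true conductivities of grains and grain boundaries (S/cm)
    alpha: geometric ratio d/D (assumed: boundary thickness over grain size)
    """
    return sigma_gr / (1.0 + alpha * sigma_gr / sigma_gb)

# Resistive grain boundaries (sigma_gr >> sigma_gb): total ~ sigma_gb / alpha,
# i.e. the boundaries limit the transport.
print(total_conductivity(1e-3, 1e-6, 0.01))   # ~1e-4 S/cm

# Large grains / comparable conductivities (alpha * sigma_gr / sigma_gb << 1):
# the total conductivity approaches the grain conductivity.
print(total_conductivity(1e-3, 5e-4, 0.001))  # ~1e-3 S/cm
```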
BLM and Eqs. 1-3 emphasize the importance of the microstructure and the real conductivity of the grain boundary in order to obtain ceramics with high total ion conductivity. Recently, A. Vyalikh et al. [44] reported the significant impact of those factors in enhancement of total ionic conductivity of LAGPY material. The brick-layer model also shows the coupling between these two factors. Notably the negative effect of highly resistant grain boundary on the total ion conductivity can be reduced by an enlargement of the grains.
This approach is exploited in the present work. However, XRD and NMR characterization have also shown that the sintering affected the grains themselves. The observations indicate that the preparation method of the LATP powder led to an incomplete synthesis of the final product. The resultant LATP powder still contained residues of the unreacted starting materials as well as some amounts of intermediates and by-products. Therefore, the concentration of aluminum ions in the LATP crystal lattice was lower than expected. The sintering allowed the formation of grains with a chemical composition closer to the assumed one. Various coupled processes occur during the sintering: the generation of free aluminum ions on the grain surface after the decomposition of the inter-grain phases, the diffusion
CodeTransOcean: A Comprehensive Multilingual Benchmark for Code Translation
Recent code translation techniques exploit neural machine translation models to translate source code from one programming language to another to satisfy production compatibility or to improve efficiency of codebase maintenance. Most existing code translation datasets only focus on a single pair of popular programming languages. To advance research on code translation and meet diverse requirements of real-world applications, we construct CodeTransOcean, a large-scale comprehensive benchmark that supports the largest variety of programming languages for code translation. CodeTransOcean consists of three novel multilingual datasets, namely, MultilingualTrans supporting translations between multiple popular programming languages, NicheTrans for translating between niche programming languages and popular ones, and LLMTrans for evaluating executability of translated code by large language models (LLMs). CodeTransOcean also includes a novel cross-framework dataset, DLTrans, for translating deep learning code across different frameworks. We develop multilingual modeling approaches for code translation and demonstrate their great potential in improving the translation quality of both low-resource and high-resource language pairs and boosting the training efficiency. We also propose a novel evaluation metric Debugging Success Rate@K for program-level code translation. Last but not least, we evaluate LLM ChatGPT on our datasets and investigate its potential for fuzzy execution predictions. We build baselines for CodeTransOcean and analyze challenges of code translation for guiding future research. The CodeTransOcean datasets and code are publicly available at https://github.com/WeixiangYAN/CodeTransOcean.
Introduction
Early software systems are developed using programming languages such as Fortran and COBOL, which have a significantly smaller user base compared to modern mainstream programming languages (e.g., Python and Java). Hence, maintaining and modernizing early software systems is expensive (Opidi, 2020). Moreover, the readability and compatibility of the mixed multitude of programming languages are challenging when migrating existing software systems to new technology ecosystems or integrating software systems that use different programming languages. The code translation task aims to convert source code from one programming language to another and is of great value in industry.
Code translation methods have evolved from the inefficient, costly, and error-prone manual rewriting method to automatic methods. Automatic code translation methods can be categorized into compilers and transpilers, rule-based methods, and neural network based methods. Neural models (Feng et al., 2020; Wang et al., 2021, 2023b) have become dominant in code translation. Details of code translation methods are presented in Appendix A.1. The performance of neural models relies heavily on large-scale high-quality parallel data. However, existing code translation datasets are limited by insufficient coverage of programming languages (mostly focusing on a single pair of popular programming languages), limited scale, and uneven data distribution. The widely used CodeTrans (Lu et al., 2021) is a small dataset containing only Java-C# parallel data with quite short code samples. Other datasets (Ahmad et al., 2023; Rozière et al., 2020; Zhu et al., 2022b; Nguyen et al., 2013; Chen et al., 2018) suffer from the same limitations. Consequently, existing code translation models (Feng et al., 2020; Wang et al., 2021; Ahmad et al., 2021) are confined to a narrow range of one-to-one code translation scenarios. Moreover, deep learning has been broadly used and has achieved unprecedented success. However, there are barriers between different deep learning frameworks during the actual production process. Existing code translation datasets also neglect important demands from real-world applications, including modernizing early software systems developed in niche programming languages and migrating code across different deep learning frameworks.
To address these limitations and advance neural code translation models, we construct a large-scale comprehensive multilingual code translation benchmark, CodeTransOcean, summarized in Table 1. CodeTransOcean is an innovative benchmark that aims to provide a unified platform for evaluating various models on a comprehensive set of code translation tasks that reflect real-world demands. Based on this goal, each dataset in CodeTransOcean is specifically designed to tackle a key challenge in the field of code translation. CodeTransOcean includes three multilingual datasets, namely, the MultilingualTrans dataset (including eight popular programming languages), the NicheTrans dataset (translating between thirty-seven niche programming languages and the eight popular ones), and a specialized dataset LLMTrans (including 350 data samples and their executed results) to evaluate the executability of code translated by large language models (LLMs), as well as a cross-framework dataset DLTrans facilitating our proposed task of translating code between deep learning frameworks to enhance code reusability. We define popular and niche programming languages based on the TIOBE Programming Community Index, which is a metric of the popularity of programming languages.
DLTrans includes 408 samples covering four mainstream deep learning frameworks.
Multilingual modeling shows great potential in neural machine translation (Aharoni et al., 2019;Wang et al., 2020;Zhu et al., 2023), but it has not been systematically explored for code translation.We investigate multilingual modeling for code translation using our MultilingualTrans, NicheTrans, and DLTrans datasets.Experimental results demonstrate that multilingual modeling significantly improves translation quality for both high-resource and low-resource language pairs and improves the model training efficiency.
Recent research indicates that the proficiency of the LLM ChatGPT in natural language translation is on par with commercial-grade translation systems (Jiao et al., 2023). To the best of our knowledge, our work is the first to systematically investigate the potential of ChatGPT in code translation. We develop a fully automated translation-execution-evaluation pipeline AutoTransExecuter to support this study. Note that match-based metrics and execution-based metrics have been used for evaluating code translation methods, with details in Appendix A.1. In order to accurately evaluate the usability of translated code from ChatGPT, we propose a novel execution-based evaluation metric Debugging Success Rate@K (DSR@K), which is the percentage of samples with translation results that successfully execute and produce the expected functionality after K debugging rounds. On our LLMTrans dataset, the baseline ChatGPT setting achieves 48.57% DSR@0. We find that self-debugging and one-shot improve the performance while chain-of-thought strategies degrade the translation accuracy. Since our AutoTransExecuter still cannot cover arbitrary programming languages, we also propose a novel metric fuzzy execution, attempting to address the limitations of existing evaluation metrics for code translation. Our preliminary study using ChatGPT shows that ChatGPT is still inadequate to predict fuzzy execution for any arbitrary programming language, which demands future research.
Our contributions can be summarized as follows:
• A large-scale multilingual code translation benchmark: CodeTransOcean covers the largest number of popular and niche programming languages so far with the largest scale. It also includes an unprecedented dataset for translating code across different deep learning frameworks and a dataset and an automated pipeline for evaluating LLMs on code translation. We establish baselines for all datasets in CodeTransOcean.
• Multilingual modeling for code translation: We are the first to systematically evaluate multilingual modeling on code translation for both high-resource and low-resource language pairs. Experimental results demonstrate that multilingual modeling significantly improves translation quality for both high-resource and low-resource language pairs and improves training efficiency.
• ChatGPT on code translation: We conduct the first comprehensive study of the potential of ChatGPT on code translation, investigating the efficacy of prompting strategies, hyperparameters, self-debugging, One-shot, and Chain-of-Thought.
• New evaluation metrics: We propose DSR@K to evaluate the translation and debugging capabilities of LLMs. We also propose a fuzzy execution metric based on LLMs and conduct a preliminary study using ChatGPT on this metric.
Related Work
Code Translation Datasets The success of neural models for code translation relies heavily on large-scale high-quality parallel data. However, existing code translation datasets are plagued by issues such as insufficient coverage of programming languages, limited scale, and imbalanced data distribution. The widely used code translation dataset CodeTrans (Lu et al., 2021) contains only Java-C# parallel data with quite short code samples, and other datasets have been reported to suffer from quality issues (Zhu et al., 2022b), making them less suitable for code translation tasks. With the limitations of existing code translation datasets, neural models trained on them may encounter overfitting, underfitting, and poor generalizability. Clearly, these issues impede the development of neural models for code translation. Therefore, constructing datasets that effectively address these problems is critical to enhance the performance of code translation algorithms.
Code Translation Methods and Evaluation Metrics Details of code translation methods and evaluation metrics are presented in Appendix A.1.
The CodeTransOcean Benchmark
In this section, we provide detailed descriptions and analyses of our CodeTransOcean benchmark, including the code translation tasks, their associated datasets, and dataset statistics. Details of data collection methods and licensing information, as well as quality control and quality assessment, are presented in Appendix A.2. There is no overlap between the CodeTransOcean datasets and existing code translation datasets.
Multilingual Code Translation
With the increasing need to unify the language variety when implementing system integration or extensions in multilingual programming environments, we construct the MultilingualTrans dataset for multiple popular programming languages. Among the programming languages in the rankings, we select the Top-10 languages as popular ones, except JavaScript and SQL, and construct the MultilingualTrans dataset based on the remaining 8 programming languages. We treat the other languages in the rankings as niche languages and construct the NicheTrans dataset for translating between niche languages and popular languages. Additionally, in order to quantitatively evaluate the execution capabilities of the code generated by LLMs (e.g., ChatGPT, PaLM2 (Anil et al., 2023)), we construct LLMTrans, which includes the execution results for a subset of MultilingualTrans and facilitates evaluating LLMs for multilingual code translation.
MultilingualTrans Dataset This dataset contains 30,419 program samples covering eight popular programming languages, namely, C, C++, C#, Java, Python, Go, PHP, and Visual Basic.Table 11 shows the statistics of each language pair.Note that XLCoST (Zhu et al., 2022a) is the only existing multilingual code translation dataset.Compared to XLCoST, MultilingualTrans is advantageous in more balanced data distribution across various programming languages, practicality of language pairs, and data quality.For example, the real-world requirement for translating Java into JavaScript as in XLCoST is quite limited.As to data quality, our MultilingualTrans originates from a programming chrestomathy website, with all data already reviewed and verified by the website.
NicheTrans Dataset
The NicheTrans dataset contains 236,468 program samples, covering code translation pairs from thirty-seven niche programming languages, including Ada, COBOL, Pascal, Perl, Erlang, Fortran, Scala, Julia and others, to the eight popular ones.Table 12 shows statistics of each niche language.Although many studies have highlighted the practical necessity of code translation for modernizing niche programming languages (Chen et al., 2018;Zhu et al., 2022b;Rozière et al., 2020), our NicheTrans dataset is the first dataset for code translation between these niche languages and popular ones.We believe this dataset will not only facilitate modernization of outdated programming languages more effectively, but also augment and evaluate generalizability of neural models.
LLMTrans Dataset
The LLMTrans dataset aims to provide a benchmark for evaluating the performance of LLMs on code translation.The dataset translates seven popular programming languages to Python, totaling 350 program samples.We compile and test these samples and record the execution results.Based on this dataset, we design and implement an automated pipeline, AutoTransExecuter8 , automatically using LLMs to conduct code translation, execution, debugging, and calculating the success rate.This dataset and the automated pipeline ease investigation of the actual debugging success rate of LLMs on code translation and effectively measure the practical usability of LLMs.Details of the LLMTrans dataset are in Table 1.
Cross-framework Code Translation
Cross-Deep-Learning-Framework Translation Task The widespread application of deep learning (DL) has spawned the emergence of various DL frameworks, such as PyTorch, TensorFlow, MXNet, and Paddle. However, there are significant differences in syntax and dependency libraries between different frameworks, severely impeding the reusability of projects. Moreover, studies illustrate significant disparities in energy consumption and economic costs during training and inference between various frameworks (Georgiou et al., 2022). Selecting an appropriate DL framework for green AI has become paramount in an era of large models (Ananthaswamy, 2023).
Experiments
We present experiments of multilingual training for code translation (Section 4.1).We then introduce a novel evaluation metric Debugging Success Rate@K for program-level code translation (Section 4.2) and the first comprehensive exploration of ChatGPT for code translation (Section 4.3).
Multilingual Modeling
Multilingual modeling has been pivotal in broadening the applicability of neural machine translation (Aharoni et al., 2019; Wang et al., 2020; Zhu et al., 2023; Johnson et al., 2017). This is primarily evidenced in enhancing the performance of low-resource languages and cross-language transfer learning (Mohammadshahi et al., 2022; Zoph et al., 2016; Nguyen and Chiang, 2017; Johnson et al., 2017). CodeTransOcean covers nearly fifty programming languages and deep learning frameworks. We use its datasets to explore multilingual modeling on code translation tasks.

Table 3: Average BLEU scores of the four multilingual modeling strategies, One-to-One, Many-to-One, Many-to-Many, and One-to-Many, for All language pairs, High-resource language pairs, and Low-resource language pairs.
Experimental Setups In this work, we use the pre-trained CodeT5+ (Wang et al., 2023b) as the backbone, based on its superior performance on code understanding and generation evaluations reported in (Wang et al., 2023b). We use the MultilingualTrans dataset to investigate four multilingual modeling strategies based on data sharing in the source or target language or both, namely, One-to-One, One-to-Many, Many-to-One, and Many-to-Many, with One-to-One as the baseline. Details of the four strategies are in Appendix A.5. To understand the strengths and weaknesses of the four strategies, we compare their average performance on all language pairs and focus on low-resource and high-resource pairs. Since the CodeBLEU metric (Ren et al., 2020) does not cover all eight languages in MultilingualTrans, we use BLEU to measure translation accuracy for the four strategies. Then, we establish baselines for the DLTrans and NicheTrans datasets.
We rank the resource richness of the eight programming languages in MultilingualTrans in descending order based on their amounts in the CodeT5+ pre-training data, as Java, PHP, C, C#, Python, C++, and Go (Visual Basic is not covered by the CodeT5+ pre-training data).Based on this ranking, we consider Visual Basic, C++, and Go as low-resource languages and Java, PHP and C as high-resource languages.
Results and Analysis Detailed experimental results are shown in Table 14 in the Appendix. For All language pairs, the performance of the four strategies is ranked as One-to-Many > Many-to-Many > Many-to-One > One-to-One. (1) Under the One-to-Many strategy, the model encoder can provide more comprehensive information for source language translation due to its ability to absorb more source language features, thereby improving the generalizability of the model. (2) Many-to-Many can be considered as expanding the One-to-Many strategy by employing a greater volume of non-source language data for training. Since the encoder must be attuned to the features of various languages simultaneously under Many-to-Many, parameter sharing may potentially undermine the performance. (3) Many-to-One helps the model to learn from a broader range of data than the baseline. Specific patterns or expressions in diverse source languages assist the model in more precisely comprehending how to translate into the target language. The shared semantic representations across different source languages allow the model to implement effective transfer learning strategies. Furthermore, the increase in training samples enables the model to optimize the loss function more stably. These results are consistent with previous findings on multilingual modeling for natural language translation (Aharoni et al., 2019): Many-to-Many models, trained across multiple target languages instead of just one target language, can function effectively as a regularization strategy for Many-to-One, thereby reducing the possibility of overfitting.
For High-resource and Low-resource languages, as shown in Table 3, the ranking of the four strategies is the same as for All, but there is notable difference in their adaptability across languages of varying resource scales.High-resource languages can take advantage more effectively from the shared information across multiple source languages; whereas, low-resource languages are relatively less equipped to handle the additional uncertainty and noise introduced by shared parameters, and thus often have to rely on a larger volume of source language data to optimize their benefits.
Results from the Many-to-Many strategy on the DLTrans and NicheTrans datasets are shown in Tables 4 and 5. The experimental results suggest that significant improvements in translation accuracy can be achieved by swapping the source and target languages in the training set to facilitate data augmentation and training a bidirectional model. Notably, prior studies on multilingual neural machine translation often overlook the comparison between One-to-Many and other strategies. Nevertheless, One-to-Many demonstrates superiority over the One-to-One baseline across all our experiments. Overall, our results strongly recommend a targeted multilingual modeling strategy for code translation, as it not only can translate multiple language pairs with a single model, but also achieves better and more stable accuracy than the baselines.
Debugging Success Rate@K
For evaluations, we adopt existing code translation evaluation metrics in our experiments, including Exact Match (EM), BLEU, and CodeBLEU (details are in Appendix A.1.2).However, all these metrics are based on surface-form matching (or with some adaptations as for CodeBLEU) and are not suitable for our program-level translation tasks since they cannot reliably evaluate functional correctness of translated code.Moreover, in real-world software development scenarios, developers typically ensure the functionality of code by testing and debugging upon completion, rather than writing and testing multiple versions of the code to achieve the expected functionality as measured by the existing pass@k (Kulal et al., 2019) metric.
Meanwhile, recent research shows that LLMs such as ChatGPT demonstrate preliminary code debugging capabilities (Chen et al., 2023a,b). Hence, we propose a novel and robust evaluation metric for LLMs on code translation, Debugging Success Rate@K (DSR@K), which measures whether the translated code can be compiled and executed with the same behavior as the input source code within K rounds of debugging. To the best of our knowledge, DSR@K is the first metric designed to accurately reflect real-world software development scenarios.
DSR@K is the percentage of samples that successfully execute and produce the expected results among all samples. Each sample is given K generation and debugging attempts by an LLM. If the generated code successfully executes and produces the expected results within these K rounds, the sample is marked as successful. DSR@K is computed as DSR@K = (1/N) Σ_{i=1}^{N} S(i, K), where N denotes the total number of samples; if the i-th code sample succeeds within K attempts, then S(i, K) = 1, otherwise S(i, K) = 0. Note that DSR@0 can be used for program-level code translation evaluation for any model. In this work, we employ DSR@K to evaluate the ability of LLMs such as ChatGPT to debug code and to translate code with debugging feedback.
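A minimal sketch of the DSR@K computation described above; `success_round` records, for each sample, the debugging round at which it first executed correctly (None if it never did). The variable and function names here are ours, not taken from the released code.

```python
from typing import List, Optional

def dsr_at_k(success_round: List[Optional[int]], k: int) -> float:
    """Debugging Success Rate@K = (1/N) * sum_i S(i, K).

    S(i, K) = 1 if sample i executes correctly within K debugging rounds,
    and 0 otherwise.
    """
    n = len(success_round)
    hits = sum(1 for r in success_round if r is not None and r <= k)
    return hits / n

# Example: 5 translated programs; two succeed without debugging,
# one needs a single debugging round, and two never succeed.
rounds = [0, None, 1, 0, None]
print(dsr_at_k(rounds, 0))  # 0.4 -> corresponds to DSR@0
print(dsr_at_k(rounds, 1))  # 0.6
```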
ChatGPT for Code Translation
The recent LLM ChatGPT demonstrates competitive performance on language generation tasks such as summarization and machine translation (Yang et al., 2023; Peng et al., 2023; Gao et al., 2023). However, ChatGPT for code translation has not been systematically explored. We study the effectiveness and potential of ChatGPT on code translation and investigate strategies to improve its performance. We use DSR@K as the principal evaluation metric since we focus on the practical usability of ChatGPT. We use the ChatGPT API and gpt-3.5-turbo as the default model and evaluate on the LLMTrans dataset for all experiments. We investigate the efficacy of prompts, hyperparameters and context in the zero-shot setting, then compare one-shot versus zero-shot and study Chain-of-Thought.
Effect of Prompts and Hyperparameters
Prior works show that prompts can influence the performance of ChatGPT (Zhong et al., 2023;Peng et al., 2023;Jiao et al., 2023).We set an initial prompt "Translate [SL] to [TL]: [SC]." as the baseline, where [SL] and [TL] denote the source language and the target language respectively and [SC] denotes the source code.We also add "Do not return anything other than the translated code." for each prompting strategy to require ChatGPT to return only code in order to ease code execution.We design three prompt variants.Details of the experimental settings and prompt variants are in Appendix A.6.We also investigate the effect of hyperparameters on code translation performance.
As shown in Table 6, implementing role assignments, clarifying usage, and polite inquiry in prompts all degrade the performance compared to the baseline prompt. These results show that the baseline with the most straightforward prompt produces the best performance, possibly because it provides clear, short, and unambiguous instructions for the task to the model. More intricate prompting strategies may introduce noise and confuse ChatGPT. The performance of the polite inquiry prompt is comparable to but still worse than the baseline performance. We speculate that the improvement from polite inquiries in prior studies (Akın, 2023) may stem from their explicit and comprehensive formulations, which make it easier for the model to understand the task requirements. We also observe in Table 6 that, in line with prior findings, BLEU and CodeBLEU have no obvious positive correlation with the debugging success rate (DSR@0). Since the reference target code exhibits the same functionality as the source language code but their execution results could differ slightly, EM also does not correlate with DSR@0. Therefore, in subsequent experiments, we only report DSR@0. We also evaluate the CodeT5+_220M model on LLMTrans with the Many-to-Many strategy and find that its DSR@0 is 0, suggesting that CodeT5+_220M is unable to generate executable translation results.
ChatGPT selects the token with the highest probability during generation. The hyperparameter temperature influences the randomness of the generated text, while top_p controls the range of vocabulary considered during generation. A higher temperature or top_p could increase diversity in the results generated by ChatGPT. However, as shown in Table 16 in the Appendix, independently varying temperature or top_p does not notably change the performance of ChatGPT; hence, for the other ChatGPT experiments, we set both temperature and top_p to 0 to ensure stability and reproducibility.
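For reference, a minimal sketch of issuing the baseline translation prompt with the deterministic decoding settings described above (temperature and top_p set to 0). It assumes the legacy `openai` Python client and the `gpt-3.5-turbo` model named in the text; the trailing instruction sentence mirrors the one quoted earlier, and the exact request wrapper is our assumption rather than the authors' released code.

```python
import openai  # legacy (<1.0) openai-python interface assumed


def translate(source_lang: str, target_lang: str, source_code: str) -> str:
    """Send the baseline code-translation prompt and return the model output."""
    prompt = (
        f"Translate {source_lang} to {target_lang}: {source_code}. "
        "Do not return anything other than the translated code."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic decoding, as in the experiments
        top_p=0,
    )
    return response["choices"][0]["message"]["content"]
```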
Effect of Context
We explore a Divide-and-Conquer strategy, which segments the source language code into snippets (e.g., functions and subfunctions), translates each snippet independently, and then merges their outputs as the final result. As shown in Table 6, Divide-and-Conquer significantly degrades the performance. We hypothesize that the lack of global context in Divide-and-Conquer could prevent ChatGPT from considering the overall structure and variable configurations of the code during translation.
Effect of Self-debugging Since ChatGPT has shown preliminary capability in error detection and correction during code generation (Shinn et al., 2023; Chen et al., 2023b; Kim et al., 2023; Nair et al., 2023; Madaan et al., 2023), we use ChatGPT to perform multiple rounds of self-debugging and investigate the impact on DSR. Specifically, ChatGPT first translates the source language code into the target language (which is Python, as in our AutoTransExecuter) and then attempts to execute the translated code. If the execution passes and the translated code exhibits the same functionality as the source code, it is regarded as a successful execution. Otherwise, the feedback from the compiler is also fed back to ChatGPT for the next round of translation, and this process is repeated until reaching a pre-defined number K of debugging rounds. The whole process is shown in Table 17 in the Appendix. As shown in Table 7, DSR improves significantly with multiple rounds of self-debugging. The first self-debugging round improves DSR by 3% absolutely. Each subsequent round of self-debugging brings further gains, but DSR begins to plateau after the second debugging round. This suggests that ChatGPT has limited capacity to rectify errors after multiple debugging cycles, which is consistent with human behavior.
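A simplified sketch of the translate-execute-self-debug loop described above, for the case where the target language is Python (as in AutoTransExecuter). `translate_with_chatgpt` and `ask_chatgpt_to_fix` stand in for the actual API calls and are hypothetical names, and the functional-equivalence check is reduced here to comparing captured stdout with an expected output.

```python
import subprocess


def run_python(code: str, timeout: int = 30):
    """Execute candidate code and return (ok, output_or_error)."""
    proc = subprocess.run(
        ["python", "-c", code], capture_output=True, text=True, timeout=timeout
    )
    return proc.returncode == 0, proc.stdout if proc.returncode == 0 else proc.stderr


def translate_with_self_debugging(source_code, expected_output,
                                  translate_with_chatgpt, ask_chatgpt_to_fix,
                                  max_rounds: int = 3):
    """Return (final_candidate, round_at_success_or_None)."""
    candidate = translate_with_chatgpt(source_code)
    for attempt in range(max_rounds + 1):      # initial attempt + up to K debug rounds
        ok, output = run_python(candidate)
        if ok and output == expected_output:   # same observable behaviour
            return candidate, attempt
        if attempt < max_rounds:
            # Feed the interpreter feedback back to the model for another round.
            candidate = ask_chatgpt_to_fix(candidate, output)
    return candidate, None
```

The returned round index can be fed directly into the DSR@K computation sketched earlier.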
Effect of One-shot In-context learning (Brown et al., 2020) allows the model to learn from input examples, enabling it to understand and manage each new task.This method has been validated as an effective strategy for enhancing the performance of model inference (Peng et al., 2023;Liu et al., 2023a).Therefore, we explore one-shot learning for ChatGPT on code translation.We investigate three one-shot learning sample selection strategies.Descriptions of the strategies and the corresponding prompts are in Appendix A.7. Table 8 shows that all three One-shot learning strategies effectively improve DSR@0 of ChatGPT over the Zero-shot baseline.The Experiment#2 strategy (provided contextual example has both same source and target languages as the original task) achieves the best performance, yielding 1.72% absolute gain in DSR@0, with Experiment #1 (example has the same target language but different source language) and #3 (example has different source and target languages) following closely with 1.14% and 0.29% absolute gains, respectively.These results show that One-shot learning entirely tailored to the translation requirements is most effective in boosting code translation performance for ChatGPT.The results corroborate previous findings in natural language translation (Peng et al., 2023) that the performance of ChatGPT is sensitive to the provided contextual example in One-shot learning.
Effect of Chain-of-Thought Chain-of-Thought (CoT) prompting allows the model to simulate an orderly and structured way of thinking by laying out its reasoning process, helping guide the model to produce the final answer step by step (Wei et al., 2022; Peng et al., 2023; Kojima et al., 2022). For code translation, we investigate four CoT strategies; detailed descriptions and translation prompts for each strategy are in Appendix A.8. For example, one of the strategies asks the model to predict the output of the source code before translating it, with the condition that the translated code must successfully execute; the specific translation prompts are shown in Table 19. As shown in Table 8, CoT degrades the executability of the translated code. In Experiment #2, DSR@0 even declines by 6% absolutely. We study the translation results of ChatGPT and find that when CoT strategies are applied, the model tends to translate the source code line by line, neglecting compatibility issues between libraries and functions in different languages. CoT also compromises the global planning ability of the model. These observations are consistent with the findings in (Peng et al., 2023) that CoT may lead to word-by-word translations of natural language, thereby degrading the translation quality.
Fuzzy Execution To address the limitations of existing evaluation metrics and of our AutoTransExecuter, we propose another novel code translation evaluation metric, fuzzy execution using LLMs, in the Limitations section, inspired by recent progress in using LLMs as evaluation metrics for NLP tasks.
Our preliminary study evaluates the performance of ChatGPT for predicting whether a given code can be executed or not and, if executable, also for predicting the executed output. Experimental results show that using ChatGPT for fuzzy execution is not yet practical and demands future research.
Conclusion
We construct CodeTransOcean, a comprehensive code translation benchmark that includes multilingual and cross-framework datasets.We demonstrate that multilingual modeling has remarkable potential in enhancing code translation quality.We also reveal the superior code translation capability of ChatGPT and advanced strategies lead to significant performance gains.Moreover, we introduce fuzzy execution that may overcome limitations of existing metrics but requires future research.In summary, we provide a comprehensive suite of resources, tools, and baselines for code translation.
Limitations
Existing match-based evaluation metrics for code translation (Papineni et al., 2002; Ren et al., 2020; Eghbali and Pradel, 2022; Zhou et al., 2023; Tran et al., 2019) focus solely on semantics, overlooking the executability of the code and the functional equivalence of different implementations. Execution-based metrics (Kulal et al., 2019; Hao et al., 2022; Hendrycks et al., 2021; Rozière et al., 2020; Dong et al., 2023) that require providing test cases are expensive to conduct in practice, and the significant overhead of executing numerous test cases and the heightened security risks during the execution process remain unresolved. It is crucial to establish an evaluation metric that overcomes these limitations.
Our proposed DSR@K and the automated AutoTransExecuter aim to measure the executability of the code and reflect real-world software development scenarios. However, AutoTransExecuter currently only supports Python as the target language. This is mainly due to the fact that different programming languages necessitate distinct runtime environments and libraries, making it particularly challenging to automatically detect and install the required dependencies for each code sample. While certain existing tools, such as Dynatrace, can carry out dependency detection, the range of supported programming languages remains limited. Moreover, the configuration methods for compilers vary substantially among different programming languages, which further complicates automated configuration. In addition, fully automated execution systems could be exploited by malicious code, thus necessitating further security measures. Therefore, achieving this goal requires overcoming many technical and practical difficulties.
To address limitations of existing evaluation metrics and limitations of AutoTransExecuter, we propose another novel code translation evaluation metric fuzzy execution.
Recent studies have begun to utilize LLMs as evaluation metrics in the field of NLP (Chen et al., 2023c;Wang et al., 2023a;Fu et al., 2023;Kocmi and Federmann, 2023;Ji et al., 2023).Inspired by these works, we create a new dataset ExecuteStatus by randomly selecting 300 executable samples from MultilingualTrans and 300 non-executable samples from the translation results of ChatGPT.Each entry in this dataset includes the execution status and, if executable, the result of the execution.
We use ExecuteStatus and AutoTransExecuter to evaluate the performance of ChatGPT for predicting whether a given code can be executed or not, and if executable, also predict the executed output.The Zero-shot prompts are shown in Table 18 in Appendix.For the Few-shot strategy, in addition to the Zero-shot baseline, we include an example of executable code and an example of non-executable code, as detailed in Table 18.
We define fuzzy execution as first testing the consistency between the actual pass rate and the pass rate predicted by ChatGPT, followed by further testing the accuracy of predicting execution results using ChatGPT without relying on a compiler. Since we are interested in the ability of ChatGPT to accurately identify samples that cannot actually be executed, we present the confusion matrix in Table 9 based on the results. To evaluate the performance of ChatGPT on the fuzzy execution prediction task, we use the standard accuracy, precision, recall, and F1 scores. Experimental results based on these evaluation metrics are in Table 10. The low accuracy, recall and F1 scores show that ChatGPT still has difficulty in identifying errors in the code, exhibiting about an 88% tendency to predict that the code is executable. Overall, ChatGPT has low accuracy in the binary classification task of "whether it can be executed", and its ability to predict execution results, being at a scant 4%, clearly requires further enhancement. Thus, using ChatGPT for fuzzy execution is not yet practical (Liu et al., 2023b). Despite this, fuzzy execution with LLMs holds the potential to overcome the deficiencies of current code translation evaluation metrics. We will continue this exploration in future work.
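For completeness, the accuracy, precision, recall, and F1 values in Table 10 follow from the confusion-matrix counts in Table 9 in the usual way. The sketch below uses placeholder counts (Table 9 itself is not reproduced in this text) and treats "not executable" as the positive class, since the interest is in catching code that actually fails.

```python
def classification_scores(tp: int, fp: int, fn: int, tn: int):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Placeholder counts: tp = non-executable samples correctly flagged,
# fp = executable samples wrongly flagged as non-executable, etc.
print(classification_scores(tp=40, fp=20, fn=260, tn=280))
```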
A Appendix
A.1 Related Work A.1.1 Code Translation Methods Naive Copy directly duplicates the source code as the target code without making any modifications.Given that the results produced by this method are often unusable, it is treated as the lower bound of performance for code translation.Early code translation relies heavily on manual rewriting, which requires developers to have a deep understanding of both source and target languages along with the ability to navigate various complex programming structures and semantic challenges.This method is inefficient, costly, and prone to errors.
Automatic code translation methods fall into several categories.Compilers and transpilers13 can automatically translate the source code into a target language, significantly saving time and effort.However, these methods cannot fully preserve all the linguistic features and behaviors of the source code, nor can they comprehend the intent and semantics inherent to the source code as humans do.Rule-based methods (Weisz et al., 2021(Weisz et al., , 2022;;Rozière et al., 2020) treat the code translation task as a program synthesis problem.They define a set of transformation rules and employ the rules or pattern matching for code translation.Research on rule-based methods is quite scarce, mainly because they overly rely on the completeness of the rules and also require a considerable amount of manual preprocessing.
Neural network based methods have become dominant in the field of code translation in recent years.These methods mainly treat code translation as a sequence-to-sequence generation problem.Among them, Chen et al. (Chen et al., 2018) are the first to successfully apply neural networks to code translation, designing a tree-to-tree neural model.CodeBERT (Feng et al., 2020) significantly improves code translation accuracy by pretraining models with masked language modeling and replaced token detection.GraphCode-BERT (Guo et al., 2021) further improves code translation accuracy by introducing two additional pre-training tasks as edge prediction and node alignment.CodeT5 (Wang et al., 2021), based on the Transformer encoder-decoder architecture, achieves excellent performance on code translation through four pre-training tasks, namely, masked span prediction, identifier tagging, masked identifier prediction, and bimodal dual generation.With a similar architecture as CodeT5, PLBART (Ahmad et al., 2021) adopts three tasks of token masking, token deletion and token infilling for denoising seq2seq pre-training, which enables PLBART to infer language syntax and semantics and to learn how to generate language coherently.Nat-Gen (Chakraborty et al., 2022) forces the model to learn to capture intent of the source code by setting up "Code-Naturalization" tasks during pre-training, and forces the model to make the generated code closer to the human-written style.
In the line of neural network based methods, recently released large language models (LLMs) (e.g., ChatGPT (OpenAI, 2023)) have shown remarkable performance in a wide range of NLP tasks with instructions and a few in-context examples.ChatGPT is built upon GPT and is optimized with Reinforcement Learning from Human Feedback.ChatGPT can efficiently understand and generate code sequences, and can self-learn from human feedback to improve the quality and accuracy of its outputs.This significant advancement has markedly propelled progress in the field of code translation.
A.1.2 Code Translation Metrics
Match-Based Evaluation Metrics These evaluation metrics are based on the similarity between the translation output and the reference translation. Among them, the Exact Match (EM) metric calculates the percentage of translation outputs that exactly match the reference translation, which overlooks the fact that the same function can be implemented in various ways. The Bilingual Evaluation Understudy (BLEU) (Papineni et al., 2002) metric evaluates the similarity between the translation output and the reference translation by multiplying the geometric average of n-gram precision scores with a brevity penalty. The CodeBLEU (Ren et al., 2020) metric extends BLEU by considering syntactic and semantic characteristics of programming languages; it not only considers shallow matching but also pays attention to syntactic and semantic matching. CrystalBLEU (Eghbali and Pradel, 2022) focuses more on the inherent differences between source code and natural language, such as trivial shared n-gram syntax. CodeBERTScore (Zhou et al., 2023) uses pre-trained models to encode the translation output and reference translation, then calculates the dot product similarity between them, enabling comparisons of code pairs with distinct lexical forms. However, CodeBLEU, CrystalBLEU, and CodeBERTScore have limitations as they only support a limited range of programming languages and cannot be used in general multilingual scenarios. Ruby (Tran et al., 2019), a new method for evaluating code translation, considers the lexical, syntactic, and semantic representations of source code. However, its codebase has not yet been open-sourced. These match-based evaluation metrics can only evaluate the surface form and semantic differences of the code, while neglecting the executability of the code and the functional equivalence of implementation variations.
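As a concrete illustration of the match-based metrics described above, corpus-level BLEU between translated code and reference code can be computed with an off-the-shelf implementation. The sketch below assumes the `sacrebleu` package; this is our choice of tooling for illustration, not necessarily the one used by the authors.

```python
import sacrebleu

# One hypothesis (model translation) per sample, and one reference stream
# aligned with the hypotheses.
hypotheses = ['print("hello world")']
references = [['print("hello, world")']]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```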
Execution-Based Evaluation Metrics Execution-based evaluation metrics mainly compare the executed result of the generated code with the expected result. The PASS@k score (Kulal et al., 2019) is evaluated by unit tests: if any of the k samples meets the expected result, the generated result is deemed successful. AvgPassRatio (Hao et al., 2022; Hendrycks et al., 2021) evaluates the overall executable result of code by calculating the average pass rate of test cases. Computational accuracy (Rozière et al., 2020) measures the quality of the generated code snippet by comparing the output of this snippet with that of the reference code snippet when given the same input. Additionally, CodeScore (Dong et al., 2023) claims that it can estimate the PassRatio of test cases for the generated code without executing the code, but its codebase has not yet been open-sourced. These execution-based evaluation metrics require the construction of executable test cases, which is expensive in practice.
A.3 Specific Challenges in Implementing
Cross-framework Translation Firstly, there are significant design differences between frameworks, including data processing methods, model-building strategies, and network connection techniques.Secondly, the inherent complexity of DL code increases the difficulty of conversion, as these codes usually contain various components such as neural network layers, loss functions, optimizers, and learning rate schedulers.
Thirdly, there are significant inconsistencies in the code structure of different frameworks, such as code organization and variable naming rules.Lastly, cross-platform compatibility must be considered because DL code may encounter compatibility issues when executing on different hardware platforms (e.g., GPUs, CPUs, TPUs) and operating systems.
A.4 Code Examples on Different Deep Learning Frameworks
Figures 1 and 2 show the implementation of two different deep learning components in various deep learning frameworks.
A.5 Multilingual Modeling
One-to-One For each language pair in the dataset, we train an independent model, e.g., translating C++ to Java.
One-to-Many We train individual models from one language to many other languages, e.g., translating Python to all other languages.
Many-to-One We train individual models from multiple languages to one language, e.g., translating all other languages to Python.
Many-to-Many
We train a unified model for the multiple to multiple languages in the dataset, which can handle translations between all languages.
We ensure all experiments are performed under the same hyperparameters and environment for comparison.Table 13 shows these in detail.
A.6 Prompt Variations
Role Assignment (Peng et al., 2023;AlKhamissi et al., 2023;Wu et al., 2023;Akın, 2023) We configured two distinct roles for the model, each with unique skills.This arrangement empowers the model to simulate more domain-adaptable and specialized expert roles.
Polite inquiry (Akın, 2023) These strategies add polite expressions and set up imperative and interrogative requests. Given that ChatGPT is designed to simulate human conversation styles as closely as possible, including understanding and simulating polite language expressions, we expect these strategies to boost the comprehension of the model and augment the quality of its generated results.
Clarify usage This strategy aims to make the model clearly aware of its requirements during the code translation process: the generated code needs to be guaranteed to execute without issues.

Table 17: A simple demo: translation prompting of ChatGPT in the multi-round debugging strategy. The content in red is returned by the compiler.
Self-debug@0: Translate [source_language] to [target_language]: [source_code]. Here is the [target_language] code equivalent of the given [source_language] code: [translated_code].
Self-debug@n: The above Python code executes with the following errors, please correct them. [Compiler reports errors] Here is the modified [target_language] code: [translated_code].

Prompting demos for the fuzzy execution experiments (Table 18):
Zero-shot prompting: Does the following Python code execute? [python_code]. Yes, the Python code executes without errors. Please predict the executed output of the Python code above. The predicted execution result of the Python code above is [output].
Few-shot prompting: This is an executable Python code [python_code], and this is a Python code [python_code] that cannot be executed. Does the following Python code execute? [python_code].
Table 1: Summary of our CodeTransOcean. We report #Samples, Avg. #Tokens/Sample and Avg. Length for the Train/Dev/Test sets of each dataset. Note that LLMTrans is only for testing. #Samples are on the program level. #Tokens are based on the RoBERTa tokenizer.
Table 4: Results on DLTrans of Naive and CodeT5+_220M with the Many-to-Many strategy. We run each experiment with 3 random seeds and report the mean and standard deviation of EM, BLEU, and CodeBLEU scores.
Table 5: BLEU scores on NicheTrans of Naive and CodeT5+_220M with the Many-to-Many strategy. One-way denotes training models only from niche to popular, while Two-way denotes training in both directions.
Table 6: Zero-shot performance of ChatGPT with different prompt variants and contextual strategies. Baseline denotes ChatGPT with the baseline prompt. Details of the prompt variants (Expt #num) are in Appendix A.6.
Table 7: ChatGPT performance at the K-th debugging.
For code translation, we investigate four CoT strategies. Detailed descriptions and translation prompts for each strategy are in Appendix A.8. As shown in Table 8, CoT degrades executability of the translated code.
Table 8: Performance of ChatGPT with One-shot and CoT strategies compared to the Zero-shot Baseline. Details of Expt #num are in Appendix A.7 and A.8.
Table 9: Confusion matrix of fuzzy execution prediction by ChatGPT with Zero-shot and Few-shot settings.
Table 10: Performance of ChatGPT on predicting fuzzy execution.
of the source code, then predict the output of the source code, and finally translate it, with the condition that the translated code must successfully execute. The specific translation prompts are shown in Table 19.
Table 14: BLEU scores from different multilingual modeling strategies obtained by fine-tuning the pre-trained CodeT5+_220M model (220M is the model size).
Table 18: Two simple demos: prompting in fuzzy execution experiments. [Return to Section 6.]
|
2023-10-11T18:43:47.944Z
|
2023-10-08T00:00:00.000
|
{
"year": 2023,
"sha1": "1823969b79f14b3bf99bde020ad6c6ede121cfdc",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2023.findings-emnlp.337.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "77b3f12cb9e41e6617c5140a2e7f64e957d1ef60",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
249049694
|
pes2o/s2orc
|
v3-fos-license
|
Clinical covariates that improve surgical risk prediction and guide targeted prehabilitation: an exploratory, retrospective cohort study of major colorectal cancer surgery patients evaluated with preoperative cardiopulmonary exercise testing
Background Preoperative risk stratification is used to derive an optimal treatment plan for patients requiring cancer surgery. Patients with reversible risk factors are candidates for prehabilitation programmes. This pilot study explores the impact of preoperative covariates of comorbid disease (Charlson Co-morbidity Index), preoperative serum biomarkers, and traditional cardiopulmonary exercise testing (CPET)-derived parameters of functional capacity on postoperative outcomes after major colorectal cancer surgery. Methods Consecutive patients who underwent CPET prior to colorectal cancer surgery over a 2-year period were identified and a minimum of 2-year postoperative follow-up was performed. Postoperative assessment included: Clavien-Dindo complication score, Comprehensive Complication Index, Days at Home within 90 days (DAH-90) after surgery, and overall survival. Results The Charlson Co-morbidity Index did not discriminate postoperative complications, or overall survival. In contrast, low preoperative haemoglobin, low albumin, or high neutrophil count were associated with postoperative complications and reduced overall survival. CPET-derived parameters predictive of postoperative complications, DAH-90, and reduced overall survival included measures of VCO2 kinetics at anaerobic threshold (AT), peakVO2 (corrected to body surface area), and VO2 kinetics during the post-exercise recovery phase. Inflammatory parameters and CO2 kinetics added significant predictive value to peakVO2 within bi-variable models for postoperative complications and overall survival (P < 0.0001). Conclusion Consideration of modifiable ‘triple low’ preoperative risk (anaemia, malnutrition, deconditioning) factors and inflammation will improve surgical risk prediction and guide prehabilitation. Gas exchange parameters that focus on VCO2 kinetics at AT and correcting peakVO2 to body surface area (rather than absolute weight) may improve CPET-derived preoperative risk assessment.
Introduction
Technical advances in surgery and anaesthesia have led to colorectal cancer surgery being performed on increasingly older patients, often with extensive comorbid disease (Findlay et al., 2011). For the growing numbers of high-risk patients for whom surgery still forms the cornerstone of cancer treatment appropriate preoperative risk stratification is essential to enable clinicians and patients to formulate a shared treatment plan.
Decreased functional capacity (caused by factors such as lifestyle choice, ageing, cancer biology, and neoadjuvant chemo-radiotherapy) increases postoperative morbidity and mortality (Silver & Baima, 2013;West et al., 2014). Functional capacity is a modifiable risk factor: a Delphi survey of anaesthetists and consumers identified preoperative fitness training as a top-ten research priority in anaesthesia (Bainbridge et al., 2012;Boney et al., 2015). Most promising is that a number of recent studies report a substantial reduction in postoperative complications following targeted prehabilitation in patients with modifiable risk, including deconditioning (poor fitness levels), anaemia, and malnutrition (Bolshinsky et al., 2018). Even in elderly patients, published data demonstrated that in the limited window of optimization prior to major gastrointestinal cancer surgery, prehabilitation is feasible and that this translates into measurable benefits in pre and post-operative functional capacity and reduction in postoperative complications (Tsimopoulou et al., 2015;Jack et al., 2011;Barberan-Garcia et al., 2018;Howard et al., 2019).
Importantly, improved surgical risk prediction for postoperative morbidity and mortality will not only help identify patients who would benefit most from prehabilitation, but also guide informed decision-making in those patients at highest risk. Risk prediction can be aided by a thorough understanding of patients' burden of comorbid disease (e.g. the Charlson Co-morbidity Index or the American College of Surgeons National Surgical Quality Improvement Program [NSQIP] Surgical Risk Calculator; Bilimoria et al., 2013; Chang et al., 2016), by preoperative blood tests (to identify modifiable risk factors such as anaemia, malnutrition, and inflammation) (Vardy et al., 2018; Chen et al., 2017; Edwards et al., 2011), and through assessment of functional capacity.
Functional capacity can be quantified using a static survey, e.g. Duke Activity Status Index (DASI), dynamic field walk tests, or the gold standard of cardiopulmonary exercise testing (CPET) (Moonesinghe et al., 2013). CPET is attractive in that in addition to objective assessment of preoperative functional capacity, it also provides diagnostic insight into the underlying cause of exercise limitation, and assists with exercise prescription.
Over the past two decades, CPET-derived gas exchange parameters have been increasingly studied for preoperative risk stratification, albeit in isolation and often with limited consideration of other preoperative parameters (Hightower et al., 2010). These perioperative CPET studies have generally focused on the kinetics of oxygen consumption (VO 2 ) during the active cycling phase of CPET-namely VO 2 at anaerobic threshold (AT), VO 2 at peak exercise (pVO 2 ), and typically dichotomised VO 2 (ml/kg/min) at AT and at peak exercise to predict postoperative morbidity and mortality (Wijeysundera et al., 2018). Few studies have explored the kinetics of carbon dioxide elimination (VCO 2 ) during the active cycling phase (CO 2 elimination relative to minute ventilation [Ve/ VCO 2 ] at AT or end-tidal CO 2 [P ET CO 2 ] at AT) (Moran et al., 2016;Levett & Grocott, 2015), the kinetics of heart rate related to the exercise (Hightower et al., 2010) and gas exchange parameters during the recovery phases of CPET (VO 2 R, VCO 2 R) in relation to postoperative outcomes (Ackland et al., 2019).
This retrospective cohort study set out to further explore the predictive value of the following: (1) preoperative covariates of patient burden of comorbid disease (Charlson Comorbidity Index) and/or preoperative serum biomarkers, and CPET-derived parameters of functional capacity; and (2) exploratory parameters of the recovery phase (after peak exercise) CPET-derived parameters on postoperative outcomes after major colorectal cancer surgery.
Patients
This retrospective cohort study was approved by the institutional review board at the Peter MacCallum Cancer Centre (#16/19R). Consecutive patients who had undergone CPET prior to colorectal cancer surgery over a 2-year period (September 2013-August 2015) were identified within a prospectively maintained hospital database. Surgery type included segmental colorectal resection, proctocolectomy, abdominoperineal resection, pelvic exenteration, and cytoreduction surgery with hyperthermic intraperitoneal chemotherapy (HIPEC). Patients received standard perioperative care that included enhanced recovery after surgery (ERAS) pathways. The postoperative destination (intensive care, high dependency unit, or regular nursing floor) was determined by the clinical team at the conclusion of surgery for each case.
Risk stratification
Preoperative risk assessment was estimated by routine clinical examination in the pre-anaesthesia clinic, with routine preoperative blood tests performed (full blood evaluation, urea, creatinine, and electrolytes) as clinically indicated, the Charlson Co-morbidity Index, and CPET-derived heart rate and gas-exchange variables. A modified Glasgow Prognostic Score was created by substituting neutrophil count for C-reactive protein (which was not routinely measured): 1 point was assigned for a neutrophil count > 7.5 × 10 9 cells/l and 1 point for an albumin level < 35 g/l. Zero points were assigned if both parameters were normal and 2 points were assigned if both parameters were abnormal. The neutrophil-lymphocyte ratio (NLR) and platelet-lymphocyte ratio (PLR) were calculated as markers of inflammation using the results obtained from the full blood evaluation panel (R Z, 2021).
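As a concrete illustration of the scoring rules just described, the following short Python sketch computes the modified Glasgow Prognostic Score and the two inflammation ratios; the function names and example values are ours, not part of the study protocol.

```python
def modified_glasgow_prognostic_score(neutrophils, albumin):
    """Modified GPS used here: neutrophil count substitutes for C-reactive protein.
    neutrophils in 10^9 cells/l, albumin in g/l; returns 0, 1, or 2 points."""
    score = 0
    if neutrophils > 7.5:   # abnormal neutrophil count
        score += 1
    if albumin < 35:        # abnormal albumin level
        score += 1
    return score

def inflammation_ratios(neutrophils, lymphocytes, platelets):
    """Neutrophil-lymphocyte (NLR) and platelet-lymphocyte (PLR) ratios
    from the full blood evaluation panel."""
    return {"NLR": neutrophils / lymphocytes, "PLR": platelets / lymphocytes}

print(modified_glasgow_prognostic_score(neutrophils=8.2, albumin=33))        # 2
print(inflammation_ratios(neutrophils=8.2, lymphocytes=1.6, platelets=310))  # hypothetical values
```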
CPET was performed as per the Perioperative Exercise Testing and Training Society (POETTS) practice guidelines (Levett et al., 2018) within four weeks of patients' scheduled elective surgery. Patients participating in the study were instructed not to eat or drink within 2 h of their scheduled CPET. The exercise test was conducted in five phases. Phase 1: Pulmonary function testing (sitting) to measure static lung function, including forced expiratory volume in 1 second and forced vital capacity. Phase 2: Resting phase-after applying the electrodes for a 12-lead ECG, arterial pressure cuff, pulse oximeter, and gas exchange collection mouth piece patients sat quietly for 3 min on the cycle ergometer for the collection of resting gas exchange derived data (CardiO2/CP System, Medical Graphics Corporation, USA). Phase 3: Unloaded cycling at 60-70 revolutions per minute (RPM) with no resistance for 3 min. Phase 4: Ramp protocol during which patients continue to cycle at 60-70 RPM with progressively increasing pedal resistance at a predetermined work rate (10-20 W/min) individualised to each patient's physical strength. The test was stopped either when the patient fatigued or at the investigator's discretion on the basis of signs or symptoms of cardiopulmonary distress. Phase 5: Recovery phase during which patients continue to pedal at 60-70 RPM with minimal resistance (20 W) for 5 min after peak exercise.
After CPET was completed, the test was analysed by anaesthesiologists accredited in CPET assessment (HI, BR), with independent cross-checking of the CPET data to ensure accuracy. Traditional perioperative CPET-derived parameters that were analysed included VO 2 at anaerobic threshold (AT; ml/kg/min) and pVO 2 corrected to patient body weight (ml/kg/min). Peak VO 2 data were also corrected to patient body surface area (ml/min/m 2 ). VO 2 at AT was determined according to the POETTS guidelines, using the three-point estimate of modified V-slope, ventilatory equivalents, and increasing end-tidal partial pressure of oxygen (P ET O 2 ) (Levett et al., 2018). Exploratory parameters that were analysed included CO 2 exchange parameters at AT (P ET CO 2 and Ve/VCO 2 , i.e. the minute ventilation to CO 2 production ratio, also known as ventilatory equivalents), heart rate kinetics (exercise and recovery phases), and VO 2 recovery in the first 5 min after achieving peak exercise.
Patient characteristics and postoperative outcomes
Data were extracted from the medical records by three investigators (ML, JB, and VB) and cross-referenced. Extracted data included patient demographic and anthropometric factors, and postoperative complications. Postoperative complications were graded according to the Clavien-Dindo scoring system (Dindo et al., 2004). Using the established Clavien-Dindo scores, the sum of all postoperative complications was calculated using the Comprehensive Complication Index (CCI) to derive a score out of 100 (Slankamenac et al., 2013). A patient-centric measure, Days at Home within 90 days after surgery (DAH-90), was used to account for complications, mortality, and re-admission rates (Myles et al., 2016). This metric has been shown to be very sensitive to quality improvement initiatives (Wijeysundera et al., 2018). Patients were followed up for a minimum of two years after their surgical procedure for overall survival analysis.
Statistical analysis
Statistical methods consisted of standard reporting of descriptive baseline statistics and standard statistical regression methods using the base package of the R language for statistical computing (version 3.6.0) and addon packages (survminer, ggplot2, gdata, graphics, grDevices, Hmisc, R2wd, rJava, utils, and xlsx). Some parameters were scaled as indicated in order to report their hazard ratios and the associated 95% confidence interval (95% CI) on a meaningful scale and were considered both in raw form and dichotomised at critical values where it was considered clinically meaningful to do so. Univariable Cox proportional hazards regression was performed for survival analysis, with survival curves compared by Log-Rank test and shown alongside number at risk (P < 0.05 considered significant).
To explore improved risk prediction using multivariable models, bi-variable models were considered within this data set due to the limited number of patients and the frequency of missing data. Due to the likelihood of high co-linearity within the various CPET-derived variables and inflammatory biomarkers, only models combining one CPET variable with one inflammatory marker were considered together. The best CPET factor (by univariable P-value) was combined with the single most predictive inflammatory marker, being the one which adds the most predictive value to the best CPET factor (again using P-value as the metric). This was undertaken for each of CCI, DAH-90, and overall survival as the independent variable. Due to the exploratory nature of this study adjustment for multiplicity was not used to correct for the large number of tests performed.
For the exploratory analysis of the recovery phase parameters, serial changes in HR (HRR, heart rate recovery), VO 2 , and CO 2 exchange parameters (P ET CO 2 , Ve/ VCO 2 ) from the time point of peak exercise through to the 5-min recovery period were analysed by repeated measures ANOVA (to avoid multiple statistical comparisons). In the univariate analyses, cases with missing data required for the analyses concerned were excluded and in the repeated measures of 2-way ANOVA and comparisons of slopes, all cases were included and only the missing data points were not considered in the overall analysis. Area-under-the-receiver-operating-characteristic (AUROC) curve was used to assess the ability of the rate (or slope) of recovery of each CPET parameter to discriminate between patients with and without mortality.
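The original analysis was run in R; as an equivalent, non-authoritative sketch of the univariable Cox regression, Kaplan-Meier comparison, and AUROC assessment described above, the following Python snippet uses the lifelines and scikit-learn packages. The file and column names (cpet_cohort.csv, time, event, ve_vco2_at, hrr_slope) are hypothetical placeholders.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test
from sklearn.metrics import roc_auc_score

# One row per patient: survival time, event flag (1 = death), and covariates.
df = pd.read_csv("cpet_cohort.csv")  # hypothetical file

# Univariable Cox proportional hazards regression for a single CPET covariate.
cph = CoxPHFitter()
cph.fit(df[["time", "event", "ve_vco2_at"]], duration_col="time", event_col="event")
cph.print_summary()  # hazard ratio with 95% CI and P-value

# Kaplan-Meier comparison with Ve/VCO2 at AT dichotomised at 35, compared by log-rank test.
high = df["ve_vco2_at"] > 35
res = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                   event_observed_A=df.loc[high, "event"],
                   event_observed_B=df.loc[~high, "event"])
print(res.p_value)

# AUROC: how well a recovery-phase slope discriminates non-survivors from survivors.
print(roc_auc_score(df["event"], df["hrr_slope"]))
```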
Results
The study cohort was representative of a complex colorectal surgical practice at a quaternary cancer centre. Eighty-four consecutive patients that underwent CPET prior to major colorectal cancer surgery during a 2-year period (August 2013 and September 2015) were identified and a minimum of 2-year postoperative follow-up for overall survival was performed in this cohort. Two patients required emergency surgical intervention prior to elective surgery at an external institution and consequently the number of patients available for overall survival analysis was 82 patients, with a greater male representation (55%). Baseline patient characteristics, surgical procedures, and postoperative complications are detailed in Table 1.
Complexity of surgical disease
This cohort of patients had a high burden of comorbid disease (Charlson Co-morbidity Index; median [IQR] = 6 [3-6]). Preoperative metastatic disease was documented in 39% of patients undergoing surgical intervention. More than half of the patients underwent intervention for rectal cancer, with an aggressive surgical approach. Multi-visceral resection was required in 41% of patients. One-third of the study cohort underwent cytoreduction and HIPEC. Intraoperative blood transfusion was required in 21% of patients and 5 patients required a re-look laparotomy. Two-thirds of patients suffered postoperative complications (CCI; median [IQR] = 24.2 [16.1-39.7]), and median DAH-90 was 75 (IQR, 56-79) days.
Postoperative complications
Univariable linear regression assessed the association between Charlson Co-morbidity Index, traditional preoperative laboratory values, CPET-derived variables, and postoperative complications (assessed by the CCI; Table 2).
The Charlson Co-morbidity Index was a poor discriminator of postoperative complications. In contrast, low preoperative haemoglobin and albumin (potentially modifiable risk factors) were associated with increased risk for postoperative complications. A preoperative pro-inflammatory state, depicted by a high preoperative neutrophil count (absolute or dichotomised at 7.5 × 10 9 cells/L), had a strong association with increased postoperative complications. This predictive value was significantly (P < 0.01) additive when preoperative neutrophil count was combined with albumin (a modification to the Glasgow Prognostic Score) (He et al., 2018) or considered within a bivariable model with the CPET-derived parameter pVO 2 (corrected to body surface area). CPET-derived gas exchange parameters that were most predictive of postoperative complications included pVO 2 (corrected to body surface area) and the measures of ventilation-perfusion (VQ) matching, namely low partial pressure of end-tidal CO 2 (P ET CO 2 ) and high ventilatory inefficiency (Ve/VCO 2 ), both measured at anaerobic threshold. Peak VO 2 (corrected to body surface area) and chronotropic response (heart rate increase > 25 beats per minute during exercise from rest to peak VO 2 ) had additive predictive value within a bivariable model.
Days at Home within 90 days after surgery
Univariable linear regression assessed the association between Charlson Co-morbidity Index, traditional preoperative laboratory values, CPET-derived variables, and Days at Home within 90 days after surgery (DAH-90; Table 3).
The Charlson Co-morbidity Index did not discriminate low versus high DAH-90. In contrast, low haemoglobin and albumin were associated with reduced DAH-90. A preoperative pro-inflammatory state, depicted by a high preoperative neutrophil count (absolute or dichotomised), was also strongly associated with reduced DAH-90. This was also evident when preoperative inflammation was considered in conjunction with albumin (modified Glasgow Prognostic Score) (P < 0.001).
The CPET-derived gas exchange parameters that were most predictive of reduced DAH-90 included increased high ventilatory inefficiency (Ve/VCO 2 ) measured at anaerobic threshold and peak VO 2 (corrected to body weight and to body surface area). Within a bi-variable model, the modified Glasgow Prognostic Score (using neutrophil count) and the CPET-derived parameter pVO 2 (corrected to body weight) had significant additive predictive value for reduced DAH-90 after surgery (P < 0.001).
Overall survival
Univariable linear regression assessed the association between Charlson Co-morbidity Index, traditional preoperative laboratory values, CPET-derived variables, and overall survival within two years after surgery (Table 4).
The Charlson Co-morbidity Index did not discriminate survivors from non-survivors. In contrast, low haemoglobin was strongly associated with reduced overall survival. Similarly, a preoperative pro-inflammatory state, depicted by a high preoperative neutrophil count (absolute or dichotomised), discriminated overall survival.
CPET-derived gas exchange parameters that were most predictive of reduced overall survival included poor CO 2 elimination at AT, namely low partial pressure for endtidal CO 2 and high ventilatory inefficiency (Ve/VCO 2 ) at anaerobic threshold, and pVO 2 (corrected to body surface area). Comparison of Kaplan-Meier curves (Fig. 1a-d) showed significant difference in survival when P ET CO 2 at AT was dichotomised at 35 mmHg (P = 0.001), when Ve/ VCO 2 was dichotomised at 35 (P = 0.001), when pVO 2 was dichotomised at 710 ml/min/m 2 (P < 0.01), and when Ve/VCO2 and peak VO2 were considered together. Importantly, VO 2 at AT (dichotomised at 11 ml/kg/min) did not discriminate survivors from non-survivors.
Within bivariable modelling, considering inflammation in conjunction with albumin (a modification to the Glasgow Prognostic Score), or neutrophil count together with peak VO 2 (corrected to body weight), added significant ability to discriminate between survivors and non-survivors. Similarly, when ventilatory efficiency (Ve/VCO 2 ) was considered within a bi-variable model with the CPET-derived parameter peak VO 2 (corrected to body weight), it also added significant predictive value for reduced survival after surgery (Fig. 1; P < 0.0001).
Exploratory CPET parameters
Heart rate at peak exercise and the heart rate recovery (HRR) after peak exercise were higher among the survivors than non-survivors. While change in heart rate and change in VO 2 over the entire testing period were not statistically different between survivors and non-survivors, the recovery phase slopes for HRR and for oxygen consumption (VO 2 R) were significantly different between survivors and non-survivors (Table 5), with modest ability to predict survival (HRR: AUROC 0.74, 95% CI 0.59-0.89; P = 0.002, and HRR > 0.87 had 69% sensitivity and 81% specificity; VO 2 R: 0.74, 95% CI 0.61-0.86; P = 0.008, and VO 2 R > 0.57 had 85% sensitivity and 66% specificity). The recovery phase slopes for CPET-derived CO 2 exchange parameters (VCO 2 , P ET CO 2 , Ve/VCO 2 ) were not significantly different between survivors and non-survivors.
Discussion
This study demonstrates that a 'triple low' state, characterised by low haemoglobin (anaemia), low albumin, and low functional capacity, compounded by a pro-inflammatory state as assessed by routine inflammatory markers, is associated with poorer postoperative outcomes, including medium-term overall survival. These findings add to the emerging body of evidence that dynamic assessment of physiological fitness using CPET can help predict adverse postoperative outcomes (including post-operative complications and mortality) in patients undergoing major colorectal cancer surgery. The Charlson Co-morbidity Index appears unreliable for predicting post-operative complications in this patient population.
Utilisation of CPET parameters
Previous studies have demonstrated that patients undergoing intra-abdominal surgery with an AT between 10 and 12 ml/kg/min have increased postoperative risk, with an AT of < 10.1 ml/kg/min being a strong predictor of morbidity, and an AT < 10.9 ml/kg/min being a predictor of mortality (Moran et al., 2016). In a recent meta-analysis of objective assessment of physical fitness in patients undergoing colorectal cancer surgery we reported that deconditioned (AT < 11 ml/kg/min) patients had a three- to five-fold higher incidence of postoperative complications than those patients deemed 'fit', but we were unable to identify a pooled cut point to predict postoperative mortality (Lee et al., 2018). While pVO 2 has been demonstrated to be an independent predictor of mortality (Jones et al., 2011), with significant risk of perioperative complications reported with a pVO 2 of < 15 ml/kg/min (Smith et al., 2009), a recent international prospective cohort study (the METS study) suggested that a low AT or low pVO 2 did not predict for a composite of postoperative cardiac complications and mortality (Wijeysundera et al., 2018). This same study, however, confirmed the value of peak VO 2 as a bi-variate metric predictive of all-cause non-cardiac complications after surgery in a mixed cohort of surgical patients with relatively low preoperative risk (Wijeysundera et al., 2018). In our study cohort of patients having major colorectal cancer surgery, we were unable to demonstrate a significant association between VO 2 kinetics at AT and adverse postoperative outcomes. The strongest association with postoperative mortality in our study was seen for VO 2 at peak exercise adjusted to body surface area (rather than weight) and for CO 2 kinetics at AT (Ve/VCO 2 or P ET CO 2 at AT). In this small cohort, one in two patients with preoperative pVO 2 < 710 ml/min/m 2 and Ve/VCO 2 at AT > 35 had died within one year of surgery (Fig. 1d). Nagamatsu et al. reported a similar association between pVO2 adjusted for body surface area and postoperative complications in patients having an oesophagectomy (Nagamatsu et al., 2001; Nagamatsu et al., 1994). Indexing VO 2 to body surface area rather than body weight was performed in order to minimise variability due to extremes of body weight.
Fig. 1 Kaplan-Meier curves showing survival based on CPET-derived gas exchange parameters: VO 2 at AT (dichotomised at AT > 11 ml/kg/min; Log-Rank Mantel-Cox = not significant), Ve/VCO 2 at AT (dichotomised at Ve/VCO2 < 35; Log-Rank Mantel-Cox; P = 0.001), VO 2 at peak exercise (dichotomised at pVO2 > 710 ml/min/m 2 ; Log-Rank Mantel-Cox; P < 0.001), and Ve/VCO 2 and peak VO 2 combined. a Univariate of VO 2 (ml/kg/min) at AT. b Ve/VCO 2 at AT. c Univariate of peak VO 2 (ml/min/m 2 ) corrected to body surface area. d Bivariate of Ve/VCO 2 and peak VO 2 corrected to body surface area.
There is emerging recognition that sarcopenic obesity (obesity with depleted muscle mass), caused by cancer, as well as toxicity of chemotherapy, may be predictive of disease-specific morbidity and mortality (Prado et al., 2008). Our patient cohort included patients with recurrent, or advanced gastrointestinal cancer, commonly having multiple courses of preoperative chemotherapy and potential sarcopenia, making it necessary to index CPET derived variables to body surface area rather than absolute body weight.
Utilisation of preoperative biomarkers
Statistical association of low haemoglobin and low albumin with postoperative complications and overall survival allude to additional modifiable risk factors that may not only add value to CPET-derived risk prediction but also underpin the need for haematinic and nutritional optimisation within the setting of prehabilitation. The concept of using perioperative biomarkers to stratify surgical risk has its foundations in gauging the risk of postoperative cardiac complications (Edwards et al., 2011). Optimisation of the oxygen-carrying capacity of the surgical patient by addressing anaemia has been shown to increase AT and pVO 2 (Agostoni et al., 2010 ) . Both cancer biology and anticancer therapy are associated with a pro-inflammatory state (de Visser et al., 2006) and relationships between different haematological markers have been previously used to prognosticate cancer risk. Specifically, platelet/lymphocyte (PLR), lymphocyte/monocyte (LMR) and neutrophil/ lymphocyte (NLR) ratios have been investigated as preoperative risk indicators (Guo et al., 2017;Templeton et al., 2014;Chan et al., 2017). It has been demonstrated that in patients with operable colorectal cancer, PLR was associated with poor prognosis, whereas LMR was associated with increased overall survival for patients undergoing curative colorectal resections (Guo et al., 2017;Chan et al., 2017). While the ratios of these haematological markers did not demonstrate an association with the postoperative outcomes of interest in our patient population, there was statistically significant association with preoperative neutrophil count, signalling the association between preoperative inflammation and post-operative outcomes. Though our sample sizes were limited for biomarker analysis (haemoglobin/neutrophil count: n = 58; albumin count: n = 34), larger prospective studies assessing multiple variables alongside CPET are needed to delineate modifiable risk factors that may benefit from intervention preoperatively, within the setting of a multimodal prehabilitation program.
Exploratory CPET recovery parameters
Ackland et al. have demonstrated that heart rate recovery (HRR) has good predictive capability for morbidity within five days of surgery (Ackland et al., 2019). We similarly demonstrate that heart rate kinetics during preoperative CPET testing has a modest ability to predict intermediate-term mortality following major intra-abdominal cancer surgery. HRR is an independent predictor of 6-year all-cause mortality in the epidemiological (non-surgical) population (Cole et al., 1999). Patients with prolonged HRR were more likely to be elderly and to possess cardiac risk factors (Simões et al., n.d.). Furthermore, large studies examining healthy patients without cardiovascular disease have also reported that HRR is directly linked to mortality (Nishime et al., 2000; Shetler et al., 2001). Our finding that HRR is a prognostic factor, as a reflection of cardiovascular reserve or a predictor of autonomic dysfunction (Imai et al., 1994; Perini et al., 1989), is supported by studies showing that HRR is associated with perioperative morbidity (Ackland et al., 2015; Ackland et al., 2018) and with improved survival in conditions such as bacterial peritonitis, hypovolemic shock, and myocardial ischaemia (Guarini et al., 2003; Boland et al., 2011; Mioni et al., 2005). Taking all the existing evidence together, HRR measured during the recovery phase of CPET has the potential to be an important marker of parasympathetic activity and a predictor of perioperative morbidity and mortality risk, and should be further investigated.
Limitations of this study
This cohort reflects a gastrointestinal surgical population in a quaternary institution with multiple variables that may affect outcomes, and these results may have limited applicability to other major oncological subspecialty procedures. The size of this cohort was limited, and therefore some of our negative findings may reflect the limited power of the study. Given the exploratory nature of this retrospective study, we restricted our focus to comorbid disease and provide only a limited description of intraoperative variables. The statistical power of the results may also be reduced by the number of analysed variables. Furthermore, while referral to CPET in the study centre is based on hospital guidelines, the pattern of referral could have introduced an element of selection bias.
Conclusion
Patients presenting with a 'triple low' preoperative state (anaemia {low haemoglobin}, malnutrition {low albumin}, and deconditioning {low functional capacity, e.g. low peak VO2}) are at greatest risk of postoperative complications and death. This is further compounded by a pro-inflammatory state. This study demonstrates that in complex colorectal cancer patients undergoing major cancer surgery, one in two patients with a preoperative pVO 2 < 710 ml/min/m 2 and Ve/VCO 2 at AT > 35 or P ET CO 2 at AT < 35 mmHg will die within 1 year of surgery. In light of this, the current focus on conventional CPET parameters such as VO 2 at AT and peak VO 2 should be superseded by a holistic approach that analyses multiple physiological and biochemical parameters. This will not only improve risk prediction but also identify opportunities to optimise reversible patient factors within the prehabilitation window. Large, prospective multivariate trials are required to expand our understanding of modifiable risk factors and guide preoperative optimisation prior to major cancer surgery.
|
2022-05-26T13:23:24.607Z
|
2022-05-26T00:00:00.000
|
{
"year": 2022,
"sha1": "0ae9c46a6b0191b5a28cc8571f3ef2ad9dece731",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "22ad19b7ae4db5e9a59c632923f57b301541cbf3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
170072295
|
pes2o/s2orc
|
v3-fos-license
|
Study on the Model of Financial Centralized Management in the Large-scale Construction Enterprises
Based on the research of financial centralized management and project financial centralized management, this paper comprehensively expounds the related theories of financial centralized management and analyzes the necessity of financial centralized management in large engineering construction enterprises. It also explores the financial centralized management mode of large engineering construction enterprises and puts forward some countermeasures.
Introduction
With social and economic development, construction enterprises have grown in scale: contract values are increasing, construction processes are becoming more complex, and project sites are geographically dispersed. Against this background of large-scale integration in engineering construction, financial work is a core function of construction enterprises, and the growth in scale and integration places higher demands on financial management capability. Based on the theory of financial centralized management and the characteristics and current financial management status of large engineering construction enterprises, this paper explores the financial centralized management mode of such enterprises, carries out a case analysis using a large engineering construction enterprise as an example, and finally draws relevant policy suggestions.
Weak financial control leads to financial risk analysis
In practice, the starting point of the project department's financial work is to reimburse costs and obtain more expenses and funds, and the project department hopes that the headquarters will examine its reimbursement items as quickly as possible. As a result, however, the project department is not strict with itself: there are fake bills that do not conform to the actual payments, so a large number of reimbursed accounts do not correspond to actual expenditure. The head office of the construction enterprise should therefore do its best to control and manage the operation of the whole enterprise so that financial expenditure meets requirements, but this requires greater effort and labour cost.
Strengthening overall control is an important reason for the headquarters of construction enterprises to adopt a "strong control" strategy, while the project department tends to adopt a "strong demand" strategy. In the following section, we analyze the game relationship between the financial management of the headquarters of a large construction enterprise and the financial audit management of its project construction department.
The cost of the "strong control" strategy for the headquarters is H, and the cost of the "weak control" strategy is lower, namely H-I (H-I<H). The "strong control" strategy can also bring the headquarters a benefit F, the fine income collected from the project department. D is the cost of the "strong demand" strategy for the project department, and the cost of the "weak demand" strategy is D-E (D-E<D) [19,20].
Among them, F>E, and H, I, F, D and E are all positive. The game matrix of the headquarters and the project department of the construction enterprise is then constructed accordingly. If the headquarters adheres to "strong control" in the long run, the payoffs of the project department under "strong demand" and "weak demand" are -D and -D+E-F respectively. Since F>E, -D>-D+E-F, so under long-term strong control by the headquarters the project department will adopt the "strong demand" strategy.
If the headquarters adheres to "weak control" in the long run, the payoffs of the project department under "strong demand" and "weak demand" are -D and -D+E respectively. In this case the project department will adopt the "weak demand" strategy; that is, it will not press strong demands.
If the project department adopts "weak demand" over the long run, whether the headquarters adopts strong or weak control depends mainly on the additional input cost I of strong control and the fine income F: strong control pays off only when F exceeds I. Therefore, the intensity of the fines imposed by the headquarters plays an important role in whether the headquarters chooses strong or weak control.
To sum up, under weak control by the engineering headquarters the project department will settle into weak demand. It is precisely the relatively weak control in large-scale construction enterprises, and the project department's resulting weak demand for financial management, that lead to the existence of financial risk. A simple numerical sketch of this game is given below.
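The following Python sketch encodes the payoff structure described above with hypothetical parameter values that satisfy the stated constraints (all parameters positive, F > E); the numbers are ours and only illustrate the best-response logic, not data from the paper.

```python
import numpy as np

# Hypothetical values satisfying F > E and all parameters positive.
H, I, F, D, E = 10.0, 4.0, 5.0, 6.0, 3.0

# Rows: headquarters (strong control, weak control);
# columns: project department (strong demand, weak demand).
# Headquarters pays its control cost and collects the fine F only when strong
# control meets weak demand; the project department bears -D or -D+E and
# additionally loses F when fined.
hq_payoff = np.array([[-H,       -H + F],
                      [-(H - I), -(H - I)]])
pd_payoff = np.array([[-D, -D + E - F],
                      [-D, -D + E]])

# Project department's best response to each headquarters strategy.
for row, hq_strategy in zip(pd_payoff, ["strong control", "weak control"]):
    best = ["strong demand", "weak demand"][int(np.argmax(row))]
    print(f"Under HQ {hq_strategy}: project department prefers {best}")

# Against weak demand, strong control pays off for headquarters only if F > I.
print("Strong control worthwhile against weak demand:", F > I)
```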
The necessity of strengthening financial control in large construction enterprises
With the rapid economic and social development brought by reform and opening up, China has begun to develop enterprise groups, which have become an important symbol of the development of modern enterprise organization and of Chinese industry. As reform and opening up have deepened, enterprise groups have become the backbone of economic construction. Enterprise groups can exploit economies of scale effectively, strengthen market competitiveness, realize vertical integration, and reduce transaction costs; at the same time, enterprises can diversify their risks through cross-diversification. However, as enterprise groups have continued to expand in scale, most have experienced declining operating efficiency, increasing business and financial risks, and unrealized scale benefits and group advantages. This can be described by a base model with two feedback loops: "enterprise scale →(+) scale efficiency →(+) enterprise benefit" and "enterprise scale →(+) required control capability →(+) gap between actual and required control capability →(−) overall enterprise benefit →(+) enterprise benefit" [21]. The two feedback loops reveal that, as the enterprise expands, if control capability cannot keep up with the change in scale, the limited control capability will restrict the further expansion and development of the enterprise.
Financial control is at the core of group-level management control. Because of the liquidity of funds, the regional dispersion of projects, and long production cycles, financial control in construction enterprises has a common feature: financial work runs through the production, operation, and investment activities of the enterprise and forms the main thread of an engineering construction enterprise. However, current financial management in Chinese construction enterprises is far from perfect. The limit on scale-efficiency growth described above shows that strengthening the financial management and control of construction enterprises can effectively improve overall management and control capability, and thereby support the continuous development and expansion of engineering construction enterprises.
At the same time, we summarize the problems in the financial management of large engineering construction enterprises. Three main problems restrict their development: insufficient understanding of financial management, funds dispersed across units, and weak financial control. Therefore, it is necessary to strengthen the financial control of these enterprises.
Organization framework for financial centralized management
According to the basic principles and main contents of financial centralization in construction enterprises described above, the basic framework of the financial centralized management mode of a large construction enterprise is as follows. The corporate headquarters sets up a finance department, an audit department, project departments, branch offices, and a human resources department. The finance department in turn comprises a financial management department, a fund settlement department, and an accounting operation department, and it interfaces with the project departments and branch offices.
The financial management department consists of the budget section and the financial management section. The budget team is responsible for the organization and coordination of the budget management, the establishment of relevant systems and plans, the budget of the cost, the budget of the project department or branch, the supervision and inspection of the budget execution control, and the analysis of the budget and its implementation. Finally, a budget report is formed. On the other hand, the financial management section is responsible for the formulation of the financial system, the analysis of the financial situation, and the formulation of the financial statements. Another important content is the prediction of profit and cash flow. Of course, as the functional department of the headquarters, it also requires guidance and supervision of subsidiaries, branches or project departments, as well as the construction and management of the team.
The fund settlement part is composed of four groups. The cost group is responsible for the examination of the expense documents and the designation of expense reimbursement and other relevant system. The fund group, which is responsible for the management of the fund payment data, the fund allocation, the management of the exchange certificate, the opening and approval of the account and the construction of the fund system. The payroll group is responsible for the management of sales expenses and personnel salaries. The business fund group is responsible for the management and confirmation of the business income funds, financial audit and payment.
The accounting department is responsible for the management of business accounting. The accounting team is responsible for the accounting management of the headquarters and of the project departments or branches. The report group is responsible for the formulation and examination of external reports and for report management across the whole system. The tax team is responsible for the study of fiscal and taxation policies and the handling of tax-related matters [37]. The system management group is responsible for the development and maintenance of the centralized financial data system. The document group is responsible for the formulation, issuance, and management of valuable documents and the related systems.
A financial management function should also be set up in each project department or branch. The financial officer of the project department or branch office is appointed directly by the headquarters finance department, the relevant financial posts are established according to need, and these staff interface directly with the financial management department of the headquarters.
The approval process of financial centralized management
Under the financial centralized management mode, the approval process is the core link of centralized financial management and control; it mainly comprises the financial budget approval process and the reimbursement approval process. The budget approval process for a project department or branch office is as follows. The project department or branch office submits its budget application in the budget operational support system. The headquarters finance department audits the application according to the relevant systems and standards; if the audit is not passed, the application is returned for revision. The final approval authority for financial budgeting belongs to the headquarters, which ensures the centralization of financial management.
After the budget has been approved, financial reimbursement proceeds as follows (a simple sketch of this flow is given below). First, the engineering construction project department or branch office enters the financial voucher to submit an application for reimbursement. Then the responsible person of the project department or branch audits it and submits it to the headquarters finance department. After the headquarters finance department has reviewed and approved the application in accordance with the financial system and related rules, the project department or branch reorganizes the voucher and submits it. Finally, the payment is made through the system and the financial reimbursement documents are archived. In this way, the final authority over the whole financial reimbursement audit rests with the headquarters of the enterprise, ensuring relatively centralized management of financial affairs.
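Purely as an illustration of the two-stage approval flow just described, the following Python sketch models the budget and reimbursement steps as simple functions; the function names and the audit callables are hypothetical and do not come from the paper.

```python
def approve_budget(application, hq_audit):
    """Budget approval: the headquarters finance department holds final authority;
    applications that fail the audit are returned to the project department or branch."""
    return "approved" if hq_audit(application) else "returned to project/branch"

def reimburse(voucher, branch_audit, hq_audit):
    """Reimbursement runs only after budget approval: branch-level review first,
    then headquarters review, then system payment and archiving."""
    if not branch_audit(voucher):
        return "returned at project/branch level"
    if not hq_audit(voucher):
        return "returned by headquarters finance department"
    return "paid through the system and archived"

# Example with trivial audit rules standing in for the real review criteria.
print(approve_budget({"amount": 120000}, hq_audit=lambda a: a["amount"] < 500000))
print(reimburse({"has_invoice": True},
                branch_audit=lambda v: v["has_invoice"],
                hq_audit=lambda v: True))
```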
Conclusion
Against the background of large-scale integration of engineering construction enterprises, financial work is a core function of such enterprises, and growth in scale and integration places higher demands on financial management capability. Based on the theory of financial centralized management and the characteristics and current financial management status of large engineering construction enterprises, this paper has explored the financial centralized management mode of such enterprises, carried out a case analysis using a large engineering construction enterprise as an example, and drawn relevant policy suggestions.
|
2019-05-30T23:45:59.514Z
|
2018-11-01T00:00:00.000
|
{
"year": 2018,
"sha1": "fc862406a40f133faeb3be0b4834d2e0fd809fe5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/439/3/032040",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4213369d89574a22090f9d313e68ff4bac3b24d7",
"s2fieldsofstudy": [
"Economics",
"Business",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
}
|
213245268
|
pes2o/s2orc
|
v3-fos-license
|
Wide Bandwidth High Gain Circularly Polarized Millimetre-Wave Rectangular Dielectric Resonator Antenna
A wideband high gain circularly polarized (CP) rectangular dielectric resonator antenna (RDRA) having a frequency range of 21 to 31 GHz is proposed. The RDRA consists of two layers with different dielectric permittivities and has been excited using a cross slot aperture. The proposed antenna offers wide impedance and CP bandwidths of ∼36.5 % and 13.75%, respectively, in conjunction with a high gain of ∼12.5 dBi. Close agreement has been achieved between simulated and measured results.
INTRODUCTION
Wireless communication systems have grown dramatically over the last few decades. As a result, carrier frequencies have been shifted up to the mm-wave band in order to acquire a much wider bandwidth and minimize interference in the overcrowded lower-frequency spectrum. With the increasing demand for wireless mobile devices and services, new wireless applications require high data rates in the order of 1 Gbps that can only be supported by the fourth generation (4G) wireless networks [1]. Therefore, the mm-wave frequency band has been utilized in fifth generation (5G) wireless systems in order to achieve higher data rates [1,2]. Further, mm-wave signals can penetrate fog and heavy dust [3]. However, the electromagnetic energy at the mm-wave band can be absorbed by oxygen, which attenuates the signal over the communications channel and necessitates the use of a high gain antenna [4]. Unfortunately, antenna arrays require feed networks with potentially high ohmic losses at higher frequencies as well as increased cost, size and complexity. Furthermore, microstrip antennas are associated with well-known limitations such as narrow impedance bandwidths and considerably lower gain due to ohmic and surface wave losses in the mm-wave frequency range [5]. Therefore, a DRA represents a suitable choice to address the aforementioned limitations as it offers wide bandwidth in conjunction with high radiation efficiency of more than 90%, as well as other appealing features such as small size, various geometries, easy excitation, low profile and light weight [6,7]. As a result, millimetre wave DRAs have been the focus of several recent studies [8][9][10][11][12]. In addition, a number of studies have focused on the design of mm-wave DRA arrays [13][14][15][16]. Since operating a DRA at higher order modes increases the effective permittivity, a narrower impedance bandwidth is expected [3]. The outer layer creates a transition region between the antenna and air, resulting in an enhanced impedance bandwidth. In addition, the dielectric coat serves another purpose by exciting additional resonance modes in the same band, and merging the bands of adjacent modes improves the impedance bandwidth further. It is worth pointing out that wideband and high gain X-band DRAs have been reported recently by incorporating an outer dielectric coat layer [17,18]. This approach is utilized in this letter for mm-wave band applications, where further performance improvements have been achieved by optimizing the feed network and the dielectric coat dimensions. The simulations have been implemented using the time domain solver of CST Microwave Studio [19].
THEORY
For a single-layer DRA, the resonance frequency of each TE mnp mode can be calculated using the dielectric waveguide model (DWM) [20], which yields Equations (1) and (2), in which λ0 is the free-space wavelength and c is the speed of light. Substituting Equation (2) into Equation (1) provides an equation for the modes' resonance frequency, Equation (3). However, there is no equivalent closed-form expression for a layered DRA structure. Hence, the CST eigenmode solver has been utilized to predict the resonance frequencies of the various modes in the layered DRA configuration.
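For reference, the standard DWM relations for a rectangular DRA, assumed here to be the forms intended by Equations (1)-(3), are written out below; kx, ky and kz are the wavenumbers inside the resonator.

```latex
% Assumed standard dielectric-waveguide-model relations for TE_mnp modes:
\begin{align}
  k_x^2 + k_y^2 + k_z^2 &= \epsilon_r k_0^2, \tag{1}\\
  k_0 &= \frac{2\pi}{\lambda_0} = \frac{2\pi f}{c}, \tag{2}\\
  f_{mnp} &= \frac{c}{2\pi\sqrt{\epsilon_r}}\sqrt{k_x^2 + k_y^2 + k_z^2}. \tag{3}
\end{align}
```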
ANTENNA CONFIGURATION
In this work, a mm-wave rectangular DRA working at higher order modes is designed and measured. Fig. 1 illustrates the proposed RDRA geometry with inner layer dimensions of l 1 =w 1 =2 mm and h 1 =10 mm as well as a relative permittivity of εr1 = 10. The DRA has been coated by a Polyamide outer layer that has dimensions of l 2 =w 2 =12 mm and h 2 =11 mm with a dielectric constant of εr2 = 3.5. The proposed antenna has been placed on a Rogers RO4535 substrate having a size of 200 mm 2 , a thickness of 0.5 mm and a dielectric constant of 3.5. In addition, a cross-slot with unequal arm lengths of ls 1 =1.9 mm and ls 2 =2.6 mm and an identical width of ws 1 =ws 2 =0.5 mm has been etched on the ground plane in order to generate two near-resonant modes having equal amplitude and a 90° phase difference, which are required to generate the circular polarization [21,22]. The reflection coefficient has been measured using an E5071C vector network analyzer through a 50 Ω coaxial cable. A 2.92 mm SMA connector has been utilized between the coaxial cable and the feeding strip line. The calibration has been carried out using Agilent's 85052D calibration kit. The radiation patterns have been measured using the SNF-FIX-1.0 Spherical Near-field mm-Wave Measurement System. The prototype of the DRAs is shown in Fig. 2. The measured and simulated gains are 12.5 and 12.1 dBic, respectively, which demonstrates a right hand circularly polarized (RHCP) antenna since E R is considerably higher than E L . However, the minor disagreements between the simulated and measured H-plane patterns could be attributed to fabrication and measurement tolerances. It is worth mentioning that a right-hand circular polarization sense, RHCP, has been achieved due to the fact that the ls 2 arm of the cross slot is longer than ls 1 , as demonstrated in Fig. 1(b). Similarly, left hand circular polarization, LHCP, can be achieved by swapping the cross-slot arms so that ls 1 is longer. The simulated and measured radiation patterns of the TE 115 , TE 117 and TE 119 modes are illustrated in Figs. 4, 5, and 6 at 24, 27.5 and 29 GHz, respectively. These results demonstrate the stability and consistency of the radiation patterns, which is expected since all the excited modes offer broadside far field patterns. The simulated and measured axial ratios and directivities agree well with each other, as demonstrated in Fig. 7. It can be noted that both the measured and simulated CP radiation has been acquired over a frequency range of 23.4-26.7 GHz, which corresponds to an AR bandwidth of 13.7%. This has been achieved in conjunction with a stable directivity across the circular polarization bandwidth, with a maximum of ∼12.5 dBi at 24 GHz in both the simulated and measured data. It is worth pointing out that the wider axial ratio band has been acquired due to the combination of the TE 115 and TE 117 modes. It should be noted that in the absence of the Polyimide coating layer, the DRA directivity, impedance bandwidth, and AR bandwidth are 8.66 dBi, 7.5%, and 1.95%, respectively. Furthermore, Table 1 presents a comparison between the performance of the proposed layered DRA and that of several DRA arrays [13][14][15][16]. From the tabulated data, it can be noted that the performance of the presented antenna is comparable to those of the arrays albeit with the utilization of a single element, which results in a smaller overall size as well as the absence of an array feed network.
CONCLUSION
A two-layer mm-wave DRA configuration has been investigated and measured. The proposed antenna offers a high gain of ~12.5 dBi in conjunction with wide impedance and axial ratio bandwidths of 36.5% and 13.7%, respectively. The improved bandwidth has been achieved due to the excitation of multiple higher order modes as a result of incorporating a dielectric coating layer. On the other hand, the gain has been enhanced due to the increased order of the excited DRA modes in the presence of the coating layer.

In addition to the performance improvements, the outer dielectric layer has provided physical support for the small inner DRA as well as an easy means of mounting the DRA on the ground plane. Furthermore, the radiation characteristics of the layered DRA are comparable to those of a number of DRA arrays that have been reported in the literature. The appealing features of the presented antenna can play a major role in 5G applications that require directive antennas with wide bandwidths.
Modeling region based regimes for COVID‐19 mitigation: An inverse Gompertz approach to coronavirus infections in the USA, New York, and New Jersey
Abstract The world tried to control the spread of coronavirus disease 2019 (COVID‐19) at national and regional levels through various mitigation strategies. In the first wave of infections, the most extreme strategies included large‐scale national and regional lockdowns or stay‐at‐home orders. One major side effect of large‐scale lockdowns was the shuttering of the economy, leading to massive layoffs, loss of income, and livelihood. Lockdowns were justified in part by scientific models (computer forecast and simulations) that assumed exponential growth in infections and predicted millions of fatalities without these ‘non‐pharmaceutical interventions’ (NPI). Some scientists questioned these assumptions. Regions that followed other softer mitigation strategies such as work from home, crowd limits, use of masks, individual quarantining, basic social distancing, testing, and tracing – at least in the first wave of infections – saw similar health outcomes. Clear results were confusing, complicated, and difficult to assess. Ultimately, in the USA, what kind of mitigation strategy was enforced became a political decision only partly based on scientific models. We do not test for what levels of NPI are necessary for appropriate management of the first wave of the pandemic. Rather we use the ‘inverse‐fitting Gompertz function’ methodology suggested by anti‐lockdown advocate and Nobel Laureate Dr. Levitts to estimate the rate of growth/decline in COVID‐19 infections as well to determine when disease peaking occurred. Our estimates may help predict levels of first‐wave infections in the future and help a region to monitor new outbreaks prior to opening its economy. The inverse fitting function is applied to the first wave of infections in the USA and in the hard‐hit New York and New Jersey regions for the time period March to June 2020. This is the earliest days of pandemic in the USA. The estimates for the rates of growth/decline are computed and used to predict underlying future infections, so that decision makers can monitor the disease threat as they open their economies. This preliminary and exploratory analysis and findings are discussed briefly and presented primarily in charts and tables, but the following waves of disease diffusion are not included and certainly were not anticipated.
INTRODUCTION

When faced with a sudden emergence of a potentially deadly infectious disease that has no immediate cure, decision makers are likely to resort to wide-scale regional and national lockdowns. The lockdowns are part of a risk management regime that is deemed necessary to stop the disease from growing exponentially. The main goal is to 'flatten the curve,' so that existing healthcare resources can handle inevitable increases in hospitalizations and visits to emergency rooms and intensive care units.
That being said, lockdowns result in local, regional, and national consequences, leading to massive layoffs as well as loss of livelihood and incomes for millions of people, causing economic hardships in a population that is already under severe health-related stress. Further, for a country the size of the USA, these disease outbreaks 1 occur asynchronously in different places over different periods of days, weeks, or months. Further, the interaction across the US physical space was unanticipated and not responded to at the national or even regional level. For example, outbreaks on the West Coast predated those on the East Coast by nearly a week, while the outbreaks in some of the southern and in Midwest states were a few days to a few weeks later than on the West Coast ( Figure 1). Finally, the local state management (public health care) systems responsible for responding were not equipped to manage it locally, let alone across multistate and multiregional interactive settings.
Adding to the mass anxiety was not knowing how long the disease would run its course. Although Western states had outbreaks that started earlier and appeared to be under control, some of those areas had smaller outbreaks as late as the second week of May, even as they were part of the wider regional shutdowns that were in effect from the latter half of March through the first three weeks of May. On the other hand, some of the southern states went on to open their economies quickly by cancelling NPI mitigation rules without immediately seeing substantial resurgence in disease intensity.
In open societies like much of the Western world, where the free flow of information, both good and bad, is a given, state and local policy makers are keenly aware that they are on borrowed time in assuming an implicit permission from citizens for the continuation of shutdown policies. It is assumed here that these policy decisions are carefully evaluated on the basis of a number of inputs that presumably include scientific analyses. In all countries, these decisions are political and much of the process behind these decisions remains opaque. As a result, countries developed a variety of mitigation strategies ranging from very strict, as in the case of China, to very loose and voluntary, as in the case of Sweden and, in the beginning, the UK, with others in between. A mix of mitigation strategies developed, with some being quite successful, as in the case of Taiwan, South Korea, Iceland, and New Zealand, which relied heavily on testing, tracing, and individual quarantining rather than massive lockdowns.
The role of science is often difficult to appreciate and conflicting in interpretation. A case in point is the Scientific Advisory Group for Emergencies (SAGE) committee appointed by the UK government for the coronavirus pandemic. For much of February and through the first couple of weeks of March 2020, the UK government had decided to go with a loose set of guidelines that assumed voluntary participation from the public to control the spread of the coronavirus. However, all that changed almost overnight, when the SAGE committee learned about the alarming results of a computer simulation model from one of its members, Prof. Neil Ferguson (Imperial College London). The then unpublished report (Ferguson et al., 2020) claimed that its model predicted that, in the absence of any non-pharmaceutical intervention (NPI), nearly 80% of the population would get infected and total mortality could be as high as 2.2 million in the USA and over 500,000 in the UK. The UK government announced a nation-wide lockdown the very next day (20 March 2020). The USA, however, continued to use its decentralized federal system, with state health evaluation and response procedures determined at the local level with little national guidance or role modeling.

Figure 1. Log-linear plot of COVID-19 cumulative confirmed cases for select states from 22 January to 15 March 2020; by 15 March 2020, every state had at least 25 cases.
The shocking fatality counts from the Ferguson simulation, based on a scenario of the pandemic with no mitigation intervention, may have influenced the actions of many governments across the globe in justifying stricter social-distancing regimes (Adam, 2020). It appears to have influenced even the Trump administration while it was trying to talk up, but not require, its recommendation for social distancing.2 Initially, the USA had declared a national emergency and recommended (13 March 2020) that states develop a 30-day stay-at-home and social-distancing advisory on 16 March 2020 (The White House, 2020a). It was extended on 29 March (The White House, 2020b) to last through the end of April 2020. At least five states did not follow these suggestions, and many made them optional.
It is important to note that in a constitutionally based federal system of government such as the USA, the national government may have little control over what happens in each state in areas of education and health. Each state's governor has the authority to issue and implement that state's set of rules/guidelines on what type of mitigation (social-distancing regime) is followed within its borders (Gershman, 2020). Further, the federal government was limited by poor or no planning, was undersupplied with needed support equipment, and had a nonexistent testing system with no plans for linking any testing to tracing procedures.
States with large susceptible population groups and/or what appeared to be out-of-control outbreaks issued stricter social-distancing measures including stay-at-home policies with near total shutdown of all commercial and noncommercial activities (including religious places). The only exceptions were essential services. At the other end of the social-distancing spectrum were the recommendations in some states, such as Nebraska, Arkansas, Iowa, and the Dakotas (North and South), that never issued stay-at-home orders and had a very loose set of guidelines on what was allowed and what was banned (for example, large gatherings). The messy decision-making process of lockdown/ no-lockdown varied from state to state. In the end, it was a combination of policies related to 'flattening-the-curve', the threat of exponential spread of the contagion, the mixed messaging from the federal government, and the divided politics that contributed to the atmosphere of fear and doubt among the people. This led, in the private sector, to massive layoffs and brought the US economy to almost a complete halt.
BACKGROUND
Overwhelmed by the rush of bad events, economic meltdown, and rising daily infection and casualty counts, it seemed that hardly anyone in the scientific community questioned whether COVID-19 was spreading exponentially and what was the necessity or usefulness of national or statewide shutdowns. There were a few exceptions with varying advice, including Prof. Sunetra Gupta, the well-known epidemiologist, Dr. Ben-Israel, Space Science Center leader in Israel, Dr. Karl Friston, Neuroscientist and Professor at City University of London, Dr. Johan Giesecke, Sr. Advisor to Swedish Government and the World Health Organization (WHO), and Dr. Michael Levitt, Nobel Laureate Professor at Stanford University.
Each of them subscribed to some form of social-distancing regime, especially one targeted at lockdowns for vulnerable population groups and limiting large gatherings, but none of them supported the strict stay-at-home lockdowns for the entire population for one singularly important reason: it would send the economy into a precipitous fall (Master, 2020).
The other thought behind objecting to strict lockdowns is that the virus may be highly infectious, but the vast majority would likely not get the disease, and if they did, the high level of infections would confer 'herd' immunity that in turn would protect the rest of the population at least for a short time. Additionally, it would save the economy from the precipitous decline that was happening all over the world. Nearly all of the experts noted that the virus was already circulating through the population before the strict lockdowns went into effect; some of them thought that the virus would follow its own life cycle (Dr. Ben-Israel and Dr. Friston) and decline accordingly, and at least some of them thought that exponential growth in the spread of the virus (Dr. Michael Levitt) would not happen. These experts, from a variety of experiences and disciplines, all agree that without a strict social-distancing regime, the virus will spread across the globe. However, they questioned whether the virus was likely to kill a significant level of the population. The main objection was about general acceptance that COVID-19 would grow exponentially without a widespread shutdown, and the other was whether one could ignore the enormous economic costs associated with such lockdown policies.
For example, Dr. Ben-Israel, Israeli Space Agency, came up with an easily computable statistic that showed when each country or region peaked and the degree of decline in the COVID-19 spread following the peak (Isaac Ben Israel, 2020). The statistic is computed as a simple ratio of daily new cases to cumulative cases of COVID-19.
Further, he claimed that the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent behind the COVID-19 disease, would follow its own life cycle that spanned roughly 70 days. In other words, it would be self-limiting irrespective of what mitigation/social-distancing regime is implemented (Knapton & Gilbert, 2020).
One problem with Ben-Israel's statistic is that the ratio always starts with the peak and keeps declining afterwards since the daily new cases are always less than cumulative cases. Despite this, it does provide an easily computable statistic that can be compared across space and time. At the very least, it may provide a way to measure effectiveness of different disease management regimes followed in different regions.
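To make the statistic concrete, a minimal sketch of how it could be computed from a cumulative case series is shown below; the column name and the illustrative numbers are assumptions, not data from the study.

```python
import pandas as pd

def ben_israel_ratio(cumulative: pd.Series) -> pd.Series:
    """Ratio of daily new cases to cumulative cases, as described above."""
    daily_new = cumulative.diff().fillna(cumulative.iloc[0])
    return daily_new / cumulative

# Illustrative (made-up) cumulative counts over six days.
cum = pd.Series([10, 30, 70, 120, 160, 185], name="cumulative_cases")
print(ben_israel_ratio(cum).round(3))  # declines steadily once past the peak
```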
Prof. Sunetra Gupta, from Oxford University who studies evolutionary biology of infectious diseases, thought that SARS-CoV-2 was already circulating among the population and that the strict shutdown policy was too late to stop the spread, and that now Britain was faced with a worsening economic disaster (Vardarajan, 2020). She blamed the lack of early response on long-term neglect and underfunding of the UK national health service. She thought that targeted quarantines would be better able to control future outbreaks while the rest of the economy continued to work for the people. She had similar opinions when it came to India's response of total shutdowns to fight coronavirus. She thought that India, with its relatively young population (majority under 60 years), could have managed to keep its economy working instead of doing a nation-wide lockdown (Thapar, 2020). She suggested that large-scale serological testing would help identify areas with higher numbers of infection and that these could be targeted for social-distancing measures while the rest of the country could go back to work (Lourenco et al., 2020).
In many ways, this follows the typical strategies used in containing earlier bacterial and virus outbreaks (polio, smallpox, Ebola, etc.).
Dr. Friston, a neurobiologist from City University of London, who is part of an independent group of scientists referred to as 'shadow' SAGE in the media,3 noted at the time that there were too many unknown variables that affect COVID-19 and its causative agent SARS-CoV-2, and that a typical epidemiological susceptible-infectious-recovered (SIR) model (Kermack & McKendrick, 1927) may not be sufficient to capture the dynamics of these unknown variables. Instead, he proposed a 'Dynamic Causal Model' to study SARS-CoV-2. This model, according to Dr. Friston, predicts that nearly 80% of the UK population was immune to SARS-CoV-2 and that COVID-19 had already peaked when the UK instituted its strict mitigation regime (stay-at-home/lockdown) or even its looser mitigations (Moran et al., 2020). Of course, this meant that a 'not-as-stringent' mitigation/social-distancing strategy would save the UK economy from collapse.
Dr. Michael Levitt, Nobel Laureate from Stanford University, was visiting China in January (2020) when the coronavirus outbreak began. He analyzed China's coronavirus data when he was back in the USA (Levitt & Sandford, 2020) and noted that there never was any danger of the coronavirus growing exponentially (Sayers, 2020).
As mentioned above, disease transmission and spread models are based on the classical SIR model (Kermack & McKendrick, 1927) that uses sigmoid functions. Others (Ord & Getis, 2018) have successfully used SIR models with sigmoid and Gompertz functions (Mitchen, 2020). Dr. Levitt believed that the Gompertz function was a better fit for the cumulative infection data than a logistic function. As per his analyses, no matter what type of soft mitigation/ social-distancing regime is followed by any country, the growth function in each of the country level analyses has a negative exponent, indicative of a decay function (Levitt & Stanford, 2020) that will eventually slow the virus.
METHODOLOGY
In these first days of the pandemic, we decided to use the 'inverse function fitting' methodology suggested by Dr. Michael Levitt to analyze the coronavirus infection data for the USA at the country, state, and county level. Such analyses, we thought, could help with the following goals of our study:

A. To assess whether the infection data at the country, state, or county level showed exponential growth any time during the study period.
B. To identify when the infections peaked and estimate the time of peaking.
C. To determine the rate of decay as the infections start decreasing.
Goal A was to help identify whether there ever was an exponential growth danger to potentially impact the healthcare system (already under severe stress) that could lead to its breakdown. Were there areas where hospitals would be unable to provide care for the increasing caseloads? Lack of hospital care would significantly increase mortality in a region. Goals B and C would help to monitor the date of peaking and estimate the time in days when local healthcare systems and hospitals in any jurisdiction would be under pressure to meet the needs of existing and new infections.
Towards these goals, we used raw cumulative data from the Johns Hopkins University (JHU) COVID-19 data source (JHU COVID-19 Dashboard, 2020) to extract data for the COVID-19 confirmed cases from 22 January to 15 June 2020.
We expected the cumulative confirmed cases by county to be monotonically nondecreasing over the study time period. However, there are many counties (small jurisdictions) where cumulative cases on successive days show a decrease (Equation 1 below) rather than a constant number, when there are no new cases, or increasing cumulative cases when new cases are reported.
c(t) < c(t − 1)    (1)

where c(t) = cumulative cases on day t and c(t − 1) = cumulative cases on day (t − 1).

The downloaded data were checked, and they were adjusted to make the time series monotonically nondecreasing such that

c(t) ≥ c(t − 1) for all t    (2)

Next, we computed daily new cases d(t) on day t:

d(t) = c(t) − c(t − 1)    (3)

As mentioned in the three goals above, we wanted to determine whether any of the US jurisdictions had exponential growth during the current study time duration; if and when the peaking occurred; and, for past peaking, what the nature of decreases in infections was, that is, either a steady or a bumpy decline.
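A minimal sketch of this cleaning step, assuming a pandas series of raw cumulative counts indexed by date (the illustrative numbers are invented):

```python
import pandas as pd

def clean_cumulative(raw: pd.Series) -> pd.DataFrame:
    """Force the cumulative series to be monotonically nondecreasing
    (Equation 2) and derive daily new cases (Equation 3)."""
    cumulative = raw.cummax()       # c(t) >= c(t-1) for all t
    daily_new = cumulative.diff()   # d(t) = c(t) - c(t-1)
    daily_new.iloc[0] = cumulative.iloc[0]
    return pd.DataFrame({"cumulative": cumulative, "daily_new": daily_new})

# Illustrative raw counts containing a spurious one-day decrease (45 -> 44).
raw = pd.Series([10, 25, 45, 44, 60, 80])
print(clean_cumulative(raw))
```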
We decided to use the Gompertz curve following Ord & Getis (2018) as the inverse fitting function to the monotonically nondecreasing cumulative infection data. The Gompertz function shown in the equation below is a special case of the sigmoid function and is suitable to describe a phenomenon that seems to unexpectedly emerge and grow over short time periods and then appears to asymptotically reach a plateau.

c(t) = K exp(−exp(−a(t − b)))    (4)

where K is the asymptotic value at time T (the duration of the study period), a is the rate of growth, and b is the horizontal drift.
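A minimal sketch of such an inverse (nonlinear least-squares) fit is given below, assuming a cleaned cumulative series as described above; the synthetic data and starting guesses are illustrative only, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, K, a, b):
    """Gompertz curve: K is the asymptote, a the growth rate, b the horizontal drift.
    The daily-increment curve derived from it peaks at t = b."""
    return K * np.exp(-np.exp(-a * (t - b)))

# Synthetic cumulative counts generated from known parameters plus noise.
rng = np.random.default_rng(0)
t = np.arange(102)                                    # 102-day study window
true = gompertz(t, K=50_000, a=0.08, b=40)
observed = np.maximum.accumulate(np.clip(true + rng.normal(0, 300, t.size), 0, None))

p0 = [observed[-1], 0.05, t.size / 2]                 # rough starting guesses
(K, a, b), _ = curve_fit(gompertz, t, observed, p0=p0, maxfev=10_000)
print(f"K ~ {K:,.0f} cases, growth rate a ~ {a:.3f}/day, peak of daily cases near day {b:.0f}")
```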
ANALYSIS
The nature of the difference curve described by Equation 3 also contributed to the choice of Gompertz instead of a typical (logistic) sigmoid curve. Cumulative data based on the logistic sigmoid curve show a symmetric bell-shaped difference curve where the numbers of cases before and after the peak are equal; such a distribution of data is highly unlikely in the case of an ongoing pandemic. On the other hand, a difference curve computed from a Gompertz function shows a distribution with a 'fat tail' in its decline after a peak is reached. In fact, the difference curve data covering our first-wave analysis does indeed show an asymmetric distribution around the peak where the post-peak decline is described by a bumpy or 'fat tail'. In other words, a Gompertz function can describe this asymmetric bell curve with a fat-tail decline that takes a longer time to decay than the rapid rise to the peak. We decided to analyze the national pattern and then to disaggregate the data into active regional patterns that, at that time, dominated in terms of what was happening in the states and cities of New York and New Jersey.
These regions had the highest cumulative counts of infections. In this first wave, it appears that from 29 February to 15 June 2020, many of these regions had multiple outbreaks, indicated by the total number of infections staying constant for a few days and then suddenly increasing to higher levels of infections.4 Here the coronavirus cases from 29 February to 15 June 2020 for the USA and select states are for different jurisdictions and peak at different times. For example, New York peaked around 15 April, while New Jersey and the USA peaked 24 April and 25 April, respectively.
These are log difference curves. In other words, these are the slopes of the cumulative count curves.
RESULTS USING INVERSE CURVE FITTING
Our first-wave analysis covers the cumulative infections data by state for a total of T = 102 days from 6 March to 15 June 2020. The choice of the time duration was partly decided because of the multiple short-duration outbreaks exhibited by many states between 22 January and 5 March 2020. Each such short outbreak would need to be handled as a separate curve-fitting problem, which is not only complex but also yields curve parameters that are likely to be riddled with the underlying noise of the data collection process. It helps to have data that instead have one single outbreak. Note that even this so-called single-outbreak time duration has a few minor glitches, but they are few and far between and therefore do not affect the overview analysis.
The output is shown for the USA and the states of New York and New Jersey in Figures 6, 7, and 8, respectively.
FUTURE DIRECTIONS
We understand that this approach broke down in the second and third waves that followed, both regionally and nationally.
However, we would like to confirm the application methodology of reverse curve fitting of the Gompertz curve to subregional data (groups of states/Census regions and at the county level) and for metropolitan regions. In the first COVID-19 wave, it assumes isolated populations and not the overall state and regional interactions that followed in secondary and tertiary waves. Further, we note that the diffusion of the disease is not adequately captured in this simplistic approach, but it does provide a building block for alterations and changes. Also, we plan to compare the COVID-19 infections and mortality data for other states and counties that followed minimal mitigation/ social-distancing regimes with those following a stay-at-home or lockdown approach for limiting infection expansion.
Notes:

2. '... tremendously. It was their models that created the ability to see what these mitigations could do, how steeply they could depress the curve from that giant blue mountain down to that more stippled area. In their estimates, they had between 1.5 million and 2. ... what an extraordinary thing this could be if every American followed these. And it takes us to that stippled mountain that is much lower, a hill, actually, down to 100,000 to 200,000 deaths, which is still way too much.' https://www.whitehouse.gov/briefings-statements/remarks-president-trump-vice-president-pence-members-coronavirus-task-force-press-briefing-15/

3. https://www.gov.uk/government/publications/scientific-advisory-group-for-emergencies-sage-coronavirus-covid-19-response-membership/list-of-participants-of-sage-and-related-sub-groups

4. The sudden jump in numbers may also be partly due to uneven collection and reporting of confirmed cases, which in turn is likely due to variation in the times needed by a variety of testing procedures to confirm COVID-19 infections.
Production of thermostable xylanase by thermophilic fungal strains isolated from maize silage
The search for microorganisms able to produce thermostable xylanases with high yield and characteristics desired for industrial applications has been strongly encouraged since such enzymes are widely used in large-scale processes. In the present study, thermophilic fungal strains able to grow at high temperatures (≥55 °C) were isolated from maize silage. The strains were molecularly identified and used for the production of extracellular xylanase by solid-state fermentation using corn cobs as support-substrate material. Species from the genera Rhizomucor and Aspergillus were identified among the isolated strains and these species demonstrated good ability to produce xylanase under solid-state fermentation conditions. Maximal values of enzymatic activity (824 U/g) and productivity (8.59 U/g.h) were obtained with Rh. pusillus SOC-4A (values per g dry weight of fermented medium). The xylanase produced by this fungus presented thermal stability at 75 °C, with maximum activity at 70 °C and pH 6.0, revealing, therefore, great potential for application in different areas.
Introduction
Xylanases (EC 3.2.1.8) are enzymes with the ability to hydrolyze xylan, a polysaccharide constituted by xylose units linked through β-(1→4) bonds usually found as the most abundant component in hemicellulose structures (Carvalho, Mussatto, Candido, & Almeida e Silva, 2006;Saha, 2003). Xylanases have extensive applications in different industrial sectors. In the food industry, for example, they are used for the clarification of fruit juices (Dhiman, Garg, Sharma, & Mahajan, 2011), to improve the texture, loaf volume, and shelf-life of baked products (Bajaj & Manhas, 2012), and to reduce the viscosity and filtration rate of the brewery mash (Qiu et al., 2010). In the pulp and paper industry, xylanases are used to increase the brightness of the pulp in order to produce papers of superior quality. This application is also advantageous from an environmental point of view because it avoids or reduces the use of chemicals such as chlorine (Li et al., 2010;Saleem, Tabassum, Yasmin, & Imran, 2009). Xylanases are also employed to convert xylan structures present in lignocellulosic biomass wastes to xylose sugar, which can be further used as carbon source for the production of a variety of valuable compounds such as xylitol (Mussatto & Roberto, 2008) and ethanol (Silva, Mussatto, Roberto, & Teixeira, 2012), among others.
In some cases, the use of xylanase produced by mesophilic organisms is limited as these enzymes generally undergo denaturation at temperatures higher than 55°C. As a consequence, the efficiency of hydrolysis is decreased and higher enzyme loadings are required to overcome this problem, increasing the costs of the process. The use of thermostable enzymes able to carry out hydrolysis at high temperatures is needed for these kinds of applications. One example of industrial application for thermostable xylanase enzymes is in the pulp and paper industry. For the pulp production, wood is treated at high temperature and basic pH. As a consequence, the enzymatic procedure requires enzymes with high thermostability and active in a broad pH range. Treatment with xylanase at elevated temperatures disrupts the cell wall structure, facilitating lignin removal during the bleaching stages (Georis et al., 2000).
The optimum temperature for activity of most xylanases is only around 50-60°C, with a half-life of about 1 h at 55°C. However, several microorganisms, including fungi and bacteria, have been reported to produce xylanases with thermostable properties, active at temperatures between 50 and 80°C (Haki & Rakshit, 2003). Nevertheless, taking into account the amount of enzymes required for large-scale applications, the search for microorganisms able to produce thermostable xylanases with high yield and characteristics desired for industrial applications is still being pursued. Some thermophilic fungi able to produce xylan-degrading enzymes have been isolated from soil and plant materials. Maize silage is the main source of forage for lactating dairy cows in Europe and North America (Cavallarin, Tabacco, Antoniazzi, & Borreani, 2011). To the best of our knowledge, this material has been little studied as a source for isolating xylanase-producing fungal strains.
Agricultural, agro-industrial, and forest residues are usually rich in xylan, which can act as a natural inducer of xylanases by microorganisms. The reuse of such residues has been strongly encouraged in the past years for economic and environmental reasons, in order to minimize pollution and generate compounds of industrial interest from available and low-cost resources. Corn cobs, a broad by-product generated from the corn harvest, are composed of approximately 30% (w/w) xylan (Garrote, Domínguez, & Parajó, 2002) and can be then considered as a material of interest for use on the production of xylanases. The enzyme production by fungi can be done by submerged or solid-state fermentation (SSF) systems. However, in recent years, SSF has received more interest because this process may lead to higher yields and productivities or better product characteristics than submerged fermentations. In addition, capital and operating costs are also lower than in submerged fermentation (Mussatto, Aguiar, Marinha, Jorge, & Ferreira, 2015), and the downstream step for separation of the produced compound is facilitated due to the low water volume used for fermentation (Martins et al., 2011;Mussatto, Ballesteros, Martins, & Teixeira, 2012).
In the present study, efforts were made in order to isolate new thermophilic fungal strains with the ability to produce thermostable xylanase. The fungal strains were isolated from maize silage taking into account that this material has been little explored for this purpose. The selected strains were molecularly identified and used in a subsequent stage for the production of extracellular xylanase by SSF using corn cobs as support-substrate material. Finally, the thermostability of the produced enzymes as well as the pH and temperature where the activity is maximized were determined.
Isolation of thermophilic fungal strains
Maize silage was collected from a local farm in the province of Chihuahua, México. Soon after collection, the samples were cooled in ice and transported to the laboratory within 6 h. For the experiments, 1 g of maize silage was dissolved in 100 mL of sterilized distilled water, and the obtained solution was diluted up to 10 4 times. Afterwards, 0.1 mL of the diluted solution was added to potato dextrose agar (PDA) plates, which were incubated at 55 ºC for 5 days. Fungi were isolated from each plate and continuously subcultured on new fresh PDA medium until pure isolates were obtained. Stock cultures were maintained in PDA medium at 4 ºC.
Selection of xylanase-producing fungal strains (plate-screening method)
In order to select the xylanase-producing fungal strains, cultures of the previously isolated fungi were transferred to a solid medium containing birch wood xylan 0.5% as carbon source, yeast extract 0.1% as nitrogen source, Congo red dye 0.5% as chromogenic reagent, and agar 1.5%. The inoculated media were incubated at 55°C for 5 days. Then, the xylanase enzyme activity was determined by analyzing the clear zone formed around the fungal colony as a result of the reaction between the enzymes secreted by the fungi and the chromogenic substances present in the solid medium (Yoon et al., 2007).
Identification of the isolates
The strains were identified by rDNA 18S amplification, which was carried out by obtaining the genomic DNA and amplifying an 18S rDNA fragment corresponding to 500 bp approximately. High-molecular-weight DNA was extracted using the protocol described by Barth and Gaillardin (1996), and the primer pairs PN3 (5ʹ-CCGTTGGTGAACCAGCGGAGGGATC-3ʹ) and PN10 (5ʹ-TCCGCTTATTGATATGCTTAAG-3ʹ) were used to amplify a fragment of 18S rDNA. The PCR reaction system consisted of 0.5 µL of 1 U/µL Taq DNA polymerase, 2.5 µL of 10X buffer stock solution, 0.5 µL of 10 mM deoxynucleotides (dNTP mixture), 2.0 µL of 10 µM oligonucleotide, 2 µL template DNA, and sterile distilled water to obtain 25.0 µL as the final volume. Amplification was done in a PCR Thermal Cycler Px2 (Thermo Electron) with the following program: initial denaturation at 95°C for 10 min; 35 cycles (each one of them at 94°C for 1 min), annealing temperature of 54°C, 72°C for 1 min, and final extension at 72°C for 20 min. PCR products were visualized on 1.5% (w/v) agarose gel stained with ethidium bromide. PCR amplicons were sequenced using Big Dye® terminator cycle sequencing kit, in the 3730XL DNA Automatic Sequencer. The sequences obtained were compared with data available in GenBank database using the Mega BLAST network service of the National Centre for Biotechnology Information (NCBI) (Figure 1).
Corn cobs characterization and xylanase production by solid-state fermentation (SSF)
Corn cobs samples were supplied by Instituto del Maíz, Universidad Autónoma Agraria Antonio Narro (Saltillo, Coahuila, México). As soon as obtained, the material was milled to particle sizes between 0.7 and 1.4 mm, and was characterized to determine the water absorption index (WAI) (Anderson, 1982) and critical humidity point (CHP) (Oriol, Schettino, Viniegra-González, & Raimbault, 1988). To be used as a solid substrate during fermentation, milled corn cob was boiled for 10 min, washed three times with distilled water, dried at 60°C to constant weight, and autoclaved at 121°C for 20 min, as reported by Mussatto, Aguilar, Rodrigues, and Teixeira (2009a).
The inoculum was prepared by activating the selected strains in slants of potato dextrose agar medium for 4 days at 30°C. Subsequently, spores were recovered by washing the cultures with sterile aqueous solution of Tween 80 (0.01% v/v). Spore concentration was determined using a Neubauer chamber.
For the SSF experiments, 3.0 g of sterilized corn cobs were moistened with 11.0 mL of minimum Czapek-Dox medium (composed of 7.65 g/L NaNO 3 , 3.04 g/L KH 2 PO 4 , 1.52 g/L MgSO 4 .7H 2 O and 1.52 g/L KCl, with the final pH adjusted to 6.0 before sterilization) and the moistened material was transferred to a Petri dish. Then, each dish was inoculated with 1×10 7 spores per gram dry corn cob, and incubated at 55°C for 5 days.
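As an illustration of how the inoculum volume might be derived from a haemocytometer count, the snippet below computes the spore suspension volume needed to deliver the stated inoculation density; the measured spore concentration used here is hypothetical.

```python
def inoculum_volume_ml(target_spores_per_g, dry_substrate_g, suspension_spores_per_ml):
    """Volume (mL) of spore suspension needed to reach the target inoculation density."""
    total_spores = target_spores_per_g * dry_substrate_g
    return total_spores / suspension_spores_per_ml

# 1e7 spores per g dry corn cob, 3 g substrate per dish, and an assumed
# Neubauer-chamber count of 5e7 spores/mL for the suspension.
print(f"{inoculum_volume_ml(1e7, 3.0, 5e7):.2f} mL per dish")  # -> 0.60 mL
```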
For the enzyme recovery, 50 mL of sodium phosphate buffer (0.1 M, pH 7.0) was added to the fermented material and the mixture was homogenized for 30 min at 166 g and 30°C.
The sample was then centrifuged (3327 g, 15 min, and 10°C; Sorball, Primo R Biofuge Centrifugation, Thermo, USA) to separate the enzyme extract, which was further used for the xylanase activity determination.
Xylanase activity determination in the extracts produced by SSF
Xylanase activity in the extracts produced by SSF was determined by mixing 0.05 mL of extract with 0.05 mL of birch wood xylan 1.0% (w/v) prepared in 50 mM acetate buffer pH 6.0. The enzyme-substrate mixture was incubated at 50°C for 30 min. The released reducing sugars were quantified by the 3,5-dinitrosalicylic acid (DNS) method (Miller, 1959) using xylose as standard. One unit of xylanase was defined as the amount of enzyme that liberates 1 µmol of xylose equivalent per minute under the assay conditions.
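For clarity, a small sketch of how the activity unit defined above could be converted to the per-gram values reported later is shown below; the xylose reading and the specific volumes plugged into the example are assumptions for illustration only.

```python
def xylanase_activity_u_per_ml(xylose_umol_released, reaction_minutes, extract_ml):
    """One unit (U) liberates 1 umol of xylose equivalent per minute; activity per mL of extract."""
    return xylose_umol_released / (reaction_minutes * extract_ml)

def activity_per_gram(u_per_ml, extraction_buffer_ml, fermented_dry_weight_g):
    """Convert volumetric activity to U per g dry fermented medium."""
    return u_per_ml * extraction_buffer_ml / fermented_dry_weight_g

# Assumed example: 1.2 umol xylose released in 30 min by 0.05 mL of extract,
# recovered with 50 mL buffer from 3 g of dry fermented material.
u_ml = xylanase_activity_u_per_ml(1.2, 30, 0.05)   # 0.8 U/mL
print(round(activity_per_gram(u_ml, 50, 3.0), 1))  # ~13.3 U/g
```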
Effect of pH and temperature on xylanase activity
To evaluate the effect of the pH on xylanase activity, enzyme solutions at different pH values (4-10) were prepared using 0.05 M acetate buffer, 0.05 M Tris-HCl buffer or 0.05 M glycine-NaOH buffer. For the experiments, 0.3 mL of xylanase solution was added to 0.7 mL of 1% birch wood xylan and the mixture was incubated at 55°C for 5 min, following which the reaction was stopped in ice-water bath and the enzyme activity was determined under standard assay conditions. To evaluate the effect of the temperature on enzyme activity, the crude enzyme extract obtained by SSF was incubated at different temperatures (55-80°C) for 5 min. The reaction was then stopped in ice-water bath and the enzyme activity was determined under standard assay conditions. To determine the thermal stability of the enzyme, the crude extract produced by SSF was incubated at different time intervals (0-60 min) at fixed temperatures (65-85°C ).
Data analyses
All the experiments and analyses were performed in triplicate and average values are reported. Differences among mean values were identified by the Tukey's range test (p ≤ 0.05) using the software Minitab®.
Results and discussion
Isolation and identification of xylanase-producing thermophilic fungal strains

Twenty-one fungal strains were initially isolated from maize silage. Xylanase producer strains were selected using the plate-screening method, which consists in the detection of clear zones formed as a consequence of the enzymatic activity that hydrolyses the Congo red dye substrate (Yoon et al., 2007). Among the 21 isolated strains, seven were able to grow on xylan-Congo red plates; however, two of them were unable to promote discoloration of the agar medium, probably because they did not secrete the enzyme out of the cell. The other five strains, referred to as SOC-4A, SOC-4B, SOC-4C, SOC-4D, and SOC-5A, were then selected at this stage. When compared with data available in the GenBank database, the sequences identified as SOC-4A (Accession number KC711060), SOC-4B (Accession number KC711061), and SOC-4C (Accession number KC711062) showed 100% homology with the sequence available for Rhizomucor pusillus 1341, while the sequence SOC-4D (Accession number KC711063) showed 98% homology with that for this same fungal strain. On the other hand, the sequence identified as SOC-5A (Accession number KC711064) presented 99% homology with the sequence available for Aspergillus fumigatus BF18.
Corn cobs characterization for use as support-substrate in SSF
In SSF processes, the microorganisms are grown on solid particles in the absence (or near absence) of free water. However, the support-substrate material must present enough moisture to allow the growth and metabolism of the microorganism (Martins et al., 2011;Mussatto et al., 2012). Therefore, determining the WAI and CHP of the material that is desired for use as support-substrate is very important because these properties are directly related to the capacity of the material to absorb water and, as a consequence, to be invaded and colonized by the microorganisms. In brief, the WAI and CHP values allow estimating if the material presents characteristics suitable for use as support-substrate in SSF systems (Mussatto, Aguilar, Rodrigues, & Teixeira, 2009b;Mussatto et al., 2009a). WAI reflects the ability of the material to absorb water, and depends on the availability of hydrophilic groups to be bonded with water molecules as well as on the gel-forming capacity of the macromolecules (Mussatto et al., 2009a). Materials with high WAI are preferred for use in SSF systems because they facilitate the species' growth and development (Mussatto et al., 2009b). In the present study, the corn cob sample presented a WAI value of 2.55 g gel per g dry material, which is similar to that reported by other authors for this same substrate (Buenrostro-Figueroa et al., 2014;Mussatto et al., 2009a) as well as for other substrates already used in SSF processes, including cork oak (Mussatto et al., 2009a), wheat bran, and pecan nutshell (Orzua et al., 2009).

Figure 1. 18S rDNA fragments of the strains. CN, negative control; 1, 2, 3, 4, and 5 correspond to strains SOC-4A, SOC-4B, SOC-4C, SOC-4D and SOC-5A, respectively; MM, 1,000 bp molecular marker. PCR products were visualized on 1.5% (w/v) agarose gel stained with ethidium bromide.
CHP represents the quantity of water linked to the support material, which cannot be used by the microorganism. Therefore, materials with low CHP values are desired (Mussatto et al., 2009b;Robledo et al., 2008). The CHP value obtained for corn cobs in the present study was 32.6%. This value is within the range of values acceptable for use in SSF systems. A maximum limit of 40% CHP was recommended for growing Aspergillus niger strains in solid-state cultures (Moo-Young, Moreira, & Tengerdy, 1983). The value of CHP found for corn cobs was similar to those found for other materials such as lemon peel and apple pomace (Orzua et al., 2009).
Finally, the results of WAI and CHP obtained for corn cobs reveal that this material presents characteristics suitable for use as support-substrate in SSF processes.
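As a rough illustration of how these two indices relate to simple gravimetric measurements, the sketch below follows common definitions (gel retained per gram of dry material for WAI, and bound water as a percentage of the wet sample for CHP); the weighings are invented, and the cited methods of Anderson (1982) and Oriol et al. (1988) may differ in detail.

```python
def water_absorption_index(wet_gel_g, dry_sample_g):
    """WAI as grams of hydrated gel retained per gram of dry material."""
    return wet_gel_g / dry_sample_g

def critical_humidity_point(bound_water_g, wet_sample_g):
    """CHP as the percentage of water bound to the support and unavailable to the microorganism."""
    return 100.0 * bound_water_g / wet_sample_g

# Invented weighings for illustration only.
print(round(water_absorption_index(5.1, 2.0), 2))     # ~2.55 g gel / g dry material
print(round(critical_humidity_point(3.26, 10.0), 1))  # ~32.6 %
```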
Xylanase production by the isolated fungal strains under SSF conditions
In this step of the study, the ability of the five fungal strains previously selected by the plate-screening method (SOC-4A, SOC-4B, SOC-4C, SOC-4D, and SOC-5A) to produce xylanase under SSF conditions was evaluated. All the strains were able to grow and synthesize xylanase when cultivated on corn cobs under SSF conditions. The fermentation runs were carried out for 120 h, but the highest production of xylanase was observed at 96 h of fermentation. Figure 2 shows the relative xylanase activity recorded for all the strains at this time. Rhizomucor pusillus SOC-4A provided the highest value of xylanase activity, which was not significantly different (p < 0.05) from the value achieved with Rh. pusillus SOC-4B. Aspergillus fumigatus SOC-5A also presented a good ability to produce xylanase, providing an enzyme activity value similar to that observed for Rh. pusillus SOC-4B.
Since fungi from different species presented good ability to produce xylanase, a kinetic study was then carried out with the strains Rh. pusillus SOC-4A and A. fumigatus SOC-5A in order to better understand and compare the enzyme production by these fungi. As can be seen in Figure 3, the highest xylanase production (824 U/g) by Rh. pusillus SOC-4A occurred at 96 h of fermentation. Aspergillus fumigatus SOC-5A produced maximum enzyme activity (488 U/g) at 72 h, but this maximum value was 40% lower than that obtained with Rh. pusillus SOC-4A. In addition, the xylanase production by A. fumigatus SOC-5A occurred with lower productivity (6.78 U/g.h) when compared with the production by Rh. pusillus (8.59 U/g.h). These results reveal that Rh. pusillus SOC-4A was a more efficient xylanase producer strain than A. fumigatus SOC-5A, since it was able to produce more enzymes in a shorter fermentation time. In terms of stability, the xylanase produced by both fungi presented high thermostability, since the enzyme activity decreased only 2.0% and 6.0% for Rh. pusillus SOC-4A and A. fumigatus SOC-5A, respectively, after 15 min at 75°C. Taking into account the high thermal stability of the xylanase produced, and mainly the high enzyme production and productivity, Rh. pusillus SOC-4A was selected as the best xylanase-producing thermophilic fungal strain isolated in the present study.
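Assuming productivity is simply the maximum activity divided by the time needed to reach it, the reported figures are mutually consistent: 824 U/g ÷ 96 h ≈ 8.6 U/g.h for Rh. pusillus SOC-4A and 488 U/g ÷ 72 h ≈ 6.8 U/g.h for A. fumigatus SOC-5A, matching the 8.59 and 6.78 U/g.h values quoted above.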
The xylanase activity values (824 U/g, 8.59 U/g.h) obtained with Rh. pusillus SOC-4A can be well compared to values reported in other SSF studies on the production of this enzyme by different microorganisms (Kapilan & Arasaratnam, 2011;Sadaf & Khare, 2014). It is important to emphasize that the results of the presented study were obtained without optimizing the fermentation conditions and, therefore, it is expected that they can be further improved. Selecting the best operational conditions for use during the fermentation process is fundamental to achieving maximum product formation. These conditions can be selected by using statistical tools such as experimental designs and surface response methodology, for example (Mussatto & Roberto, 2008;Mussatto et al., 2013). Higher values of xylanase activity have been reported when using submerged fermentation systems (Bakri, Masson, & Thonart, 2010;Bokhari, Rajoka, Javaid, & Latif, 2010). Nevertheless, the use of SSF systems presents important advantages from the downstream point of view. Since the volume of medium used in SSF is low, the enzyme can be more easily recovered from this medium than from the large volumes of liquid medium used in submerged fermentation systems (Martins et al., 2011;Mussatto et al., 2012). This fact may have important influence in the practical and economic aspects of the global process (Mussatto et al., 2015).
Another interesting aspect presented in Figure 3 is that the xylanase production by both fungi, Rh. pusillus SOC-4A and A. fumigatus SOC-5A, decreased after attaining the maximum value. Such behavior can be related to a possible hydrolysis of the enzyme caused by proteases secreted by the microorganisms. Some studies report that Rh. pusillus (Macchione, Merheb, Gomes, & Da Silva, 2008;Yegin, Fernández-Lahore, Guvenc, & Goksungur, 2010) and A. fumigatus (Neustadt et al., 2009;Oguma et al., 2011) are protease-producing strains.
Effect of pH and temperature on activity of the xylanase produced by Rhizomucor pusillus

It is well known that pH and temperature are two important process variables affecting the enzyme activity. Therefore, this part of the study consisted in establishing the conditions of these variables in which the activity of the xylanase produced by Rh. pusillus SOC-4A can be maximized. A pH range varying from 4 to 10 and a temperature range varying between 55 and 80 °C were studied. The results obtained in these experiments indicated that the xylanase was highly active in a range of pH between 5.0 and 7.0, with maximum value at pH 6.0 (Figure 4). A pronounced drop in the xylanase activity was observed above pH 7.0. A similar pH range was reported for the xylanase produced by Rh. miehei (Fawzi, 2010), while different results were reported for other fungi. For example, xylanase purified from Arthrobacter sp. MTCC 5214 presented maximum activity at a higher and narrower pH range (between 7.0 and 8.0) (Khandeparkar & Bhosle, 2006), whereas the xylanase produced by Thermomyces lanuginosus presented maximum activity in a wider pH range, within 5.5-10.0 (Singh, Pillay, & Prior, 2000).
Regarding the temperature, the xylanase produced by Rh. pusillus SOC-4A presented maximum activity between 65 and 75°C, with optimum value at 70°C ( Figure 5). Fawzi (2010) reported an optimal temperature of 75°C for xylanase produced by Rh. miehei. In the present study, when the temperature reached 75°C, the relative xylanase activity was decreased to 77% of the maximum value achieved at 70°C.
Conclusions
New thermophilic fungal strains with the ability to produce thermostable xylanase were isolated from maize silage. Species from the genera Rhizomucor and Aspergillus were identified among the selected strains and demonstrated good ability to produce xylanase under solid-state fermentation conditions using corn cobs as support-substrate. The best results of activity and productivity were obtained for the thermostable xylanase produced by the isolated strain Rhizomucor pusillus SOC-4A. The enzyme produced by this fungus presented high activity within a pH range from 5.0 to 7.0, and temperature from 65 to 75°C, with maximum value at pH 6.0 and 70°C. This enzyme showed also good thermal stability since the activity decreased only 2.0% after 15 min at 75°C. These characteristics suggest that the xylanase produced by the new isolated fungus Rh. pusillus SOC-4A has potential for applications in different areas, including in the pulp and paper, food, biofuel, and textile industries.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
The author A. Robledo acknowledges CONACYT (Consejo Nacional de Ciencia y Tecnología) for the financial support to conduct his PhD study.

Figure 5. Effect of temperature on the activity of xylanase produced by Rhizomucor pusillus SOC-4A grown on corn cobs under solid-state fermentation conditions. Relative activity was determined at pH 6.0.
Post-COVID-19 Syndrome: Retinal Microcirculation as a Potential Marker for Chronic Fatigue
Post-COVID-19 syndrome (PCS) is characterized by persisting sequelae after infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). PCS can affect patients with all COVID-19 disease severities. As previous studies have revealed impaired blood flow as a provoking factor triggering PCS, it was the aim of the present study to investigate the potential association between self-reported chronic fatigue and retinal microcirculation in patients with PCS, potentially indicating an objective biomarker. A prospective study was performed, including 201 subjects: 173 patients with PCS and 28 controls. Retinal microcirculation was visualized by OCT angiography (OCT-A) and quantified using the Erlangen-Angio-Tool as macula and peripapillary vessel density (VD). Chronic fatigue (CF) was assessed according to the variables of Bell’s score, age and gender. VDs in the superficial vascular plexus (SVP), intermediate capillary plexus (ICP) and deep capillary plexus (DCP) were analyzed, considering the repetitions (12 times). Seropositivity for autoantibodies targeting G protein-coupled receptors (GPCR-AAbs) was determined by an established cardiomyocyte bioassay. Taking account of the repetitions, a mixed model was performed to detect possible differences in the least square means between the different groups included in the analysis. An age effect in relation to VD was observed between patients and controls (p < 0.0001). Gender analysis showed that women with PCS showed lower VD levels in the SVP compared to male patients (p = 0.0015). The PCS patients showed significantly lower VDs in the ICP as compared to the controls (p = 0.0001 (CI: 0.32; 1)). Moreover, considering PCS patients, the mixed model revealed a significant difference between those with chronic fatigue (CF) and those without CF with respect to VDs in the SVP (p = 0.0033 (CI: −4.5; −0.92)). The model included variables of age, gender and Bell’s score, representing a subjective marker for CF. Consequently, retinal microcirculation might serve as an objective biomarker in subjectively reported chronic fatigue in patients with PCS.
Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), first observed in Wuhan, Hubei Province, China, in 2019 [1], was declared a public health emergency of international concern in January 2020 by the World Health Organization (WHO) [2]. In March 2020, worldwide pandemic infection levels were reached, with impacts on social life, the economy, and healthcare systems [3]. Acute coronavirus disease 2019 (COVID- 19) caused a number of pneumonia cases [1] and can lead to various complications, such as respiratory and multiorgan failure [3]. By 18 February 2022, the virus's spread had increased to over 418 million cases, with over 5.8 million deaths (numbers obtained from the WHO) [4,5]. Apart from acute COVID-19 disease, post-COVID-19 syndrome (PCS) can arise afterwards.
PCS is defined as the persistence of symptoms for more than 12 weeks after infection with the virus (S1 guideline; AWMF online). Continuing symptoms more than 4 weeks after infection are recognized as long COVID symptoms or post-acute sequelae of COVID-19 [6].
The most common symptoms reported in studies are chronic fatigue (CF) and dyspnea (i.e., shortness of breath) [5]. Persistent symptoms may include neurocognitive impairments (brain fog, loss of attention), autonomic symptoms (chest pain, palpitations, tachycardia), gastrointestinal issues, musculoskeletal problems (myalgia), smell and taste dysfunction, cough, headache and hair loss [5][6][7]. Studies have reported that PCS can even affect people with moderately acute COVID-19 who did not require hospital care during the acute stage [5,8,9]. The absolute numbers of patients with PCS correspond to the shape and amplitude of the pandemic curve, showing the risk PCS poses for individual health, healthcare systems and the economy [10]. Studies have revealed that more than half of patients with COVID-19 infection reported PCS symptoms [11]. Others have postulated that 15% of all COVID-19 patients [6] and 50-70% of hospitalized patients suffer from PCS [7].
The pathogenesis of PCS is still elusive. Recent studies have revealed viral persistence and enduring tissue damage, including endotheliopathy, impaired microvasculature, hypercoagulation, thrombosis, neutrophil extracellular traps (NETs), chronic immune dysregulation, dysregulation of the renin-angiotensin-aldosterone system (RAAS) and hyperinflammation/autoimmunity, as possible pathomechanisms of PCS [6,[12][13][14]. It is assumed that autoimmune phenomena [15], including the generation of functionally active autoantibodies [5,16], are involved in the pathogenesis of PCS, with potentially different PCS subgroups [5,6]. Autoantibodies targeting G protein-coupled receptors (GPCR-AAbs) are of special interest, as GPCRs represent the largest receptor family in humans. A functional imbalance in these receptors, induced by functionally active GPCR-AAbs, is likely to disturb several processes in the human body. As a previous study in GPCR-AAb-positive glaucoma patients had shown a link with impaired microcirculation [17], an experimental therapy aiming to neutralize functionally active GPCR-AAbs was tested and improved PCS in a glaucoma patient [16]. Thus, it can be hypothesized that GPCR-AAbs are involved in the pathogenesis of PCS, potentially in combination with preconditions (e.g., ischemia) [5,18,19]. In addition, recent studies have examined increased D-dimer levels up to 4 months post-acute infection in approximately 25% of patients [20]. The mechanisms of these persistent procoagulant effects in PCS have not been clarified at this point in time [20]. Endotheliopathy and elevated plasma markers of endothelial cell activation have been recognized in patients with severe COVID-19 [20]. Studies have investigated persistent endotheliopathy in patients with PCS, which is associated with enhanced thrombin generation potential independently of ongoing acute-phase response or NETosis [20]. Autopsy studies revealed that alveolar capillary microthrombi were nine times more prevalent in patients with COVID-19 compared to patients with influenza [21]. It can be assumed that impaired microcirculation might be one factor contributing to the clinical symptoms of PCS.
Fatigue is one of the most common symptoms of PCS, which emphasizes its impact on individual health, healthcare systems and economics [6,11,22]. The study "Assessment and characterization of post-COVID-19 manifestations" revealed that fatigue was the most commonly reported symptom, affecting 72.8% of patients [23]. Sudre et al. analyzed data from the COVID-19 Symptom Study app and concluded that "self-reported fatigue is the commonest complaint in a large group of Long-COVID patients" [24,25]. Associations between fatigue in PCS and laboratory markers of inflammation and cell turnover (leukocyte, neutrophil or lymphocyte counts; neutrophil-to-lymphocyte ratio; lactate dehydrogenase and C-reactive protein levels) or pro-inflammatory molecules (IL-6 or sCD25) have not been observed to date [11]. Thus, it would be of interest to establish an objective marker for patient self-reported fatigue.
The eye, as a window into the human body, can be used as a "diagnostic window" for several systemic disorders. The retinal capillary system represents the microcirculation in the whole human body; thus, retinal capillary disorders might represent whole human microcirculatory disorders. Retinal macular and peripapillary capillary plexuses can be visualized by optical coherence tomography angiography (OCT-A) and quantified using the Erlangen-Angio-Tool (EA-Tool) [5,[26][27][28]. OCT-A is easy to perform and allows for noninvasive measurement without any contact with the human eye. It measures differences in the speckle patterns of backscattered light in two or more repeated scans. These differences are caused by moving particles, such as red blood cells (RBCs) [29]. The aim of this study was to investigate the association between self-reported chronic fatigue and retinal microcirculation in patients with PCS to potentially indicate an objective biomarker. In addition, serum samples were screened for GPCR-AAbs, considering their potential impact on microcirculation.
Results

Significant effects for age and gender were observed with respect to VD in the SVP, ICP and DCP (p < 0.0001). Considering the influence of age, we observed that, with increasing age, VD in the SVP, ICP and DCP decreased (Figure 1). Estimated values were −0.06 (SVP), −0.06 (ICP) and −0.07 (DCP) in patients with PCS.
Correlations between gender and VD in the SVP, ICP and DCP in patients with PCS are plotted in Figure 2. Females showed lower levels for each comparison of retinal layers. Significantly lower (Type 3 Tests of Fixed Effects) VDs in the SVP in female patients with PCS were observed compared to males (LS mean difference = 1.05 (CI: 0.41; 1.69), p = 0.0015; Figure 3) with increasing age.
The analysis using a mixed model with 12 repetitions, which was corrected for age and gender, yielded significantly impaired VDs in the ICP (p = 0.0001), yet not in the SVP or DCP (p > 0.05) in patients with PCS, compared to controls (Table 1).
Instead, in the PCS patients, the complete model (including age, gender and Bell's score variables) revealed a significant difference between patients with chronic fatigue (CF) and those without CF with respect to VD in the SVP (p = 0.0033 (CI: −4.5; −0.92)). Considering GPCR-AAbs and their potential impact on microcirculation in patients with PCS, a mixed model with repetitions was generated, with combinations of GPCR-AAbs inserted. Significant effects of β2-AAb on VD in the SVP (p = 0.0075), of Noci-AAb (p = 0.0344) and β2-AAb (p = 0.0112) and of ETA-AAb (p = 0.0261) on VD in the ICP, along with a trend associating Noci-AAb (p = 0.055) with VD in the DCP, were observed in the present cohort (Table 3).
Table 3. Differences in the least square means of VDs in the SVP, ICP and DCP in patients with post-COVID-19 syndrome (PCS), considering seropositivity for GPCR-AAb (1) and seronegativity for GPCR-AAb (0). Notable significant effects were observed in the SVP for β2-AAb (p = 0.0075) and in the ICP for Noci-AAb (p = 0.0344), β2-AAb (p = 0.0112) and ETA-AAb (p = 0.0261); in the DCP, an associative trend with Noci-AAb (p = 0.055) was also observed.
Discussion
Post-COVID-19 syndrome (PCS) is a challenge for individual health, the healthcare system and the economy due to its high prevalence in patients worldwide. The leading clinical symptom of PCS is self-reported fatigue, as it is one of the most common symptoms associated with PCS among large groups of patients [6,11,[22][23][24][25]. The prevalence of CF is not associated with COVID-19 severity, which indicates that it potentially affects a high number of patients [11]. As each clinician prefers objective biomarkers in addition to self-reported clinical symptoms, the aim of this study was to investigate the association of self-reported chronic fatigue and retinal microcirculation in patients with PCS to potentially indicate an objective biomarker. An age effect with respect to VD was observed in patients and controls (p < 0.0001). Gender analysis revealed that women with PCS showed lower VD levels in the SVP especially, compared to male patients (p = 0.0015). Previous studies revealed a PCS ratio of 3:1 in women and men [30,31]. In addition, the present data conform with clinical observations that women with PCS had a higher probability of fatigue and anxiety/depression throughout 6-month follow-up [32,33]. Patients with PCS showed significantly lower VDs in the ICP compared to controls (p = 0.0001 (CI: 0.32; 1)), considering age and gender effects. Among PCS patients, the mixed model revealed a significant difference between those with chronic fatigue (CF) and those without CF with respect to VDs in the SVP (p = 0.0033 (CI: −4.5; −0.92)). The model included as variables age, gender and Bell's score, the latter representing a subjective marker for CF. The variable of Bell's score was always significant for each VD. Thus, the eye as a window into the human body might offer an objective diagnostic option through the measurement of retinal microcirculation in cases of self-reported CF among patients with PCS. Considering GPCR-AAbs and impaired microcirculation, data for the present cohort showed a significant effect of β2-AAb on VD in the SVP (p = 0.0075), of Noci-AAb (p = 0.0344) and β2-AAb (p = 0.0112) and of ETA-AAb (p = 0.0261) on VD in the ICP, along with a trend of association between Noci-AAb (p = 0.055) and VD in the DCP.
To date, there is no uniform consensus about the definition of CF in PCS. It is assumed that the label CF/post-COVID-19 fatigue, in line with definitions of post-infectious fatigue, should be applied when fatigue is the dominant symptom, is chronic and disabling (preventing pre-illness activities and duties), is intensified after mental and/or physical activity (post-exertional malaise, PEM) [34,35], has persisted for 6 months or longer (3 months in children and adolescents), occurred during confirmed acute COVID-19 and has persisted without a symptom-free interval since onset [36]. The unknown nature of PCS and its phenotypical similarity to post-infectious fatigue syndrome has led some studies to suggest a connection to myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) [37][38][39]. Post-exertional malaise (PEM), a leading symptom of ME/CFS, is characterized by worsening symptoms after low or moderate daily activity for several hours or weeks [35,40]. This burden has also been found in PCS patients [35,40]. Patients lose the ability to engage in pre-illness levels of activity in social life, work or school [39]. Studies have revealed that patients react abnormally to stressors, e.g., they wake up with abnormal rises in serum cortisol levels and heart rates [41]. Women seem to be more affected than men [42].
CF in PCS exhibits similar incidences in hospitalized and non-hospitalized patients [22,43]. Fatigue and cognitive impairment are assumed to endure and may worsen over time in individuals with <6-month and ≥6-month follow-ups [22,43]. If CF is identified in PCS, an underlying diagnostic work-up should follow.
At the moment, CF diagnostics is based on brief questionnaires to characterize the fatigue state, such as the Calder Fatigue Scale or the SPHERE [36]. These methods try to identify CF in line with the disease-specific recommendations from the National Institute of Neurological Disorders and Stroke Common Data Elements [36]. As CF is often part of a multisymptomatic cluster, SPHERE includes related physical symptoms in the diagnostics, and other systems also include mental health questions (e.g., the Patient Health Questionnaire-9) [36]. CF in PCS is associated with marked functional impairment [22]. As a subgroup of patients still exhibited inflammatory markers after acute COVID-19 infection, it has been suggested that hyperinflammation is a cause of CF in PCS [22]. The causal association between specific pro-inflammatory cytokines, mood symptoms and cognitive decline has been confirmed [44,45]. Other post-infectious syndromes (e.g., post-infectious encephalitis) have been associated with inflammatory processes [46]. The pathophysiology of CF remains unresolved [36].
To the best of our knowledge, the present study has revealed for the first time an association between CF in PCS, assessed by Bell's score, and impaired retinal microcirculation, as determined by OCT-A, potentially indicating an objective biomarker. OCT-A is able to visualize retinal macular and peripapillary capillary plexuses [5,[26][27][28]. It is easy to perform and allows for non-invasive measurement without contact with the human eye. The technical basis is the recording of a real-time motion signal based on temporal changes in intravascular moving red blood cells (RBCs) [29]. If a signal is recorded, a retinal pixel is coded 'white'; without any motion, it is coded 'black' (i.e., the coding is binary) [5]. The data can be analyzed with high reliability and reproducibility using the Erlangen-Angio-Tool (EA-Tool) [33]. Fine-grained analysis can be performed by division of the scan region into 12 sectors (macula) or 4 sectors (peripapillary region) to calculate the overall and sectorial vessel density (VD). The eye, as a "window" into the human body, is representative of several systemic disorders [47][48][49][50][51]. Alveolar capillary occlusion is a characteristic symptom of COVID-19 which can lead in severe cases to respiratory failure, as blood oxygen uptake is limited [22,52]. Impaired microcirculation can be found in acute COVID-19 as well as in PCS [5,[53][54][55]. The virus may infect endothelial cells directly via Angiotensin Converting Enzyme 2 (ACE2), leading to inflammation and fibrosis [53], and may trigger the generation of several autoantibodies targeting receptors involved in the regulation of blood flow. Specific GPCR-AAbs were observed to have an impact on microcirculation, confirming previous data [5,17,18]. Noci-AAb, β2-AAb and ETA-AAb were observed to be linked to impaired retinal microcirculation in different retinal layers.
The present study showed that female patients with PCS exhibited lower vessel density (VD) levels than men in all compared retinal layers (SVP, ICP and DCP). This more pronounced impairment of microcirculation in female patients with PCS is in line with the results of other PCS studies [5]. The analysis, which was corrected for age and gender, showed significantly impaired VD in the ICP in patients with PCS compared to controls. Interestingly, Angiotensin Converting Enzyme 2 (ACE2), an ectopeptidase, is located in the ICP retinal layer. There is evidence that SARS-CoV-2 enters the human body by first binding to the ectoenzyme ACE2, which acts as the receptor [56]. A cellular serine protease is additionally required to prime the viral spike "S" protein for entry into cells [56]. Including the additional explanatory variable Bell's score, a subjective marker for CF, in the mixed model revealed a significant effect on VD in the SVP when comparing PCS patients with and without CF. Thus, CF has a significant impact on retinal microcirculation, indicating that the latter is a potential objective biomarker that might provide an objective diagnostic option for CF in PCS patients. The eye as a window into the human body might thus offer an objective diagnostic option through the measurement of retinal microcirculation in self-reported chronic fatigue in patients with PCS. It can be assumed that retinal microcirculation can have an impact as a diagnostic tool in PCS patient populations and that it might additionally have an impact in the treatment of related diseases, such as ME/CFS.
The study is not without limitations. The number of controls was low (n = 28). However, the present analysis aimed to investigate differences within the post-COVID-19 syndrome cohort itself (n = 173). In addition, the mean age of the present cohort was 39.7 ± 12 years; cross-sectional studies with a wider age distribution are necessary. In addition, it would be of interest if long-term studies could observe further alterations in OCT-A data over time.
Patients

SARS-CoV-2 infection was confirmed by a positive real-time reverse transcription polymerase chain reaction test. Post-COVID-19 symptom persistency was 231 ± 111 days at the time of study participation. The most common self-reported post-COVID-19 symptoms in the present cohort were CF (92%), impaired concentration (83%), hair loss (63%), POTS (19%) and subjectively colder hands (12%). No local or systemic eye disorders with retinal involvement were present. The patients underwent ophthalmic examinations, including measurement of best-corrected visual acuity (BCVA), non-contact intraocular pressure (IOP), measurement of axial length (IOL Master, Zeiss, Oberkochen, Germany) and OCT angiography (see below for details). Best-corrected visual acuity was 0.97 ± 0.1 (post-COVID-19 patients) and 1.12 ± 0.2 (controls). Anamnestic data, including self-reported chronic fatigue, were recorded. In addition, a subgroup of patients with post-COVID-19 assessed their self-reported fatigue using a chronic fatigue score system (Bell's score; n = 104). The Bell's score result was 46 ± 19. All patients signed a written informed consent form. The study was approved by the local ethics committee and performed in accordance with the tenets of the Declaration of Helsinki.
OCT-A
OCT-A (Heidelberg Spectralis II, Heidelberg, Germany) is a diagnostic technique used to visualize retinal microcirculation in the macula and peripapillary region. Retinal macula microvasculature can be subdivided into three layers: the superficial vascular plexus (SVP), the intermediate capillary plexus (ICP) and the deep capillary plexus (DCP). All OCT-A scans have an angle of 15° covering a size of 2.9 mm × 2.9 mm, with a lateral resolution of 5.7 µm/pixel [5].
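As a rough consistency check (assuming the 15° scan is sampled on a 512 × 512 A-scan grid, which is an assumption rather than a value stated above), the quoted lateral resolution follows directly from the scan width:

$$2.9\ \mathrm{mm} / 512 \approx 5.7\ \mu\mathrm{m/pixel}.$$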
The OCT-A data were exported by the SP-X1902 software (prototype software, Heidelberg Engineering, Heidelberg, Germany) and analyzed by the Erlangen-Angio-Tool (EA-Tool) software, which is coded in MATLAB (The MathWorks, Inc., Natick, MA, USA, R2017b). Studies have revealed high levels of reliability and reproducibility for the EA-Tool [60]. For the analysis, VD was computed in 12 sectors of the macula. Moreover, overall VD was computed as a mean over the sectors. In addition, the Anatomic Positioning System (APS-part of Glaucoma Module Premium Edition (GMPE), Heidelberg Engineering, Heidelberg, Germany) was implemented in the EA-Tool. This feature aligns all OCT-A scans according to their individual fovea-to-Bruch's membrane opening center axis (FoBMOC) to allow for a better comparison of different scans. This FoBMOC axis is defined by the fovea and the center of Bruch's membrane opening [5].
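The EA-Tool itself is not reproduced here; the following Python sketch only illustrates the kind of sectorial vessel-density calculation described above, on a hypothetical binary en-face image in which white pixels code flow signal. The sector geometry, image size and thresholding are simplifying assumptions, not the EA-Tool's actual implementation.

```python
import numpy as np

def sector_vessel_density(binary_enface: np.ndarray, n_sectors: int = 12):
    """Compute sectorial and overall vessel density (VD, in %) on a binary en-face image.

    The image is divided into n_sectors angular sectors around its centre
    (standing in for the fovea); VD per sector is the fraction of flow pixels.
    """
    h, w = binary_enface.shape
    yy, xx = np.mgrid[0:h, 0:w]
    angles = np.arctan2(yy - h / 2.0, xx - w / 2.0)                  # -pi .. pi
    sectors = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    vd = np.array([100.0 * binary_enface[sectors == s].mean() for s in range(n_sectors)])
    return vd, float(vd.mean())                                      # sectorial VDs, overall VD

# Toy usage on a random 512 x 512 binary image
rng = np.random.default_rng(0)
toy_image = (rng.random((512, 512)) < 0.45).astype(np.uint8)
sector_vd, overall_vd = sector_vessel_density(toy_image)
print(sector_vd.round(1), round(overall_vd, 1))
```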
Statistical Analysis
The data were analyzed using different mixed models (SAS version 9.4, SAS Institute Inc., Cary, NC, USA), taking into consideration the repetitions for the eyes (12 times) for each sector of the macula in the OCT-A scans. In the first model, we compared the PCS patients and the controls; group membership was set as the independent variable. In the second model, we excluded the control group; the independent variable was chronic fatigue. We estimated the least square means (LS means) that corresponded to the specified effects for the linear predictor part of the model and the relative confidence limits. LS means provide more accurate estimates than raw means when cofactors are present. Age and gender were introduced as covariates in both models. In the second model, we also added Bell's score as a predictive variable. A mixed model was also calculated for the combination of the GPCR-AAb variables, testing possible differences in the SVP, ICP and DCP. The p-values (the α-value was set at 0.05) are presented with their respective confidence interval limits (CLs). All the CLs and p-values in the multiple comparisons were adjusted with Tukey-Kramer tests.
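The analysis above was run in SAS; as a minimal, non-equivalent sketch of the same modelling idea (a linear mixed model on long-format sector-level VD data with a per-subject random effect), one could use Python's statsmodels. The file name and column names are hypothetical, and the SAS-specific LS-means and Tukey-Kramer adjustments are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per macular sector (12 per eye) with columns
# vd (vessel density), group ("PCS"/"control"), cf (chronic fatigue yes/no),
# bell (Bell's score), age, gender and subject (patient identifier).
df = pd.read_csv("octa_vd_long_format.csv")

# Model 1: PCS vs. controls, adjusted for age and gender, subject as random intercept
m1 = smf.mixedlm("vd ~ group + age + gender", data=df, groups=df["subject"]).fit()

# Model 2: PCS patients only, with chronic fatigue and Bell's score as predictors
pcs = df[df["group"] == "PCS"]
m2 = smf.mixedlm("vd ~ cf + bell + age + gender", data=pcs, groups=pcs["subject"]).fit()

print(m1.summary())
print(m2.summary())
```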
Conclusions
Post-COVID-19 syndrome is a post-infectious disease with a multifactorial pathomechanism and symptoms. We were able to reveal differences in VDs in the ICPs of the control and PCS patients. Considering the PCS patient group, we were able to observe differences in VD in the SVP between PCS patients with and without CF. In addition, GPCR-AAb showed an impact on impaired retinal microcirculation. As self-reported fatigue is one of the most common symptoms in PCS, the present study showed that vessel density in retinal microcirculation as measured by OCT-A might serve as an objective biomarker for this subjectively reported symptom.
Health workforce metrics pre- and post-2015: a stimulus to public policy and planning

Background: Evidence-based health workforce policies are essential to ensure the provision of high-quality health services and to support the attainment of universal health coverage (UHC). This paper describes the main characteristics of available health workforce data for 74 of the 75 countries identified under the 'Countdown to 2015' initiative as accounting for more than 95% of the world's maternal, newborn and child deaths. It also discusses best practices in the development of health workforce metrics post-2015. Methods: Using available health workforce data from the Global Health Workforce Statistics database from the Global Health Observatory, we generated descriptive statistics to explore the current status, recent trends in the number of skilled health professionals (SHPs: physicians, nurses, midwives) per 10 000 population, and future requirements to achieve adequate levels of health care in the 74 countries. A rapid literature review was conducted to obtain an overview of the types of methods and the types of data sources used in human resources for health (HRH) studies. Results: There are large intercountry and interregional differences in the density of SHPs to progress towards UHC in Countdown countries: a median of 10.2 per 10 000 population with range 1.6 to 142 per 10 000. Substantial efforts have been made in some countries to increase the availability of SHPs as shown by a positive average exponential growth rate (AEGR) in SHPs in 51% of Countdown countries for which there are data. Many of these countries will require large investments to achieve levels of workforce availability commensurate with UHC and the health-related sustainable development goals (SDGs). The availability, quality and comparability of global health workforce metrics remain limited. Most published workforce studies are descriptive, but more sophisticated needs-based workforce planning methods are being developed. Conclusions: There is a need for high-quality, comprehensive, interoperable sources of HRH data to support all policies towards UHC and the health-related SDGs. The recent WHO-led initiative of supporting countries in the development of National Health Workforce Accounts is a very promising move towards purposive health workforce metrics post-2015. Such data will allow more countries to apply the latest methods for health workforce planning. Electronic supplementary material: The online version of this article (doi:10.1186/s12960-017-0190-7) contains supplementary material, which is available to authorized users.
Background Evidence-based health workforce policies are essential to ensure the provision of high-quality health services and to support the attainment of universal health coverage (UHC). This paper describes the main characteristics of available health workforce data for 74 of the 75 countries identified under the ‘Countdown to 2015’ initiative as accounting for more than 95% of the world’s maternal, newborn and child deaths. It also discusses best practices in the development of health workforce metrics post-2015. Methods Using available health workforce data from the Global Health Workforce Statistics database from the Global Health Observatory, we generated descriptive statistics to explore the current status, recent trends in the number of skilled health professionals (SHPs: physicians, nurses, midwives) per 10 000 population, and future requirements to achieve adequate levels of health care in the 74 countries. A rapid literature review was conducted to obtain an overview of the types of methods and the types of data sources used in human resources for health (HRH) studies. Results There are large intercountry and interregional differences in the density of SHPs to progress towards UHC in Countdown countries: a median of 10.2 per 10 000 population with range 1.6 to 142 per 10 000. Substantial efforts have been made in some countries to increase the availability of SHPs as shown by a positive average exponential growth rate (AEGR) in SHPs in 51% of Countdown countries for which there are data. Many of these countries will require large investments to achieve levels of workforce availability commensurate with UHC and the health-related sustainable development goals (SDGs). The availability, quality and comparability of global health workforce metrics remain limited. Most published workforce studies are descriptive, but more sophisticated needs-based workforce planning methods are being developed. Conclusions There is a need for high-quality, comprehensive, interoperable sources of HRH data to support all policies towards UHC and the health-related SDGs. The recent WHO-led initiative of supporting countries in the development of National Health Workforce Accounts is a very promising move towards purposive health workforce metrics post-2015. Such data will allow more countries to apply the latest methods for health workforce planning. Electronic supplementary material The online version of this article (doi:10.1186/s12960-017-0190-7) contains supplementary material, which is available to authorized users.
Background
The case for universal health coverage (UHC) is wellestablished, but its implications for the health workforce have only recently started to receive attention. Countries working towards UHC need to keep track of the size and composition of their health workforce and to anticipate future need for human resources for health (HRH) [1]. This can be strategically informed by valid and reliable workforce data [2]; without these data, decisionmakers are unable to plan strategically or anticipate future needs [3,4].
The importance of HRH data and the need to improve them has been stressed by the World Health Organization (WHO), the World Bank and the Organization for Economic Co-operation and Development [5][6][7]. To date, data collection processes and mechanisms have tended to be developed at a country level. There have been attempts to create harmonised regional and global data sets [8,9], but this work requires further development.
Recent data from the International Labour Organization estimate a global shortfall of over 10 million health workers, principally affecting countries with the highest burden of mortality and morbidity [10]. The focus of this paper is on the 75 'Countdown to 2015' countries. Countdown to 2015 was a global movement which tracked progress towards the health-related Millennium Development Goals (MDGs) in the 75 countries where more than 95% of maternal and child deaths occurred [11]. The Countdown to 2015 collaboration has now evolved into the 'Countdown to 2030 for Reproductive, Maternal, Newborn, Child, and Adolescent Health and Nutrition' initiative, which will continue the focus on high-burden countries, particularly in Sub-Saharan Africa and South Asia [12], making these 75 countries still highly relevant in the post-2015 era.
This paper uses two health worker density thresholds to assess the HRH situation in the Countdown countries. Firstly, the 2006 World Health Report [13] stated that countries with fewer than 22.8 physicians, nurses and midwives per 10 000 population were highly unlikely to be able to provide 80% coverage of the most basic health services [14]. Secondly, WHO recently developed an 'SDG Index threshold' as an indicative minimum density representing the need for health workers to achieve the health targets of the Sustainable Development Goals (SDGs). The value of the threshold was determined to be 4.45 doctors, nurses and midwives per 1000 population (or 44.5 per 10 000) [15].
These two thresholds are both needs-based yet vary in interpretation: countries below the 22.8 threshold may be thought of as having too few health workers to meet even the most basic health needs, whereas the 44.5 threshold can be thought of as a step forward in identifying the minimum health workforce requirements to achieve the health-related SDGs.
As stated in the 2006 World Health Report, these thresholds 'are not a substitute for specific country assessments of sufficiency, nor do they detract from the fact that the effect of increasing the number of health workers depends crucially on other determinants' [13]. Additionally, there are limitations in both these thresholds and the quality of the data used to calculate countries' health worker densities [16][17][18][19]. However, in the absence of robust estimates of HRH development, thresholds offer a common comparative value against which countries can be monitored to check HRH progress or the lack of it [20]. Therefore, this study aimed to (a) describe HRH metrics in the 75 Countdown countries using a global and comparable source and (b) describe and assess some commonly used HRH metrics, their sources of data, and the methods used to analyse HRH data in the research literature. The fulfilment of these two objectives allows us to make some recommendations about how HRH metrics (and the data that feed into them) could be developed in the SDG era.
Methods
To describe the characteristics of HRH metrics in the 75 Countdown countries, we used two indicators of workforce availability: (1) number of skilled health professionals (SHP numbers: nurses, midwives and physicians) and (2) density of skilled health professionals per 10 000 population (SHP density). We extracted data on SHP numbers from 2004 to 2014 for each of the 75 Countdown countries from the WHO Global Health Workforce Statistics database [21]. This database compiles data from four main sources: population censuses, labour force and employment surveys, health facility assessments and routine administrative information systems. Most of the data from administrative sources are derived from published national health sector reviews and/or official reports to WHO offices.
No SHP data were available for South Sudan, so this country was excluded. To calculate SHP densities for each of the remaining 74 countries, country data on SHP numbers were divided by the population size [22] of each country in the relevant year. We used descriptive statistics to examine the current levels of SHP density and their association with (1) gross domestic product (GDP) and health expenditure and (2) country-specific health outcomes and health care coverage indicators. We used Pearson's r to measure the correlation between SHP density and these indicators. We conducted analysis by region by allocating each country to one of the seven UNICEF regions (the regional presentation of similar analysis presented by the Countdown initiative).
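The density calculation and correlation analysis described above are straightforward; the following Python sketch shows the idea on made-up numbers (the country records are purely hypothetical and are not values from the WHO database):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical records: (country, SHP headcount, population, maternal mortality ratio)
records = [
    ("Country A", 12_000, 10_000_000, 550),
    ("Country B", 95_000, 20_000_000, 120),
    ("Country C",  4_000, 25_000_000, 820),
]

density = np.array([n / pop * 10_000 for _, n, pop, _ in records])   # SHPs per 10 000
mmr = np.array([m for *_, m in records])

below_basic = density < 22.8        # below the 2006 World Health Report threshold
below_sdg = density < 44.5          # below the SDG index threshold

r, p = pearsonr(density, mmr)       # association between SHP density and MMR
print(density.round(1), below_basic, below_sdg, round(r, 2), round(p, 3))
```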
We used descriptive statistics to explore trends over time in SHP density for 53 of the 54 Countdown countries which reported SHP numbers for two points in time: 2004 (or the closest year prior to 2004; the oldest data are from 1997 for Angola) and the most recent year available (see Additional file 1). Uganda was excluded from the trend analysis due to a highly discrepant change in the number of SHP reported between 2004 and 2005, the two time points available for this country. For 52 of these 53 countries, it was possible to explore trends over time disaggregated into (1) number of nurses and midwives and (2) number of physicians (the exception was Madagascar, for which such disaggregated data were not available). Trends over time were measured using the average exponential growth rate (AEGR), Eq. 1:

$$\mathrm{AEGR} = \frac{\ln\left(w_n / w_1\right)}{n} \times 100\% \qquad (1)$$

In Eq. 1, $w_1$ and $w_n$ are the first (2004 or nearest) and the latest observations of variable w (either SHP numbers or SHP density) in a period of n years. The AEGR is most suitable to define the growth rate between two points in time for certain demographic indicators, notably labour force and population. The AEGR does not correspond to the annual rate of change measured at a 1-year interval, but rather to an average rate that is representative of the available observations over the entire period.
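A minimal Python implementation of Eq. 1, assuming the convention that n is the number of years separating the two observations (the example values are hypothetical):

```python
import math

def aegr(w_first: float, w_last: float, n_years: float) -> float:
    """Average exponential growth rate (% per year) between two observations n_years apart."""
    return math.log(w_last / w_first) / n_years * 100.0

# e.g. an SHP density rising from 8.0 to 10.5 per 10 000 over 10 years
print(round(aegr(8.0, 10.5, 10), 2))   # ~2.72 % per year
```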
The rationale behind exploring trends over time in both SHP numbers and SHP densities is that these two indicators measure different things: while changes in SHP numbers measure the effort made by a health care system in increasing the overall availability of skilled health professionals, changes in the SHP density measure reconciles the changes in the availability of SHP given an individual country's population growth.
Looking forward, a final analysis calculated the AEGR required in each Countdown country to reach a density of 44.5 SHP per 10 000 population by 2030.
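The same formula can be inverted to give the growth rate in SHP numbers needed to reach the 44.5-per-10 000 threshold by 2030; the sketch below assumes a projected 2030 population is available, and the example country is hypothetical:

```python
import math

def required_aegr(shp_now: float, projected_pop_2030: float, years_to_2030: float,
                  target_density: float = 44.5) -> float:
    """AEGR (% per year) in SHP numbers needed to reach target_density per 10 000 by 2030."""
    shp_target = target_density / 10_000 * projected_pop_2030
    return math.log(shp_target / shp_now) / years_to_2030 * 100.0

# Hypothetical country: 30 000 SHPs today, 28 million people projected for 2030, 15 years away
print(round(required_aegr(30_000, 28_000_000, 15), 1))   # ~9.5 % per year
```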
To complement this analysis and to inform the future development of global HRH metrics, we investigated the types of methods and data sources used in other HRH studies by means of a rapid literature review. The search was conducted in PubMed, in the Bulletin of the World Health Organization and in Human Resources for Health, using the following four search terms: 'Human Resources for Health' , 'Data' , 'Metrics' and 'Statistics'.
The initial search yielded 1144 papers. This was narrowed down to 237 on the basis of a title review, then to 125 on the basis of an abstract review. From these, 86 were selected for full review according to the following inclusion criteria: (1) used a quantitative approach (or mixed methods including some quantitative), (2) used HRH data, (3) published in or after 2004 and (4) published in English. Only papers from peer-reviewed publications were included (i.e. there was no grey literature). Nearly 90% of the studies were published from 2007 onwards, reaching its peak in 2013 (n = 14). The final cutoff date for inclusion was May 2015, and the whole process was conducted by one researcher.
Relevant information about the study attributes was extracted and recorded as follows:
- The income level of the countries included in the study (high-, upper-middle, lower-middle and low-income according to the World Bank income classification)
- The type of method used (basic descriptive and inferential statistics, regression analysis, workforce modelling)
- The comparative level where the study was conducted (multi-country, national, national and subnational)
- The health workforce cadre(s) included in the analysis (physicians, nurses, midwives and others)
- The type of data sources (global, national, subnational and administrative, institutional)
- The metrics used (headcounts, densities and others)
- The topic of interest (e.g. migration, distribution, among others)
Results

HRH metrics in the Countdown countries
Wide variations in SHP density were observed; the median SHP density in the 74 countries was 10.2 per 10 000 population, with estimates ranging from 1.6 in Madagascar and Niger to 142 in Uzbekistan. Of the 74 countries, 55 (74%) fell short of the 22.8 threshold, 8 had a SHP density between 22.8 and 44.5 and 11 had a SHP density of 44.5 or above (see Additional file 1). Most of the countries with very low densities are in sub-Saharan Africa and South Asia (see Fig. 1). Figure 2 shows how SHP density varied by UNICEF region. All five Countdown countries in the Central and Eastern Europe region had at least 44.5 SHPs per 10 000 population. However, in other regions, SHP densities tended to be much lower. Figure 3 shows the strong association between World Bank income group [23] and SHP density: 32 (94%) of the low-income countries had a density <22.8, compared with 20 (71%) lower-middle-income countries and three (18%) upper-middle- and high-income countries.
Countries with lower SHP densities tended to have worse maternal and newborn health (MNH) outcomes, as illustrated in Figs. 4 and 5. Figure 4 shows that Countdown countries with higher SHP densities had lower maternal mortality ratios (r = −0.56, p < 0.05) according to WHO mortality estimates [24]. Countdown countries in the highest quintile of SHP density (31.2+ SHPs/10 000 population) had a median maternal mortality ratio 11% of that of countries belonging to the lowest quintile of SHP density (<4.9 SHPs/10 000 population). Figure 5 shows that countries with higher SHP densities had (1) lower stillbirth rates (r = −0.56, p < 0.05), (2) lower neonatal mortality rates (r = −0.45, p < 0.05) according to Healthy Newborn Network data [25] and (3) lower under-5 mortality rates (r = −0.48, p < 0.05) according to UN data [26]. These results cannot prove a causal relationship between health worker density and MNH outcomes, since strong confounders such as quality of care or social factors are not taken into account in this analysis. However, the contribution of health worker density to the improvement of health outcomes has been shown in other studies [27]. Moreover, it should be noted that, for most Countdown countries, estimates of the maternal mortality ratio and the stillbirth rate are generated by statistical modelling rather than empirical data [28,29]. Factors such as GDP and gross national income (GNI) are used as predictors in the modelling, which is bound to affect the observed correlation between mortality estimates and SHP density. Figure 6 shows that, where data are available, countries with higher SHP density had higher coverage of skilled birth attendance [30,31]: Countdown countries in the highest quintile of SHP density (31.2+ SHPs/10 000 population) estimate a median skilled birth attendance coverage close to double that of those in the lowest quintile of SHP density (<4.9 SHPs/10 000 population).
As measured by the AEGR, of the 53 countries with at least two data points, 27 (51%) showed an increase in SHP density, 19 (36%) showed a decrease and the remaining 6 (11%) showed little or no change (AEGR between −1 and 1%). Figure 7 shows which countries fall into each of these categories: Djibouti, Egypt and the Gambia all showed a positive AEGR greater than 10%. On the other hand, in Madagascar, Swaziland, Cameroon and Sierra Leone a large negative AEGR was observed.
By UNICEF regions, Fig. 8 shows the direction of change in AEGR and Fig. 9 its magnitude. In all regions except the Americas & Caribbean and West & Central Africa, the number of countries showing a positive AEGR in SHP density was larger than the number showing a negative AEGR (Fig. 8). None of the four South Asian countries recorded a decrease in SHP density, and four of the five countries in Middle East and North Africa recorded an increase. On the other hand, countries in West and Central Africa were the least likely to display an increase in density (only five of 18 countries did). The data from West and Central Africa illustrate well the difficulty of keeping pace with a growing population; 7 out of the 18 countries in this region recorded an increase in SHP numbers, but only 5 recorded an increase in SHP density. Figure 9 shows that, in the 32 out of 53 countries (60%) with a positive AEGR in SHP numbers, most (22) recorded an increase of 5% or more, including nine countries with an AEGR of 10% or more. Middle East and North Africa region was a particularly strong performer. By contrast, East Asia and the Pacific and Americas and the Caribbean regions showed weaker growth.
For the 52 countries for which it was possible to disaggregate the AEGRs in SHP numbers for physicians and for nurses and midwives, there was a positive correlation between the AEGR in the number of physicians and the AEGR in the number of nurses and midwives (r = 0.48, p < 0.05). Figure 10 shows the range of values underlying this association. Of 18 countries with a negative AEGR for nurses and midwives, half showed a null or positive AEGR for physicians (including Swaziland, Chad, Zambia, Malawi and Somalia). Looking to the future, Fig. 11 shows the required AEGR in SHP numbers that each of the 63 Countdown countries with SHP density below 44.5 SHPs per 10 000 population require to reach that threshold by 2030. Six countries (Botswana, China, Gabon, India, Peru and Viet Nam) require a solid AEGR (<5%) and a further 20 countries (Angola, Bolivia, Comoros, Congo, Djibouti, Ghana, Guatemala, Indonesia, Lesotho, Morocco, Myanmar, Nepal, Nigeria, Pakistan, São Tomé and Principe, Solomon Islands, Sudan, Swaziland, Uganda, Zimbabwe) require a very solid AEGR (5-10%). However, the remaining 37 countries require an extraordinary AEGR of 10% or above, including 13 African countries requiring an unlikely AEGR of 15% or above.
HRH metrics: a rapid review of the literature

The analysis of the 86 studies included in the rapid review led to the identification of three different groups (Table 1): (1) basic descriptive and inferential statistics, (2) regression analysis and (3) workforce modelling.
Workforce modelling studies included approaches to predict future health workforce requirements based on different policy scenarios, most often observed in highincome settings. Among the more sophisticated workforce modelling approaches were those based on systems dynamics [99][100][101] or need for health services [102][103][104]. Other modelling approaches included supply-based modelling (e.g. focusing on the production and inflows of health workers) [105][106][107][108], demand-based modelling (e.g. estimating future health service utilisation) [109] or both [36,[110][111][112]. Table 2 shows that most studies (and all of the modelling studies) used two or more sources of workforce data. Descriptive studies were most likely to use a combination of primary data collection and analysis of secondary sources, while workforce modelling studies nearly all used secondary analysis only.
Out of the 86 studies, 118 data sources were identified: 32 studies combined different types of data sources. National data sources (including professional councils' registers, ministries of health, national statistics bureaux and national censuses) were the most commonly used data sources across all types of methods. The studies which most often used global data sources (including the WHO Global Health Workforce Statistics Database, WHO health indicator statistics and World Bank socioeconomic indicators) were those using regression analysis. 'Sub-national' data sources (administrative databases in, e.g. provinces, districts or health facility registers) were most commonly used in studies using basic descriptive and inferential statistics. Workforce planning studies mostly relied on national-level data sources.
The literature review showed that sophisticated workforce planning approaches are being developed, particularly in high-income settings. These planning approaches look at the population need for health services and align with identified health service priorities. They estimate future health workforce requirements based on populations needs and can be adjusted over time [4,113,114]. Such approaches should be considered in all settings, particularly where resources are limited, and are feasible if HRH information systems (HRIS) or national health workforce accounts (NHWA) are already in place [2,9,115].
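As a deliberately simplified sketch of the needs-based logic described above (total service time demanded by the population divided by the productive time one worker can supply), the toy function below uses purely illustrative parameter values, not figures from the cited planning methods:

```python
def required_workforce(population: int,
                       contacts_per_capita: float,        # services needed per person per year
                       minutes_per_contact: float,
                       productive_minutes_per_worker: float) -> float:
    """Minimal needs-based staffing estimate."""
    demanded_minutes = population * contacts_per_capita * minutes_per_contact
    return demanded_minutes / productive_minutes_per_worker

# Hypothetical inputs: 5 million people, 2.5 contacts per person per year,
# 20 minutes per contact, ~90 000 productive minutes per worker per year
print(round(required_workforce(5_000_000, 2.5, 20, 90_000)))   # ~2778 workers
```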
Discussion
Although this study shows that half of the Countdown countries for which data are available have seen an increase in SHP density and 60% in SHP numbers since 2004, most remain affected by critical needs-based shortages. This situation has hindered the achievement of the MDGs [116], and the fact that so many countries have fewer than the 44.5 SHPs per 10 000 population needed to deliver on the health-related SDGs will negatively affect progress towards these goals. The demand for high, sustained and equitable coverage with proven life-saving interventions will continue to rise especially in sub-Saharan Africa, a challenge compounded by its significant population growth. In many countries, the required scale-up of SHPs may be unrealistic given the resources available and the present capacity of production of qualified health workers. On the basis of this study and similar analyses [117], it is unlikely that many low-and middle-income countries will be able to address effectively the shortage of health workers without significant additional investments. This will require strategies to mobilise additional resources and funding mechanisms that require long-term strategic planning exercises and a focus on cost-effective primary care delivery models [113]. The increasing diversity in the types of health worker (e.g. by reviewing the scopes of practice of certain cadres, for example expanding the functions of nurses, introducing new cadres such as community-based and mid-level health practitioners, changing the skill mix of cadres placed closer to communities) can be an effective way to make services available, accessible and acceptable and could represent a sustainable strategy to improve health outcomes in some countries. There is also emerging evidence that a more diverse skill mix can represent a cost-effective policy option in low-income settings [118].
Despite the detailed analysis of the HRH, due to data limitations, this study was unable to go beyond descriptive analysis using density as the main HRH measure. The data published in the Global Health Workforce Statistics database are mined from multiple sources and vary within and across countries. The database is largely composed of data from national administrative sources which may be less rigorous on standardised definitions and occupational classifications, as opposed to data collected for differential statistical analysis. It is also noted that national occupation titles and classifications change over time within a country and across countries, posing a challenge to the interpretation of any trend analysis such as the one described here. By and large, administrative sources are confined to the public sector, so the growing private sector in many countries is commonly under- or unrepresented. Hence, there are limitations to HRH data availability and in some instances quality, and far less emphasis on the other dimensions of effective coverage: accessibility, acceptability and quality [119]. Presently, the calculated AEGRs offer the most that can be gleaned from the available data, and unfortunately little to no data are reliably and representatively available on the determinants of HRH changes in these countries.
Even if a country has sufficient numbers of health workers, health outcomes will only improve if attention is paid to the other dimensions of effective coverage at sub-national levels because people may be prevented from using services due to geographical, financial or other barriers [120] or may choose not to use services due to concerns over acceptability or quality [121].
HRH data are multi-sectoral and current mechanisms to collect and collate health workforce data routinely do not always include the private and NGO sectors. In many countries, the private sector meets a high proportion of the demand for health services [122], so if it is excluded, workforce planning is highly compromised and biased. More critically, accurate data on the number of health workers who are trained using public funds and go on to work in the private sector would enable governments to better manage fiscal resources and the extent of 'internal brain drain'.
The health-related targets of the SDGs emphasise the need for UHC and therefore equity of access to health care [123]. The lack of standardised, disaggregated and interoperable data on the health workforce limits the capacity of countries to systematically and regularly identify gaps in health worker availability [20,124], whether these gaps relate to geography, socio-economic group, ethnic group, age, gender or to other variables. Countries cannot address unmet need unless they have reliable information about the nature, size and location of these gaps [113].
The literature review showed that some descriptive studies also focus on the distribution of the health workforce; however, these are often based on cross-sectional data collected from a variety of sources which are not always designed for this specific purpose [125]. Systems for monitoring the health workforce, such as an HRIS embedded in a health management information system (HMIS) [125], should be developed and take into account measures of accessibility, acceptability and quality as well as availability at both national and subnational levels [9]. It is also important systematically to collect and analyse information on the dynamics of the health labour market, such as production, inflows and outflows, distribution, retention and regulation and their determinants. These factors interrelate to impact the availability and quality of health services. NHWA is a WHO-led initiative which aims "to standardise the health workforce information architecture and interoperability as well as tracking HRH policy performance toward universal health coverage" by defining core indicators and data characteristics [124,126,127]. This approach has the potential to break new ground in the standardisation and systematic collection of relevant health workforce information to inform planning and policy development [115]. NHWA build upon existing HRIS in a modular fashion which will be supported by a global digital tool [128]. It is anticipated that, as the implementation of NHWA develops, a standard set of indicators will emerge that will allow more nuanced monitoring of the availability and efficacy of the health workforce [115].
In some countries, concerted investments are needed to professionalise and institutionalise health workforce planning and management. The rapid review shows that innovative approaches are being developed for workforce planning [129] which are less labour-intensive than traditional methods and therefore may represent an opportunity for low-and middle-income countries to introduce systems which are less costly to implement and maintain.
The paper has identified the main sources of HRH data: we recognise that there are limitations in the accuracy and completeness of some of data that can be accessed from these sources and that not all sources have the same strengths and weaknesses as a source of HRH data. In particular, survey-based primary sources and secondary sources may be at risk of being incomplete, out of date or inaccurate due to a lack of full understanding on the part of the researchers, or a focus on 'what is there' in terms of data, rather than 'what is needed'. It should also be noted that in some cases 'official' sources may also fall short of complete accuracy, and if these are then used as the basis for input into international databases, then the risk is compounded. International agencies recognise this risk and devote time and effort to clarifying data returns with the original source within the country, but this is not foolproof. Using a coherent approach to data reporting and analysis based on a template of core indictors as proposed by the implementation of NHWA would gradually eliminate such data errors and shortfalls.
Last but not least, the confinement of reported HRH statistics to SHPs has to be overcome. For example, data are commonly requested on community health workers (CHWs), who have recently been recognised as a cost-effective cadre in delivering certain health services [118], and some countries have chosen to deploy significant numbers of CHWs as a response to the shortage of SHPs, which is not reflected in these analyses. This situation is in part a result of poor reporting of CHW numbers in countries' official statistics, but mainly due to the lack of standardised definitions of CHWs in terms of training, skills and functions. Until these issues are properly addressed, any global monitoring of those cadres will remain flawed.
The rapid literature review methodology is not as clearly developed as the one established for systematic reviews, so the review may not have been fully comprehensive. In addition, the search terms used are not exhaustive, and the review was conducted by a single researcher, which could have introduced a selection bias. Therefore the findings of the literature review require a careful interpretation.
Conclusions
The achievement of health-related SDGs remains conditional on the existence of a sufficient health workforce that is well-planned, deployed and appropriately managed and supported to meet population needs. The skill mix, composition and efficiency of such a workforce can only be determined accurately using high-quality and comprehensive data. In many countries, and especially low- and middle-income countries, such data are close to absent. This study adds to the growing body of knowledge on health workforce trends and on shortcomings of existing health workforce data (see, for example, Gupta et al. [130]) by (1) exploring the current need for SHP in maternal and newborn health in Countdown countries in 2015 and estimating the necessary growth to meet the HRH requirements to achieve UHC, and (2) highlighting the limitations of the current HRH data sources, HRH metrics and methods of analysis of HRH data. The paper explains the need for a harmonised, global approach to strengthen health workforce knowledge and the evidence base. The Countdown to 2015 collaboration has now evolved into the 'Countdown to 2030 for Reproductive, Maternal, Newborn, Child, and Adolescent Health and Nutrition' initiative, which emphasises the need to build, beyond 2015, a solid foundation of baseline data that can be used to track progress and back up the accountability rhetoric with real resources to generate sound data [131]. A critical dimension of this relates to the health workforce and the implementation of NHWA.
Additional file
Additional file 1: Table S1.
Research on the Integration of System and Emotion Management in Enterprise Management in the Era of Artificial Intelligence
With the rapid growth of AI technology in the field of enterprise management, and especially in the era of information technology, human efficiency in transmitting and processing information continues to improve. However, because traditional ways of thinking remain partly self-contained and because emotional factors influence human cognition, this article uses intelligent, automated and information-based means to improve manual operation, applies Artificial Intelligence (AI) technology to enterprise management, integrates system and emotion management, and designs corresponding management models. The running effect of the model is then tested. The test results show that the quality control defect rate can be controlled within 2%, satisfaction with humanistic care reaches 94%, and satisfaction with emotional value reaches 96%. This shows that service quality combining system and emotion has been correspondingly improved.
Introduction
With the rapid growth of AI, enterprise management is facing many new problems, including high employee turnover and high staff mobility. To solve these problems, it is necessary to conduct in-depth research on the integration of the enterprise system and emotion management, so as to improve employees' work efficiency and loyalty and encourage them to better play their role in the organization. This not only provides a good environment and basic guarantee for China's scientific and technological innovation, but also lays a theoretical foundation for improving enterprise competitiveness, and has important practical value.
With the growth of AI technology, there are many new types of problems, such as information security and data leakage, which have a serious impact on human society. To deal with these problems, this paper needs to establish a new organizational structure that can handle the complex and ever-changing business management system and emotional culture system. In this process, it is also necessary to fully understand the relationship between people and machines, and to gain an in-depth understanding of humanized design methods, in order to better manage employees, improve efficiency and respond to various emergencies, thereby promoting the innovation of the enterprise management system in the era of intelligence.
The innovation of this paper is mainly reflected in two aspects. First, the research object is targeted: in the era of AI, the integration of the enterprise management system and emotion management is optimized, which improves the original corporate culture and organizational structure. Applying the scientific and technological achievements of the new era to practical work solves practical problems, while also providing reference and practical experience for related theories in other fields, which has important practical value for promoting the construction of China as a science and technology innovation country. Second, this paper integrates information technology into the organizational management system. At present, many large companies in China have established information system platforms based on Internet technology to improve operational efficiency and optimize business processes. However, in the era of AI, the organizational structure of enterprises is also changing and improving. Therefore, from the perspectives of the management system, emotional culture and incentive mechanisms, this paper proposes to combine information technology with the organizational management system to further improve employees' work efficiency and loyalty, and to promote the innovation of the enterprise management system.
Related Work
With the rapid growth of AI technology, its understanding and application are deepening. In the context of the rapid growth of information technology, intelligent tools have been produced in large numbers, and the emotional and psychological needs involved in business management have become crucial. However, most domestic companies have not yet established a perfect and sound humanistic care system and employee incentive mechanism, and have not carried out relevant research work. In contrast, some developed countries pay attention to using data mining technology, machine learning theory and AI to achieve an effective combination with personalized customer service. Vladimir Tsyganov's research explored the relationship between social and political stability, voters' emotional expectations and information management [1]. The study of Jorge Fernandez Herrero et al. introduced the steps of applying AI emotional expression recognition software to emotion management in an educational context [2]. Jose Arias-Perez et al.'s research focuses on organizational emotional capacity and the absorptive capacity orientation of competitors to stimulate the open innovation process [3]. Zahra Sarhadi et al. assessed the impact of knowledge management and emotional intelligence on the work efficiency of Iranian librarians [4]. Satyanarayana Parayitam et al. explored the relationship between role conflict and organizational performance in India, and studied how knowledge management and emotional exhaustion play a moderating role [5]. Yong Liao and Rui Kong analyzed the complexity of IoT RFID in the management of fast fashion enterprises [6]. Faizan Ahmed Khan et al. introduced the process discovery and improvement of enterprise management systems [7]. Jannis Beese et al. studied the impact of enterprise architecture management on the complexity of information system architecture [8]. Raphael David Schilling et al. explored the strategic alignment of enterprise architecture management and how a combination of control mechanisms tracked the ten-year corporate transformation of Deutsche Commerzbank [9]. Islam O. Sulumov et al. studied the problems of enterprise human resource management during the growth of new forms of employment [10]. To sum up, in enterprise management, emotion management mainly exerts influence in the following three aspects: first, it makes integrated analysis and prediction of information resources such as employees, customers and suppliers; second, it establishes a performance management system based on a knowledge sharing platform through data mining technology; third, it realizes incentives and constraints through the performance feedback mechanism.
AI Technology
AI is a new science and technology, built on computer technology, that processes large amounts of complex and abstract information in order to recognize and judge things. To a certain extent, AI can simulate the way human intelligence thinks and can analyze data automatically. It also uses machine-readable language to express the problems and logical relations involved in the cognitive process. AI grew out of research on the human brain: through computers, information processing and related intelligent tools it simulates and perceives the way the brain works and responds according to its own cognitive capability, which gives the technology a high degree of intelligence and versatility.

In the era of AI, enterprise management needs to take different types of emotional needs into account in order to provide humane care and guidance; it also needs to analyze employees' emotional states and behavior effectively, so that enterprises can design suitable, targeted incentive mechanisms that promote employees' enthusiasm for work. Based on computer technology, AI has made clear progress in information processing, automatic control and machine learning [11][12]. Decision-making requires large volumes of complex and highly accurate data; in a radar signal recognition system, for example, a large amount of data must be collected, and sensor technology must process and predict from this complex, voluminous, varied and unstructured information, a task to which AI is well suited.

In enterprise management supported by AI, the integration of system (institutional) and emotion management is of great significance: it combines institutional norms with emotional factors to improve organizational effectiveness and employee satisfaction. The relationship between system implementation efficiency X and the emotional factor Y can be expressed as

X = a·Y + b·Z

where a and b are weight parameters and Z represents other factors that may affect the efficiency of system implementation. Applying this formula to a series of data samples yields specific values for the weight parameters, and adjusting the parameters locates the best balance point between institutional and emotion management. A comprehensive index of the degree of institutional normality and the influence of emotion management can be measured as

M = c·A + d·B + e·C

where A represents institutional normality, B represents emotion-management factors, C represents other possibly relevant factors, and c, d and e are weight parameters. Statistical analysis of the data determines the best weights, and the comprehensive index M evaluates the integration effect of institutional and emotional management. To assess the degree of innovation in AI applications, an innovation index can be computed as

I = f·D + g·E + h·F

where D represents the data-analysis effect of AI, E represents the emotional-intelligence factor, F represents other innovation factors, and f, g and h are weight parameters. Quantifying and analyzing the relevant data yields the innovation index I, which measures the innovation effect of AI on institutional and emotional management in enterprise management (a short computational sketch of these three indices follows this passage).

Artificial neurons generate a kind of "memory" by constantly learning new knowledge, thereby linking the artificial network with the human brain and automatically regulating its information-processing capacity. Such a neural network is composed of nonlinear functions built from large amounts of repeated data, and it can maintain normal operation, control complex behaviors and achieve a degree of self-cognition in uncertain environments [13][14]. In the era of AI, business management is human-centric, treating people as the core element of machines and information-processing systems; as science and technology develop, humanization has become an important direction. Intelligent robot technology gives users a personalized interactive environment in which to learn, realize themselves, and contact, communicate and interact with the outside world. In this paper, intelligent robots perform decision analysis and behavior control by sensing other people's instructions; they can also bring personal emotions into online platforms, achieving emotional regulation and expanding interpersonal relationships and social circles.
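As an illustration only, the following minimal Python sketch computes the three weighted-sum indices defined above. The weight values and sample scores are hypothetical placeholders, not data from this study.

# Illustrative computation of the three weighted-sum indices described above.
# All weights and sample scores are hypothetical placeholders.

def implementation_efficiency(y_emotion, z_other, a=0.6, b=0.4):
    """X = a*Y + b*Z: efficiency of system implementation."""
    return a * y_emotion + b * z_other

def integration_index(a_norm, b_emotion, c_other, c=0.5, d=0.3, e=0.2):
    """M = c*A + d*B + e*C: comprehensive institution/emotion integration index."""
    return c * a_norm + d * b_emotion + e * c_other

def innovation_index(d_ai, e_eq, f_other, f=0.5, g=0.3, h=0.2):
    """I = f*D + g*E + h*F: innovation effect of AI on management."""
    return f * d_ai + g * e_eq + h * f_other

if __name__ == "__main__":
    # Scores are assumed to be normalized to [0, 1].
    print("X =", implementation_efficiency(0.8, 0.6))
    print("M =", integration_index(0.7, 0.9, 0.5))
    print("I =", innovation_index(0.85, 0.75, 0.6))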
Integration Model of Institutional and Emotional Management in Enterprise Management
In enterprise management, the institutional system is a core part of the organization, while emotion management plays a supporting, service role. Integrating the two requires a sound and effective communication mechanism and exchange platform to coordinate work. It also requires raising employees' professional ethics and psychological quality through training and other means, and using incentive policies to encourage emotional expression and strengthen emotional investment, so that employees' spiritual needs are met and the goals of corporate culture building are achieved [15][16]. From another perspective, integrating system and emotion calls for a humanized management system: as the enterprise grows, employees are trained continuously so that they can fit their positions and create value for the company, which is itself a response to human needs. Coordinating organizational goals with individual behavior requires good communication channels and a well-designed incentive mechanism that raises members' enthusiasm; it also requires stronger team building and firmer implementation of the management system, so that managers can bring their abilities and influence fully into play. In this context, "system" and "emotion" refer to the behavior and attitudes employees display at work and to how they coordinate their relationships with tasks and with other people. Figure 1 shows the integration model of institutional and emotional management.
(Figure 1 components: business data, emotional data platform, and the integration of institutional and emotional data.)

A company needs to establish a system that is people-oriented, meets human needs and has its own distinctive features. Starting from the mutual growth of emotions and employees, a complete institutional system can then be built to ensure that emotion plays an effective role [17][18]. In business management, emotion management refers to the two-way interaction between employees and the organization; when the two sides communicate well, the resulting interaction increases trust between managers and the managed, improves work efficiency, supports shared growth goals, lifts overall company performance, and promotes social harmony and stability. Because different employees have different emotional needs, an effective incentive model should be built by combining AI technology with management concepts. In system construction and emotion management, putting people at the center is the key, and humanization, scientific rigor and innovation are the basic preconditions. This paper therefore treats corporate culture as a form of humanistic care that meets employees' inner needs and stimulates their enthusiasm for work, so as to create maximum benefit and value for the enterprise.
Testing the Application Effect of AI in Enterprise Management
Testing the effect of AI technology in enterprise management involves the following main steps. First, the data are preprocessed: they are analyzed to decide which categories need statistics, and the information is converted into a format suitable for decision makers. Second, the model, parameter base and database are established; the computer extracts sample features collected from the large, complex system and generates the corresponding rule base, which supports modeling and predictive analysis. Applying intelligent technology to management activities helps managers quickly understand employees' working status and needs, and spot and handle problems in time by collecting, organizing and analyzing data [19][20]. Among these techniques, artificial neural networks are intelligent information models built on the cognitive system of the human brain. They can automatically capture complex behavior patterns and knowledge structures arising in human thinking, and can relate the decision-support function to emotion management, so as to offer help or solutions for the emotional and mental problems that may arise in employees' work. The original database is processed and screened effectively with intelligent technology and network communication. This paper conducts five rounds of tests on the model; Table 1 lists the test hardware environment, and a minimal pipeline sketch is given below.
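The paper does not publish its implementation. As a rough, hypothetical illustration of the preprocessing, modeling and predictive-analysis steps described above, the sketch below uses scikit-learn on fictitious employee data; the file name, column names, target variable and model choice are assumptions, not details taken from the study.

# Hypothetical sketch of the test pipeline: preprocess -> model -> predict.
# File name, columns, target and model choice are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("employee_records.csv")                 # hypothetical data file
numeric = ["workload", "overtime_hours", "tenure_years"]
categorical = ["department", "role"]
X, y = df[numeric + categorical], df["emotional_state"]  # e.g. "stable" / "at_risk"

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),                          # convert to model-ready form
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
model = Pipeline([
    ("prep", preprocess),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_train, y_train)                                      # modeling
print("held-out accuracy:", model.score(X_test, y_test))         # predictive analysis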
By sorting and extracting large amounts of information, useful data can be produced for decision makers, and qualitative or semi-qualitative attributes can be converted into quantitative, digital form to support decisions, helping managers choose management strategies that match the severity of unexpected events. The quality-control defect-rate test looks for root causes by analyzing errors in the production process and proposes corresponding solutions to improve the enterprise's management level and service efficiency. Within quality-control technology, the process capability index is particularly important. Under the traditional cost-management mode, enterprises rely mainly on after-the-fact inspection and periodic sampling for product inspection and production management, whereas intelligent technology enables pre-control and in-process supervision. After intelligent optimization of the management model, the data in Figure 2 show that the quality-control defect rate in this paper can be kept within 2%.
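The paper cites the process capability index without defining it; for reference, the standard Cp/Cpk definitions and a simple defect-rate check can be computed as in the sketch below, where the specification limits and the simulated measurements are hypothetical.

# Standard process capability indices and a defect-rate check.
# Specification limits and the simulated sample are hypothetical illustrations.
import numpy as np

def process_capability(samples, lsl, usl):
    """Return (Cp, Cpk) for a measured characteristic with spec limits [lsl, usl]."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min((usl - mu) / (3.0 * sigma), (mu - lsl) / (3.0 * sigma))
    return cp, cpk

def defect_rate(samples, lsl, usl):
    """Fraction of items falling outside the specification limits."""
    samples = np.asarray(samples)
    return float(np.mean((samples < lsl) | (samples > usl)))

rng = np.random.default_rng(0)
measurements = rng.normal(loc=10.0, scale=0.05, size=1000)      # simulated production data
print("Cp, Cpk:", process_capability(measurements, lsl=9.85, usl=10.15))
print("defect rate:", defect_rate(measurements, 9.85, 10.15))   # target: below 2%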
Figure 3: Satisfaction with customer service

Customer service satisfaction is the consumers' overall evaluation and recognition formed after their psychological, material and emotional needs have been met and the product's quality and performance have been judged. In the era of AI, the demands on employees' service level are higher: employees need not only rich work experience but also solid communication skills, so enterprises should establish a sound, reasonable and effective customer service and management system to raise customer satisfaction. According to the customer service satisfaction survey in Figure 3, the highest satisfaction with humanistic care is 94% and the highest satisfaction with emotional value is 96%, showing that service quality improves when system and emotion are combined.
Conclusion
As AI advances, emotion management is increasingly being introduced into business management, and how to build a sound emotion-management system has become a focus of current research. This paper analyzes the problems in employee emotion management at the current growth stage of Chinese enterprises and their causes, and puts forward targeted suggestions and countermeasures, offering a reference for improving employees' work efficiency, advancing the company's strategic goals and raising customer satisfaction. The study also gives theoretical support to scholars working on improving enterprise management systems in the era of AI. It has, however, the following shortcomings. When analyzing how enterprise management systems and emotion management fuse in the era of AI, it considers only the influence of AI technology on the structure and function of traditional companies and neglects the internal mechanism. It also does not study employees' differing needs, motivations and individual characteristics from the employees' own perspective, nor tailor measures to specific situations, so its practical applicability may be limited. Future research should strengthen data-acquisition methods and techniques to improve the credibility of the empirical analysis, examine in depth the mechanism by which enterprise management systems and emotion management integrate in the era of AI so that theory translates better into practice, and pay closer attention to employees' needs and individual characteristics, strengthening communication and interaction with them to achieve more effective emotion management.
Figure 1: The fusion model of system and emotion management
Figure 2: Defect rate of quality control
Table 1: Test specification
Cavity exciton-polaritons in two-dimensional semiconductors from first principles
Two-dimensional (2D) semiconducting microcavities, where exciton-polaritons can be formed, constitute a promising setup for exploring and manipulating various regimes of light-matter interaction. Here, the coupling between 2D excitons and metallic cavity photons is studied by using a first-principles propagator technique. The strength of the exciton-photon coupling is characterised by its Rabi splitting into two exciton-polaritons, which can be tuned by the cavity thickness. A maximum splitting of 128 meV is achieved in a phosphorene cavity, while a remarkable value of about 440 meV is predicted in a monolayer hBN device. The obtained Rabi splittings in a WS2 microcavity are in excellent agreement with recent experiments. The present methodology can aid in predicting and proposing potential setups for trapping robust 2D exciton-polariton condensates.
Despite this enormous interest in exciton-polaritons and semiconductor microcavity devices, complementary microscopic theories able to scrutinize the cavity photon-exciton coupling at a quantitative and predictive level are still rare. Moreover, the majority of microscopic descriptions rely on simple model Hamiltonians for exciton-photon interactions in a microcavity [34][35][36][37][38][39]. Recently, a more rigorous ab initio theoretical description of exciton-polaritons in a TMD microcavity was provided within the framework of the quantum-electrodynamical Bethe-Salpeter equation [40], where the excitons are calculated from the first-principles Bethe-Salpeter equation and the electromagnetic field is described by quantized photons; however, the coupling strength between excitons and photons is left as an arbitrary parameter. That study showed how excitonic optical activity and energetic ordering can be controlled via cavity size, light-matter coupling strength, and dielectric environment.
Here, we present a fully quantitative theory of 2D exciton-polaritons embedded in a plasmonic microcavity that is able to analyze and predict light-matter coupling strengths for various cavity settings. We study cavity exciton-polaritons in three prototypical two-dimensional single-layer semiconductors, i.e., single-layer black phosphorus or phosphorene (P4), WS2, and hexagonal boron nitride (hBN). The results show a clear Rabi splitting between the 2D exciton and the cavity photon modes, as well as a high degree of tunability of the Rabi (light-matter) coupling Ω as a function of microcavity thickness. For WS2, in the usual experimental setup with a cavity size around d ∼ 1 µm, we obtain splittings of Ω ∼ 42 meV and Ω ∼ 64 meV for the principal and second cavity modes, in line with experiments [25,27]. In all cases, larger coupling strengths Ω are found for stronger photon confinement (d < 1 µm) as well as for higher cavity modes n. Interestingly, the ultraviolet (UV) exciton in hBN shows a very strong exciton-cavity photon coupling of about 440 meV and a possibility of Bose-Einstein condensation.
In this work both excitons and photons are described by bosonic propagators, σ and Γ respectively, which are derived from first principles. The 2D crystal optical conductivity σ is calculated using the ab initio RPA+ladder method [41], and the propagator of cavity photons Γ is obtained by solving Maxwell's equations for a planar cavity described by local dielectric functions (see Supplemental Material [42]). The exciton-photon coupling is achieved by dressing the cavity-photon propagator Γ with excitons at the RPA level, so that the obtained results are directly comparable with experiments. As illustrated in Fig. 1(a), the microcavity device consists of a substrate, a tip and a dielectric medium in between, described by local dielectric functions ε_+, ε_− and ε_0, respectively. The 2D semiconducting crystal, defined by its optical conductivity σ_µ(ω), is immersed in the dielectric medium at a height z_0 relative to the substrate. The substrate occupies the region z < 0, the tip occupies z > d, and the dielectric medium occupies 0 < z < d. In such a semiconducting microcavity the coupling between the exciton and the cavity photon is expected to split the exciton-polariton into lower and upper polariton branches (LPB and UPB), which we denote ω_n^− and ω_n^+, respectively [see Fig. 1]. The quantity from which we extract the information about the electromagnetic modes of the microcavity is the electric field propagator E_µν which, by definition [43], propagates the electric field produced by a point oscillating dipole p_0 e^{−iωt}, i.e., E(ω) = E(ω) p_0, with the propagator acting on the dipole to give the field. Assuming that the 2D crystal, substrate and tip satisfy planar symmetry (in the x−y plane), the propagator E in the z = z_0 plane satisfies a matrix equation, illustrated by the Feynman diagrams in Fig. 2(a) (see also Sec. S1.A in Ref. [42]). Here Γ = Γ_0 + Γ_sc represents the propagator of the electric field in the absence of the 2D crystal, i.e., when σ = 0. The propagator Γ_0 represents the "free" electric field, while the propagator of the scattered electric field Γ_sc accounts for multiple reflections at the microcavity interfaces, as illustrated in Fig. 2(b). To simplify the interpretation of the results we take the dielectric medium to be vacuum (ε_0 = 1) and assume that the tip and substrate are made of the same material (ε_− = ε_+ = ε). In order to support well-defined cavity modes, these materials should be highly reflective in the exciton frequency region ω ≈ ω_ex, which is satisfied if ω_ex < ω_p, where ω_p is the bulk plasmon frequency. For the P4 and WS2 monolayers, whose exciton energies are ω_ex < 3.0 eV, we choose silver (ω_p ≈ 3.6 eV) for the substrate and tip. On the other hand, for single-layer hBN, whose exciton energy is ω_ex = 5.67 eV, we choose aluminium (ω_p ≈ 15 eV). Both the silver and aluminium macroscopic dielectric functions ε(ω) are likewise determined from first principles (see Ref. [42]).
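The matrix equation itself is given in the Supplemental Material rather than here; as an assumption consistent with the screened conductivity σ^scr = [1 − Γσ]^{-1} σ quoted below, the dressed field propagator would obey a Dyson-type relation of the form

E = Γ + Γ σ E,   i.e.   E = [1 − Γσ]^{-1} Γ,

where Γ is the bare cavity-field propagator and σ the 2D conductivity; this is offered only as an interpretive sketch, not as the exact expression used by the authors.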
Figures 3(a), 3(b), and 3(c) show the modifications of the n = 1 cavity mode intensity after single-layer P4 is inserted in the middle, z_0 = d/2 (the n = 1 antinodal plane), of the silver cavity, for cavity sizes d = 375 nm, d = 400 nm and d = 425 nm, respectively. White and turquoise dotted lines denote the P4 exciton and the unperturbed n = 1 cavity mode, respectively. For d = 375 nm, just before the n = 1 cavity mode crosses the exciton, a significant part of the n = 1 mode spectral weight is transferred below the exciton energy. Upon increasing the cavity size, i.e., for d = 400 nm and d = 425 nm, the exciton crosses the n = 1 mode, which weakens the intensity and opens a band gap in the intersection area. This behaviour enables the creation of an exciton-polariton condensate, as experimentally verified in Refs. [6,[13][14][15]29]. By changing the cavity thickness, the exciton can also interact with the higher cavity modes. Figure 3(d) shows the modification of the n = 2 mode intensity for a cavity thickness of d = 850 nm with P4 located at z_0 = d/4 (the n = 2 antinodal plane). The exciton significantly weakens the intensity of the n = 2 mode in the intersection area; however, here the avoided-crossing behaviour is less noticeable than for the exciton coupled to the first cavity mode.
The dispersion relations of the exciton-polaritons ω_n^− and ω_n^+ (hybridised cavity photon-exciton modes), such as the one shown in Fig. 4(a), can be precisely determined by following the split maxima in the induced current j_µ = σ_µ^scr E_µ driven by an external (bare) field E_µ e^{−iωt}, where the screened optical conductivity is σ_µ^scr = [1 − Γσ]^{-1}_µ σ_µ. The inset of Fig. 4(b) shows Re σ_x^scr before (brown dashed) and after (solid magenta) the P4 layer is inserted in the middle of a cavity of thickness d = 400 nm. The splitting of the exciton ω_ex into the exciton-polaritons ω_1^− and ω_1^+ can be clearly seen. The exciton-photon binding strength can be quantified by the Rabi splitting, defined as the difference Ω_n = ω_n^+ − ω_n^− at the wave vector Q for which the bare cavity mode n = 1, 2, 3, ... crosses the exciton ω_ex. Figure 4(a) shows the dispersion relations of the polaritons ω_1^− and ω_1^+ obtained by following the split maxima in Re σ_x^scr for different wave vectors Q_x, with d = 400 nm and z_0 = d/2. The clear anticrossing behaviour and Rabi splitting of Ω_1 = 123 meV indicate a strong interaction between the exciton and the n = 1 cavity photon. Red circles, yellow squares and green triangles in Fig. 4(b) show the Rabi splittings Ω_n for n = 1, n = 2 and n = 3, respectively, versus cavity thickness d. The maximum Rabi splitting of the nth mode, Ω_n^max, is achieved when z_0 = d/2 and d is chosen so that the nth mode just starts to cross the exciton energy ω_ex. All three modes show strong coupling with the exciton, and the decrease of the splitting as n increases confirms the confinement hypothesis: as n increases, the cavity photon mode crosses the exciton at larger d, so the photon becomes less confined and the coupling is reduced. Thus, the coupling is stronger the smaller the thickness d at which the crossing between the nth mode and the exciton occurs.
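The first-principles propagator calculation is not reproduced here. As a rough illustration of how an anticrossing of this kind arises, the following Python sketch diagonalizes the standard two-coupled-oscillator model of a cavity photon and an exciton; the parameter values are assumed for illustration (chosen so that the splitting at resonance is 123 meV), and the linear cavity dispersion is a toy model, not the result of this paper.

# Toy two-coupled-oscillator model of exciton-photon anticrossing.
# Parameters are illustrative and not taken from the first-principles calculation.
import numpy as np

def polariton_branches(omega_cav, omega_ex, g):
    """Eigenvalues of H = [[omega_cav, g], [g, omega_ex]]: (lower, upper) branch."""
    mean = 0.5 * (omega_cav + omega_ex)
    delta = 0.5 * (omega_cav - omega_ex)
    split = np.sqrt(delta**2 + g**2)
    return mean - split, mean + split

omega_ex = 2.7                     # exciton energy (eV), assumed
g = 0.0615                         # coupling (eV); splitting at resonance = 2g = 123 meV
Q = np.linspace(0.5, 1.5, 201)     # wave vector, arbitrary units
omega_cav = 2.7 * Q                # toy linear cavity-photon dispersion crossing the exciton

lower, upper = polariton_branches(omega_cav, omega_ex, g)
i_res = np.argmin(np.abs(omega_cav - omega_ex))              # resonance point
print("Rabi splitting at resonance (meV):",
      1000.0 * (upper[i_res] - lower[i_res]))                # ~123 meV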
The above criterion is met by excitons with higher excitation energy, such as the UV exciton in the hBN single layer. Since the cavity should be highly reflective in the same UV frequency region (i.e., ω_p > ω_ex), an appropriate cavity for the hBN layer can be made of aluminium, with ω_p ≈ 15 eV (Fig. 5). The 2D exciton-polaritons are experimentally studied mostly in various TMDs, where measured Rabi splittings of Ω = 46 meV, Ω = 26 meV, and Ω = 20 meV are found in MoS2 [25], WSe2 [28], and MoSe2 [26]. For WS2 the experimentally measured splittings are around 20-70 meV for d > 1 µm, depending on the precise cavity size [27]. In Fig. 5(b) we show the modification of the silver n = 1 cavity mode intensity when the WS2 monolayer is inserted in the middle of a microcavity of thickness d = 260 nm. The unperturbed n = 1 mode as well as the A and B excitons of bare WS2 are also indicated by dotted lines. Both excitons significantly perturb the n = 1 mode, giving Rabi splittings of Ω_1^A = 117 meV and Ω_1^B = 103 meV. For comparison, Fig. 5(c) shows the modification of the n = 1 mode intensity when P4 is inserted in the silver microcavity under the same conditions as for the WS2 microcavity in Fig. 5(b), i.e., with the n = 1 minimum 100 meV below the exciton; the cavity thickness is d = 415 nm and z_0 = d/2. Interestingly, the achieved Rabi splitting is here also Ω_1 = 117 meV, even though according to the confinement hypothesis the A exciton, which is confined in a smaller cavity, would be expected to split more. However, the A exciton in WS2 has a smaller oscillator strength than the P4 exciton [cf. Figs. S4(a) and S5(b) in Ref. [42]], so the binding is weaker and the two effects cancel.
Finally, in Fig. 5(d) we compare the maximum exciton-polariton splittings Ω^max for the three semiconducting microcavities, summarizing the different regimes of exciton-cavity photon coupling strength in these materials. For the WS2 microcavity we additionally present results for the experimentally measured cavity size, d = 930 nm [27], at which the exciton interacts with the n = 2 cavity mode. The corresponding value of Ω_2^A = 64 meV is in excellent agreement with the experiment [27]. For the same cavity thickness, the splitting between the n = 1 cavity mode and the WS2 A exciton is Ω_1^A = 42 meV.
In summary, we have studied the interaction strengths between cavity photons and excitons in various 2D semiconducting crystals by means of a rigorous ab initio methodology. We have shown that the insertion of 2D crystals into a metallic microcavity significantly modifies the photon dispersion: for instance, band-gap opening and a Rabi splitting as high as Ω = 440 meV were obtained for the hBN cavity device, opening the possibility of experimentally realizing a robust 2D exciton-polariton condensate. Moreover, the exciton-photon interaction strongly depends on the photon confinement, which was shown to be adjustable through the cavity thickness d. The exciton-polariton splittings in the WS2 cavity device agree well with recent experiments and suggest that higher photon confinement, and hence stronger photon-matter coupling, should be achieved with decreasing cavity size. To reach this stronger binding we suggest an experimental setup consisting of a tunable submicrometre cavity (such as an AFM tip and substrate) tuned so that the principal cavity photon mode coincides with the exciton energy, e.g., as in Fig. 4(b).
The authors acknowledge financial support from European Regional Development Fund for the "QuantiXLie
The Double-Edged Sword in Pathogenic Trypanosomatids: The Pivotal Role of Mitochondria in Oxidative Stress and Bioenergetics
The pathogenic trypanosomatids Trypanosoma brucei, Trypanosoma cruzi, and Leishmania spp. are the causative agents of African trypanosomiasis, Chagas disease, and leishmaniasis, respectively. These diseases are considered to be neglected tropical illnesses that persist under conditions of poverty and are concentrated in impoverished populations in the developing world. Novel efficient and nontoxic drugs are urgently needed as substitutes for the currently limited chemotherapy. Trypanosomatids display a single mitochondrion with several peculiar features, such as the presence of different energetic and antioxidant enzymes and a specific arrangement of mitochondrial DNA (kinetoplast DNA). Due to mitochondrial differences between mammals and trypanosomatids, this organelle is an excellent candidate for drug intervention. Additionally, during the trypanosomatids' life cycle, the shape and functional plasticity of their single mitochondrion undergo profound alterations, reflecting adaptation to different environments. Under uncoupling conditions, the organelle produces high amounts of reactive oxygen species. However, the role of these species in parasite biology is still controversial, involving parasite death, cell signalling, or even proliferation. Novel perspectives on trypanosomatid-targeting chemotherapy could be developed based on better comprehension of mitochondrial oxidative regulation processes.
Trypanosomatids and Diseases
Among trypanosomatids, there are several pathogenic species: Trypanosoma brucei, the causative agent of African trypanosomiasis; Trypanosoma cruzi, of Chagas disease; and Leishmania spp., of leishmaniasis. These diseases, with high morbidity and mortality rates, affect millions of impoverished populations in the developing world, display a limited response to chemotherapy, and are classified as neglected tropical diseases by the World Health Organization [1].
Trypanosomatids exhibit most of the typical eukaryotic organelles, such as the plasma membrane, endoplasmic reticulum, and Golgi; however, some particular structures are also present. Immediately below the plasma membrane there is a structural cage of stable microtubules called subpellicular microtubules. The flagellum emerges from a flagellar pocket and presents a typical axoneme and a paraflagellar rod, structures involved in flagellar beating. The nucleus is single and maintains the integrity of its envelope throughout mitosis [2]. Glycosomes are peroxisome-like organelles exclusive to trypanosomatids, in which part of the glycolytic pathway, as well as enzymes for lipid and amino acid oxidation, is compartmentalised [3]. Another peculiar structure is the acidocalcisome, an acidic electron-dense organelle involved in polyphosphate and pyrophosphate metabolism that also serves as an ion store [4]. As will be described in Section 3.1, the mitochondrial morphology of trypanosomatids is unique, presenting a characteristic architecture. These protozoa belong to one of the earliest-diverging branches of the eukaryotic evolutionary tree that possess mitochondria, a fact reflected in the organisation of the organelle. Despite the similarities between the mitochondrial genome of trypanosomatids and that of other eukaryotes, the topology of the DNA network, together with the functionality of the maxicircles and minicircles, leads to peculiar events in the biogenesis of this organelle. Some mitochondrial genes, named cryptogenes, present an unusual structure, and their transcripts are remodelled by an RNA editing process [5].
Human African trypanosomiasis (HAT), or sleeping sickness, is caused by T. brucei and can be fatal if not treated. In 2009, after continued control efforts, the number of reported cases dropped below 10,000 for the first time in 50 years; the estimated number of cases is currently 30,000, and 70 million people are at risk of HAT [6]. The disease is transmitted by the bite of certain species of the genus Glossina (tsetse flies), found only in sub-Saharan Africa. HAT occurs in two clinical forms: a chronic form caused by T. brucei gambiense (mostly in West and Central Africa), which accounts for more than 98% of reported cases, and an acute form caused by T. brucei rhodesiense (mainly in East and South Central Africa). The acute disease (stage 1) is characterized by the presence of the parasites in the vasculature and lymphatic system. Without treatment, the parasites penetrate the blood-brain barrier and invade the central nervous system, initiating the chronic stage (stage 2), which manifests as mental disturbances, anxiety, hallucinations, slurred speech, seizures, and difficulty in walking and talking [7]. These problems can develop over many years in the gambiense form and over several months in the rhodesiense form. The type of chemotherapeutic treatment depends on the stage of the disease, that is, on the degree of central nervous system involvement and the consequent pharmacological need to breach the blood-brain barrier to reach the parasite [8]. The drugs used in the first stage are of lower toxicity and are easier to administer, with pentamidine for infections by T. b. gambiense and suramin for T. b. rhodesiense. In the second stage, T. b. rhodesiense infections are treated with melarsoprol, while T. b. gambiense infections are treated with either eflornithine or a nifurtimox/eflornithine combination therapy (NECT). However, none of these treatments is ideal. Melarsoprol is extremely toxic and shows increasing treatment failures. Eflornithine is expensive, is laborious to administer, and lacks efficacy against T. b. rhodesiense. The development of NECT reduced the number of i.v. infusions of eflornithine, but it is still not ideal, since parenteral administration is required and patients must be hospitalized for the duration of treatment.
Chagas disease is caused by the protozoan T. cruzi and affects approximately eight million individuals in Latin America, of whom 30-40% either have or will develop cardiomyopathy, digestive megasyndromes, or both [9]. Although vectorial (Triatoma infestans) and transfusional transmission have steadily declined [10], the disease can also be transmitted orally through the ingestion of contaminated food or liquids. More recently, a major concern has been the emergence of Chagas disease in nonendemic areas, such as North America and Europe, due to the immigration of infected individuals [11]. Chagas disease is characterised by two clinical phases: a short, acute phase defined by patent parasitaemia and a long, progressive chronic phase [12]. The available chemotherapy for this illness includes two nitroheterocyclic agents, nifurtimox and benznidazole, which are effective against acute infections but show poor activity in the late chronic phase, with severe side effects and limited efficacy against different parasite isolates. These drawbacks justify the urgent need to identify better drugs to treat chagasic patients [13].
Leishmaniasis is caused by different species of Leishmania, with an estimated 12 million cases worldwide; the infection is transmitted by the bite of infected female sandflies of the genera Phlebotomus (Europe, Asia, Africa) and Lutzomyia (America) [14]. In visceral leishmaniasis (VL), the causative species are Leishmania donovani and Leishmania infantum (equivalent to Leishmania chagasi in South America), with different pathologies associated with each. L. donovani causes distinct pathologies in India and Sudan, and some strains of L. infantum can cause cutaneous leishmaniasis (CL). After treatment, some L. donovani-infected patients develop a diffuse cutaneous form (DCL) named post-kala-azar dermal leishmaniasis (PKDL) [15]. CL also presents in many different forms, though most patients have limited, self-healing cutaneous lesions. Over 15 species of Leishmania cause CL in humans, including Leishmania major, Leishmania tropica, and Leishmania aethiopica in the Old World and Leishmania mexicana, Leishmania amazonensis, Leishmania braziliensis, Leishmania panamensis, and Leishmania guyanensis in the New World. Pentostam and Glucantime are first-line drugs for both VL and CL; however, they present several limitations, including severe side effects, the need for daily parenteral administration, and the development of drug resistance. Amphotericin B, normally considered a second-line drug, has become the first-line treatment for VL in Bihar (India) following the loss of effectiveness of antimonial drugs. The amphotericin B formulation AmBisome, the aminoglycoside paromomycin, and the phospholipid analogue miltefosine (oral administration) have been registered for the treatment of VL. For CL, besides antimonials, there are few proven treatments: pentamidine, amphotericin B, and miltefosine for specific types in South America, and paromomycin only as a topical formulation [16,17].
Mitochondria in Higher Eukaryotes
The mitochondrion is a membrane-bound organelle responsible for energy production and is involved in growth, differentiation, calcium homeostasis, redox balance, the stress response, and death [18,19]. The compartmentalised organisation of the mitochondrion into inner and outer membranes, intermembrane space, and matrix provides an optimal microenvironment for many other biosynthetic and catabolic pathways, such as β-oxidation, heme biosynthesis, steroidogenesis, gluconeogenesis, and amino acid metabolism [20].
Mitochondrial shape and positioning in cells are tightly regulated by fission and fusion events, and an imbalance between these events can lead to shifts in the morphology and viability of the organelle [21]. Fission is required for organelle biogenesis and for the removal of aged or damaged mitochondria through autophagy (mitophagy), allowing organelle content to be degraded or recycled. Fusion is a two-step process in which the outer and inner membranes fuse in separate events. In mammals, outer membrane fusion is controlled by the GTPases mitofusin 1 and 2 (Mfn1 and Mfn2), whereas inner membrane fusion is controlled by optic atrophy 1 (OPA1), a dynamin-like protein responsible for the maintenance of crista morphology [21].
The mitochondrial precursor proteins are synthesised in the cytosol by free ribosomes and must be imported into the organelle by translocases present in the outer and inner mitochondrial membranes [22]. Signal peptides and specific chaperones direct these precursors to the target compartment. The translocase of the outer membrane (TOM) complex is responsible for the first recognition, and the translocase of the inner membrane (TIM) complex is involved in the import of the cleavable preproteins into the organelle matrix. Additionally, the OXA complex assists the TOM machinery in the insertion of inner membrane proteins, and the sorting and assembly machinery (SAM) complex is involved in the assembly of β-barrel proteins into the outer mitochondrial membrane [23].
In response to changes in the intracellular environment caused by different stress signals, such as loss of growth factors, hypoxia, oxidative stress, and DNA damage, mitochondria become producers of excessive reactive oxygen species (ROS) and release pro-death proteins, resulting in disrupted ATP synthesis and the activation of cell death pathways [24]. The switch to apoptotic cell death is mediated by cysteine proteases named caspases, which cleave strategic substrates. Another important step in the apoptotic pathway is the permeabilisation of the outer mitochondrial membrane, leading to the release of proapoptotic proteins. During stress, both autophagy and apoptosis are activated, and enhanced mitophagy is an early response that promotes survival by removing damaged mitochondria. With increased mitochondrial injury, apoptosis becomes dominant, and inactivation of critical proteins of the autophagic pathway leads to cell death [25].
Ultrastructural Architecture and Mitochondrial Dynamics.
The most remarkable morphological difference between the mitochondria of higher eukaryotes and trypanosomatids is the number and relative volume of the organelles. Thousands of mitochondria can be detected in mammalian cells, representing nearly 20% of the total cellular volume, whereas only a single, ramified organelle is observed in the parasites [26]. This peculiar ultrastructural characteristic was confirmed in all T. cruzi developmental forms by 3D reconstruction [27,28], and the hypothesis was extended to other pathogenic trypanosomatids.
The mitochondrial distribution varies according to the parasite and its developmental form. Generally, the organelle is elongated close to the subpellicular microtubules and the plasma membrane, surrounds the entire cell, and is dilated only in a disk-shaped structure called the kinetoplast (Figure 1). The ultrastructural aspect of the kinetoplast network in T. cruzi trypomastigotes is rounded, differing from all other species and developmental stages, which present a bar shape in ultrathin sections. The morphology of the cristae and matrix is also variable, being irregularly distributed in most of the species [29,30]. The relative volume of the entire organelle depends directly on nutrient availability, reaching 12% of the protozoan volume [30]. As in other eukaryotes, the mitochondrion of trypanosomatids is very dynamic, changing its shape and function in response to the host environment, and changes in bioenergetic metabolism affect the organelle morphology [21]. As described above for other eukaryotic cells, this mitochondrial remodelling is orchestrated by fission and fusion processes and/or autophagy [31]. Despite the morphological evidence reported, the molecular mechanisms involved in the mitophagic process in protozoa are unknown [32]. However, a dynamin-like protein (DLP) has been detected in T. brucei and L. major and is related to the fission step, as in mammals [21], and to subsequent organelle segregation during mitosis [33]. To ensure correct segregation, the kDNA network is physically connected to the basal bodies through a cluster of filaments that cross the kinetoplast outer and inner membranes [5]. Furthermore, BLAST analysis indicates that DLP is highly conserved in pathogenic trypanosomatids (data not shown). To date, Mfn, the main mitochondrial fusion component, has not been detected in this protozoan family, reinforcing the 3D models of a single organelle proposed by Paulin [27].
In all eukaryotes, including trypanosomatids, a large proportion of mitochondrial proteins are encoded by nuclear genes. After translation, these molecules have to be translocated from the cytosol into the organelle by the TOM, TIM, SAM, and OXA complexes [34], although such complexes are poorly characterised in protists. In trypanosomatids, these translocases were first assessed in T. brucei, in which the essential complex TOM40 is replaced by an archaic translocase named pATOM36, responsible for at least part of the import of mitochondrial matrix proteins [35]. Moreover, tbTIM50 and tbTIM17 were described recently [36], but the exact molecular mechanisms involved in mitochondrial protein import in trypanosomatids are still unknown. Interestingly, several pieces of evidence, including data on the characterisation of the mitochondrial protein-import machinery, suggest that trypanosomatids are among the earliest-diverging eukaryotes to have mitochondria [37].
Molecular Structure and Function of the kDNA Network.
The most peculiar characteristic of trypanosomatids is the DNA organisation in the kinetoplast. In these protozoa, the mitochondrial genome consists of a complex network of interlocked DNA rings subdivided into two classes, maxicircles and minicircles, representing approximately 30% of the total cellular genome [5,30]. The kDNA composition varies depending on the species. Several thousand minicircles and a few dozen maxicircles can be observed per organelle, with only about 10% of the entire network mass composed of maxicircles [5,38].
Maxicircles correspond to mitochondrial DNA in other eukaryotes and encode several genes of respiratory chain complexes, such as cytochrome oxidase, NADH dehydrogenase, and ATP synthase subunits. However, the primary transcripts of these genes need to be processed by inserting or removing uridylate residues to create functional mRNAs [39,40]. Maxicircle transcripts have to be edited to create functional open reading frames. This editing process depends on the templates encoded by minicircles known as guide RNAs, which are responsible for nearly 60% of mRNA synthesised de novo [41]. The great variety of guide RNAs necessary to extensively edit maxicircle transcripts is a reasonable explanation for the large number of minicircle copies in comparison with the maxicircle repertoire in the kDNA network [42].
Despite the high heterogeneity of the minicircle population, a conserved region has been identified, in which the origins of replication are localised. Replication occurs specifically in the nuclear S phase and involves crucial proteins that support the process, such as polymerases, ligases, and topoisomerases [43]. In the early steps, minicircles are released from the network by topoisomerase II and replicate as free molecules. The minicircles are then closed covalently, forming a network again, but a continuous gap or nick can be observed until all molecules have replicated [41]. Another characteristic of minicircles is bent DNA structures consisting of multiple adenine sequences (5-26 bp) that participate in network organisation [39,44].
Role in Bioenergetics.
Trypanosomatid bioenergetics presents remarkable differences from that of mammalian cells, such as the compartmentalisation of several steps of glycolysis in an organelle named the glycosome and differences in the mitochondrial electron transport chain (ETC); the great majority of reports concern T. brucei [45]. Owing to their complex life cycles, trypanosomatids adapt to the environment of different hosts, which is reflected in the functional plasticity of the mitochondrion observed between the parasitic forms [3,41,46,47].
As in higher eukaryotes, mitochondrial respiration occurs via the electron transport chain, which is composed of four integral enzyme complexes in the mitochondrial inner membrane: NADH-ubiquinone oxidoreductase (complex I), succinate-ubiquinone oxidoreductase (complex II), ubiquinol-cytochrome c oxidoreductase (complex III or cytochrome bc1), and cytochrome oxidase (complex IV or cytochrome a3), with ubiquinone (coenzyme Q) and cytochrome c functioning as electron carriers between these complexes. Complexes I, III, and IV function as H+ pumps that generate a proton electrochemical gradient, which drives ATP synthesis via the reversible mitochondrial ATP synthase (complex V), coupling the processes of respiration and phosphorylation [29,48,49].
T. brucei bloodstream forms are essentially glycolytic, living in an environment with high glucose levels. In this life stage, many tricarboxylic acid (TCA) cycle enzymes and cytochromes are not expressed in the mitochondrion, affecting energy production [45,50]. However, the F0-F1 ATP synthase and consequently the mitochondrial membrane potential (MMP) are still preserved, suggesting basal uncoupled activity in the organelle [51]. The mitochondrion of insect forms is much more functional, perhaps due to the large amounts of ETC substrates in the tsetse fly midgut [46]. This hypothesis also fits T. cruzi very well: our group showed that the ETC of epimastigotes is much more efficient than that of bloodstream trypomastigotes, confirming the functional adaptation of the parasite to the availability of host substrates [47].
Among the ETC substrates, succinate plays an essential role in trypanosomatids [52,53]. In one of the most remarkable mitochondrial studies in these protozoa, Vercesi and colleagues [54] detailed the kinetics of succinate-sensitive oxygen uptake in digitonin-permeabilised T. cruzi epimastigotes and also described respiratory states 3 and 4. The oxidation of succinate by complex II leads to the transfer of electrons to complex III via ubiquinone, as occurs in higher eukaryotes. The activity of complexes II-IV was demonstrated in these protozoa in the late 1970s, but the presence of a functional complex I is still controversial [55,56].
Curiously, rotenone-independent oxygen uptake has been described in these protozoa, with phenotypic effects observed only at high concentrations of this inhibitor [57]. Although the occurrence of NADH oxidation in T. brucei mitochondria is well known, no experimental data have confirmed its participation in respiration processes, even after the prediction of 19 complex I subunits in these parasites, including subunits that are involved in redox reactions [56]. In T. cruzi and L. donovani, oxygraphic studies revealed that pharmacological inhibition or the presence of natural subunit deletions does not affect oxygen consumption [52,58]. All of these data indicate important differences in complex I subunits between trypanosomatids and other eukaryotes [56].
Interestingly, KCN, a complex IV inhibitor, does not completely abolish the respiratory rates of T. brucei, T. cruzi, and L. donovani, indicating the existence of a terminal oxidase that is an alternative to cytochrome oxidase. In T. brucei, this alternative oxidase (AOX) has been well characterised, with its three-dimensional structure solved by X-ray crystallography [59]. AOX is a diiron protein that catalyses the four-electron reduction of oxygen to water by ubiquinol. AOX plays a critical role in the bloodstream forms of African trypanosomes, and its expression and amino acid sequence are identical in HAT-causing and non-human-infective trypanosomes [60]. In trypanosomatids, an effect of salicylhydroxamic acid, an AOX inhibitor, was observed in both T. brucei and T. cruzi, suggesting a role for this oxidase in the organisms' energetic metabolism [3,60]. In contrast, no effect of this inhibitor was detected on the cyanide-insensitive respiration of L. donovani, reinforcing the idea that the exact participation of AOX remains unclear and must be further investigated in trypanosomatids [61].
Role in Oxidative Stress.
The single mitochondrion is one of the major sources of ROS in trypanosomatids, even under physiological conditions. Interestingly, these reactive species could play different roles in the parasites, involving signalling or cytotoxicity, and the cellular strategy for scavenging these species is crucial for protozoan survival [62][63][64]. Inside the parasites' organelle, the main site of ROS generation is the ETC complexes, except for T. brucei bloodstream forms. During mitochondrial respiration, part of the oxygen is reduced to superoxide anions and subsequently to hydrogen peroxide and hydroxyl radicals [65]. These species can cross the mitochondrial membranes and spread through the cytosol and other organelles, culminating in interference in biosynthetic pathways and deleterious consequences [62].
Complex I presents low NADH dehydrogenase activity, justifying the limited generation of superoxide observed in T. brucei procyclics and T. cruzi epimastigotes [58,66]. The production of superoxide by rotenone-treated L. donovani promastigotes reinforces the necessity of further studies on the biological function of complex I in these parasites. Additionally, the involvement of complex II in ROS generation has been described in parasites treated with the specific inhibitor thenoyltrifluoroacetone [67]. However, there is no doubt that the most prominent ROS source in trypanosomatids is complex III [62,66,67]. Additionally, complex IV is not an electron leakage point in the ETC in trypanosomatids or even in higher eukaryotes. Treatment with salicylhydroxamic acid (SHAM) impairs complex IV, compromising electron flow and favouring electron escape from oxygen [62]. Our group reported that T. cruzi trypomastigotes present high activity for complexes II-III and low activity for complex IV, which correlates with the high ROS amounts detected in bloodstream forms in comparison with epimastigotes [47]. It was proposed that the AOX described in T. brucei, coexisting with complex IV, could play a role in ROS scavenging by the removal of excess reducing equivalents. The inhibition of this oxidase by SHAM confirmed this hypothesis, leading to an increase in ROS production within the protozoan mitochondrion [68].
To control the ROS concentration, pathogenic trypanosomatids present mitochondrial antioxidant defences. However, several differences can be observed in relation to mammals. Among the peculiarities of the protozoan antioxidant repertoire, the presence of an iron-superoxide dismutase and a selenium-independent glutathione peroxidase stands out, as these features are described in T. brucei, T. cruzi, and several species of Leishmania [61]. Surprisingly, the role of iron-superoxide dismutases is distinct among trypanosomatids. In T. brucei, these enzymes are not essential for the survival of bloodstream trypomastigotes, most likely due to the low ROS amounts produced by the rudimentary mitochondrion of this parasitic form [69]. In contrast, T. cruzi metacyclic trypomastigotes and L. donovani amastigotes express iron-superoxide dismutase isoforms in high amounts, indicating a possible relationship between the protozoan antioxidant system and host susceptibility to the infection [70,71]. Moreover, thiol-based redox metabolism in these parasites involves a dithiol named trypanothione, formed by the conjugation of two glutathione molecules and one spermidine, and its corresponding reductase, a mitochondrial isoform of which has already been described in T. cruzi [64]. Peroxiredoxins, and especially tryparedoxin peroxidase, are also crucial to hydrogen peroxide detoxification, together with trypanothione reductase and tryparedoxin [72]. Interestingly, an increase in the expression of cytosolic and mitochondrial isoforms of tryparedoxin peroxidase in benznidazole-resistant T. cruzi was previously reported, reinforcing the importance of the antioxidant system for the infection outcome [73].
Depending on their concentration, ROS can act as signalling molecules. The detoxification of these species by pathogenic trypanosomatids represents a crucial step in the success of the host-parasite interaction because ROS production is one of the mammalian mechanisms used to control the infection [74]. Recently, Piacenza and coworkers [75] demonstrated mitochondrial redox homeostasis in T. cruzi and found that its modulation by antioxidant defences (cytosolic and mitochondrial peroxiredoxins and trypanothione synthetase) contributes to the parasite's virulence, facilitating progression of the infection to the chronic phase. Additionally, Nogueira and colleagues (2011) reported that heme-induced ROS formation favours epimastigote proliferation through the activation of calmodulin kinase II and that this phenotype is regulated by treatment with exogenous antioxidants [76]. This finding indicates that an oxidative stress stimulus is necessary for cell cycle maintenance, at least in T. cruzi, and is fundamental to better comprehension of the regulation processes involved. Figure 2 summarises mitochondrial ROS production in trypanosomatids.
Role in Cell Death.
The existence of programmed cell death (PCD) in unicellular organisms has been a much-debated subject in the last two decades, as the precise molecular mechanisms that trigger death in protozoan parasites are still poorly understood. Despite the absence of strong evidence, an altruistic hypothesis has been proposed for trypanosomatids and other protists [32]. In fact, certain typical apoptotic hallmarks have been found, especially in pathogenic trypanosomatids. However, due to the lack of certain crucial molecular events, the existence of PCD in protozoa is still unconfirmed, so the term "apoptosis-like" is more suitable [32,77]. Among PCD features, the proteolytic activity of caspases should be highlighted. These proteases have very specific substrates, and their cleavage represents a key step in the execution of apoptosis [78]. However, the orthologues of caspases present in pathogenic trypanosomatids, named metacaspases, show no involvement in cell death [79,80]. In Leishmania, metacaspases are mitochondrial, but proteolysis has not been observed in parasites under oxidative stress [79]. In T. cruzi, the overexpression of metacaspase-3 and metacaspase-5 indicates their participation in cell cycle regulation and metacyclogenesis [81]. More investigation is necessary to clarify the exact role of metacaspases in unicellular organisms.
Most of the reports examining cell death in protozoa have evaluated the involved pathways under nonphysiological conditions (physical or chemical stresses). The mitochondrion plays a central role in this process, and alterations such as mitochondrial swelling and membrane depolarisation are the most recurrent signs of cell death [29,77,[82][83][84].
As already discussed, the high ROS amounts produced by ETC impairment lead to severe deleterious effects in trypanosomatids. In this scenario, T. cruzi incubation in the presence of human sera induces important mitochondrial dysfunction and parasite death, a phenotype reverted by iron superoxide dismutase [85]. Similar results were observed in T. brucei and L. donovani after treatment with ROS inducers. Apoptotic-like features, including a loss of the MMP, were detected, and this phenotype was prevented by pretreatment with the ROS scavengers glutathione and N-acetylcysteine [67,86]. Moreover, T. brucei AOX overexpression reduces ROS generation and consequently prevents the appearance of cell death phenotypes [68,87].
Proteomic Analysis.
Due to the posttranscriptional gene regulation of trypanosomatids, high-throughput proteomics has become essential for protein expression analysis and the validation of genomic annotations [88]. The detection of nontranslated mRNA in T. cruzi also confirmed the limitation of RNA-based techniques in evaluating the protozoan's gene expression [89]. The proteomic map of pathogenic trypanosomatids has been assessed for the identification of virulence factors and stage-specific proteins and even for the characterisation of immunogenic molecules that are candidates for vaccines or diagnosis. In the last decade, subcellular proteomic studies have investigated enriched fractions of different organelles from these parasites, including the mitochondrion [88,90]. This approach increases the number of protein identifications in the desired fraction, improving the coverage of the organellar content of interest [91].
Different proteomic strategies have been employed in the investigation of the mitochondrial protein profile in trypanosomatids [90]. Mass spectrometry analysis of the mRNA editing machinery present in the mitochondria of T. brucei described 16 proteins involved in this process. The evaluation of mitochondrion-enriched fractions of T. brucei also led to the identification of several mitochondrial proteins, especially ETC multiprotein complexes, including a unique oxidoreductase complex present only in kinetoplastids [92,93]. Subsequently, many other proteins related to the TCA cycle, β-oxidation, and amino acid proteolysis were identified in procyclic but not in bloodstream trypomastigotes, reinforcing T. brucei mitochondrial plasticity [94]. In 2009, a shotgun approach was used to assess both the soluble and the hydrophobic proteomic content of the T. brucei mitochondrion [95]. This study led to the identification of 1,000 mitochondrial proteins, nearly 25% of which needed to have their function and localisation confirmed to exclude purification artefacts. More recently, label-free quantitative mass spectrometry was employed for the characterisation of the T. brucei mitochondrial outer membrane [40]. Interestingly, 82 proteins were identified, of which approximately 36% are specific to trypanosomatids but, to date, have unknown function. Knockdown assays of three of the proteins demonstrated their participation in the regulation of mitochondrial shape [40]. Additionally, proteomic characterisation of mitochondrial ribosomes was performed for procyclic forms of T. brucei, and more than 130 proteins were identified as associated with the ribosomal structure by liquid chromatography and tandem mass spectrometry (LC-MS/MS) [96].
In T. cruzi, the specific mitochondrial protein profile has not yet been investigated. Atwood and colleagues (2005) [71] performed one of the most complete studies on this parasite's proteomics, describing the protein content of different developmental stages. Using a shotgun approach, 2,784 proteins were identified, with 838 detected in all parasitic forms, and a substantial proportion carried only hypothetical annotations. Among these identifications, several mitochondrial molecules, such as antioxidant enzymes and chaperones, were described, and their expression in the different parasitic forms evidenced adaptations to host environments. A large subcellular study by Ferella and coworkers [91] reported the expression of nearly all enzymes from the TCA cycle and succinate dehydrogenase subunits in the mitochondrion-enriched fraction. It is important to mention that the described mitochondrial proteins were not identified in the large-scale study by the Atwood III group [71], reinforcing the necessity of subfractionation to increase the number of identifications in specific organelles. In parallel, differential proteomic analyses of parasites treated with drugs were performed and indicated remarkable alterations in the mitochondrial protein content, confirming previous ultrastructural evidence [82,97,98]. Mass spectrometry was employed to investigate the drug resistance-related pathways in the parasite, revealing that many mitochondrial proteins, such as chaperones, proteases, and antioxidant enzymes, are highly expressed in the resistant phenotype [99]. Recently, our group suggested, based on proteomic analysis, that the mitochondrial isoform of the gluconeogenesis-related enzyme phosphoenolpyruvate carboxykinase (gi | 1709734) is a promising drug target. The sequence differences between the parasitic and the human enzymes and their substrate specificity indicate that this molecule is a good candidate for drug intervention [100].
The profile of mitochondrial proteins in parasites of the genus Leishmania was first assessed in 2006. A two-dimensional electrophoretic analysis of mitochondrion-enriched fractions from L. infantum revealed several well-known mitochondrial proteins, and especially chaperones, whose localisation was confirmed by GFP-protein detection by fluorescence microscopy [90,101]. In L. donovani, isobaric tagging for relative and absolute protein quantitation followed by an LC-MS/MS approach supported the hypothesis that changes in energetic metabolism are directly involved in parasite differentiation, as mitochondrial proteins related to the TCA cycle and oxidative phosphorylation are modulated during the parasite's life cycle [102]. The supplementary data in Supplementary Material available online at http://dx.doi.org/10.1155/2014/614014 summarize the proteomic findings in the mitochondrial profile of pathogenic trypanosomatids.
The Organelle as a Drug Target.
The identification of a drug target in a pathogen requires that the target be either absent or at least substantially different in the host. Using metabolic systems that are very different from those of the host, parasites can adapt to the low oxygen tension present within the host animal. Most parasites do not use the oxygen available within the host to generate ATP but rather employ anaerobic metabolic pathways. Phylogenetically, trypanosomatids branched off relatively early compared with the higher eukaryotes. These organisms' cellular organisation is significantly different from that of mammalian cells, and, thus, the existence of biochemical pathways unique to these pathogens is expected [103].
The fact that kinetoplastids have a single mitochondrion, rudimentary antioxidant defences, and a set of alternative oxidases indicates that this organelle is a potential candidate for drug intervention. In addition, several metabolic pathways are common to all pathogenic trypanosomatids, so, in principle, finding a single drug that is useful against all trypanosomatid diseases is a reasonable expectation. However, to date, this has not been the case, most likely due to the diversity of surroundings inside the parasite's hosts. African trypanosomes live in the bloodstream and cerebrospinal fluid, T. cruzi lives in the cytosol of various cell types, and Leishmania spp. lives within the phagolysosomes of macrophages.
The mitochondrion represents the most recurrent target, and the intensity of the alterations in this organelle is time dependent and varies with the compound employed [30,104,105]. Numerous articles point to the mitochondrion as a drug target in trypanosomatids, primarily based on transmission electron microscopy analysis and MMP evaluation using flow cytometry [29,83,106,107]. As an example, the ultrastructural effect of a naphthoquinone on T. cruzi mitochondria can be observed in Figure 3. It is important to keep in mind, however, that induced mitochondrial alterations may be due to either a primary effect directly acting on this organelle or secondary lesions caused by a loss of cellular viability triggered by another cell component or metabolic pathway. Several other classes of compounds, such as sterol biosynthesis inhibitors (SBIs), also interfere with the ultrastructure and physiology of the mitochondria of trypanosomatids. Trypanosomatids have a strict requirement for specific endogenous sterols (ergosterol and analogues) and cannot use the supply of cholesterol present in the mammalian host. One of the characteristic ultrastructural effects of SBIs on trypanosomatids is a marked swelling of their single giant mitochondrion, correlated with the depletion of the endogenous parasite sterols, which can lead to cell lysis [108][109][110][111][112]. Epimastigotes of T. cruzi treated with ketoconazole plus the lysophospholipid analogue edelfosine also presented severe mitochondrial swelling, with a decrease in the electron density of the matrix and the appearance of concentric membranous structures inside the organelle [113]. The group of Urbina has shown that T. cruzi mitochondrial membranes, in contrast to those of vertebrate cells, are indeed rich in the parasite's specific sterols, which are probably required for their energy-transducing activities [114,115]. The mitochondrial metabolism of Leishmania spp. amastigotes and promastigotes, T. cruzi trypomastigotes and epimastigotes, and T. brucei procyclic forms is similar [53]. The inhibition of certain potential targets is associated with triggering apoptosis-like effects through MMP impairment and/or ROS production. The mitochondrial targeting of drugs may rely on free-radical production and/or calcium homeostasis [116].
Different potential targets can be identified in trypanosomatid mitochondria due to their unique characteristics in comparison with their mammalian counterpart: kDNA, topoisomerases, ETC and related enzymes, and RNA editing [30,105].
Several drugs, such as diaminobenzidine, geranylgeraniol, and vinblastine, induce kDNA disorganisation, mitochondrial swelling, and irregularly shaped kDNA [107,117]. In trypanosomatids, growing evidence supports kDNA as the primary target of aromatic diamidines [118]. Ultrastructural and flow cytometric studies have shown that aromatic diamidines and reversed amidines target the T. cruzi mitochondrion-kinetoplast complex by interference with the MMP [119,120]. In T. brucei, bloodstream forms exhibit a partial or even complete loss of kDNA, termed dyskinetoplastidy (Dk) and akinetoplastidy (Ak), respectively, which can be induced in the laboratory by DNA-binding drugs such as acriflavine or ethidium bromide [121]. In nature, most T. brucei strains contain a kinetoplast, and RNAi assays show that knockdown of kDNA replication and editing proteins is lethal to bloodstream forms [121], suggesting that the kinetoplast is a valid drug target. Moreover, Jensen and Englund [122] reported that minicircle replication is the most vulnerable target of ethidium bromide, which is still used to treat trypanosomiasis in African cattle [123]. Because the kinetoplast has no counterpart in other eukaryotes, the complex processes of kDNA replication and segregation present a potential drug target.
DNA topoisomerases are a well-studied mitochondrial target. These enzymes are involved in essential processes, such as DNA replication, transcription, recombination, and repair, and have been used as chemotherapeutic targets in bacterial diseases. DNA topoisomerases are broadly classified as type I, which cleave single-stranded DNA, and type II, which act on double-stranded DNA [124]. Two classes of drugs target topoisomerases: poisons (class I), which stabilise the DNA-enzyme complex, resulting in DNA breakdown, and catalytic inhibitors (class II), which compete with ATP for binding to the catalytic site, interfering with the enzyme's function [117,125]. Topoisomerase I purified from T. cruzi and L. donovani was found to be independent of ATP [126,127]. In T. brucei, this enzyme is composed of two subunits encoded by two genes: one for the DNA-binding domain and a second for the C-terminal catalytic domain [128]. Topoisomerase II genes have been described in T. brucei, T. cruzi, L. donovani, and L. infantum [129,130]. Interestingly, topoisomerase II from T. brucei and L. donovani exhibits both ATP-dependent and ATP-independent activities. The treatment of T. brucei, T. cruzi, and L. donovani with camptothecin (an inhibitor of eukaryotic DNA topoisomerase I) induces both nuclear and mitochondrial DNA cleavage and covalent linkage to the protein, which is consistent with the existence of drug-sensitive topoisomerase I activity in both compartments [131]. L. donovani topoisomerase is distinct from that of other eukaryotes with respect to its biological properties and sensitivity to drugs [127,132]. L. donovani promastigotes and amastigotes show different sensitivities to topoisomerase I inhibitors [133][134][135][136]. In T. cruzi, topoisomerase II is highly expressed in the replicative forms of the parasite, accounting for the trypanocidal effect of the specific inhibitors clorobiocin, novobiocin, ofloxacin, and nalidixic acid [137][138][139]. Ultrastructural alterations were also observed in L. amazonensis promastigotes treated with these inhibitors [138].
The ETC in trypanosomatids has peculiarities that make its components a promising target, given that MMP maintenance is vital for cell survival. Studies have shown that the loss of MMP induced by drugs is associated with pathogenic trypanosomatid death [67,83,140,141]. Most of the studies on the ETC as a drug target in trypanosomatids have been performed with L. donovani promastigotes. Pentamidine induces a rapid collapse of the mitochondrial inner membrane potential of L. donovani promastigotes [142]. The association of resistance to pentamidine with mitochondrial alterations was based on studies with its fluorescent analogue DB99, in which drug accumulation in the kinetoplast was observed with wild-type L. donovani but not with a resistant strain [143]. Mehta and Saha [67] observed that concurrent inhibition of respiratory chain complex II with pentamidine administration increases the cytotoxicity of the drug. Inhibitors of respiratory chain complexes I (rotenone), II (thenoyltrifluoroacetone (TTFA)), and III (antimycin A) resulted in MMP dissipation, ROS production, and the induction of apoptosis-like effects. Additionally, 4,4'-bis((tri-n-pentylphosphonium)methyl)benzophenone dibromide and sitamaquine also target complex II, causing dramatic mitochondrial compromise, including organelle swelling, a decrease in cytoplasmic ATP, ROS production, inhibition of the oxygen consumption rate, and impairment of the cell cycle in L. donovani [144,145]. Meanwhile, tafenoquine (a primaquine analogue) and miltefosine (a lysophospholipid analogue) inhibit complexes III and IV, respectively, leading to a similar phenotype [146,147].
Because AOX does not exist in hosts, this enzyme has been proposed as an innovative target for antitrypanosomatid drug development, and related attempts have been reported in the literature [148]. Ascofuranone, an antibiotic isolated from the fungus Ascochyta viciae, has been reported as effective against African trypanosomes in vitro, and ubiquinol oxidase was identified as the drug's molecular target [148,149]. It was reported that treatment with ascofuranone led to the appearance of PCD-like features in T. b. rhodesiense bloodstream forms [87].
Mitochondrial RNA editing is a vital and unique process that occurs in the mitochondria of trypanosomatids. This specificity makes RNA editing a potential target for new antiparasitic drugs. In T. brucei, the RNA editing process has been described in detail: the mRNAs encoding the cytochrome system are mainly edited in the procyclic forms, whereas the mRNAs encoding the NADH dehydrogenase complex are edited in the bloodstream forms [150]. This differential RNA editing observed in the parasite has been less studied in other trypanosomatids. Kim et al. [151] examined the differential expression of subunit II of cytochrome oxidase, but, in contrast to T. brucei, no differences were observed in the mRNA levels of this enzyme between the T. cruzi insect and mammalian stages. Furthermore, the contribution of the RNA editing process to mitochondrial functional plasticity cannot be excluded. Presently, this possibility should be considered a hypothesis, and additional studies are needed for confirmation [152]. In this context, Liang and Connell [153] employed high-throughput screening to identify specific inhibitors of RNA editing. Five compounds were identified (GW5074, mitoxantrone, NF 023, protoporphyrin IX, and D-sphingosine), which proved to be inhibitors of insertional editing. More specifically, GW5074 and protoporphyrin IX inhibited the editing process at the level of endonuclease cleavage, which begins the editing process [153]. Recently, another potential target in the RNA editing process was proposed, and inhibition of the RNA ligase KREL1 was described in T. brucei [154].
Conclusions
In the last decade, the mechanisms of action of numerous drugs have been found to involve, directly or indirectly, mitochondrial metabolism, making this organelle a promising target in the treatment of different diseases. In pathogenic trypanosomatids, the presence of a single mitochondrion, together with its peculiarities, such as the existence of AOX and unique antioxidant defences, gives the organelle a crucial role in the development of novel active compounds. Moreover, the morphological and functional plasticity of the mitochondrion during these parasites' life cycles also represents a fundamental step in protozoan adaptation to the host environment. Variation in the efficiency of the respiratory machinery could compromise the redox balance and culminate in ROS generation. Despite their well-known cytotoxic effect, the role of ROS in these protozoa is complex. Depending on the concentration, these reactive species lead to the parasites' death or participate in their cell signalling and proliferation. Thus, a better comprehension of oxidative regulation could support new perspectives on trypanosomatid-targeting chemotherapy.
|
2017-04-07T02:05:16.135Z
|
2014-03-31T00:00:00.000
|
{
"year": 2014,
"sha1": "3c1c895513e1adf82bb0c74622f8cdc7d8f1501c",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2014/614014.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "595beca2f0451c355a630badf4be316e564a5c4f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
697985
|
pes2o/s2orc
|
v3-fos-license
|
Cardioembolic Stroke: Clinical Features, Specific Cardiac Disorders and Prognosis
This article provides the reader with an overview and update of clinical features, specific cardiac disorders and prognosis of cardioembolic stroke. Cardioembolic stroke accounts for 14-30% of ischemic strokes and, in general, is a severe condition; patients with cardioembolic infarction are prone to early and long-term stroke recurrence, although recurrences may be preventable by appropriate treatment during the acute phase and strict control at follow-up. Certain clinical features are suggestive of cardioembolic infarction, including sudden onset to maximal deficit, decreased level of consciousness at onset, Wernicke's aphasia or global aphasia without hemiparesis, a Valsalva manoeuvre at the time of stroke onset, and co-occurrence of cerebral and systemic emboli. Lacunar clinical presentations, a lacunar infarct and especially multiple lacunar infarcts, make cardioembolic origin unlikely. The more common high risk cardioembolic conditions are atrial fibrillation, recent myocardial infarction, mechanical prosthetic valve, dilated myocardiopathy, and mitral rheumatic stenosis. Transthoracic and transesophageal echocardiogram can disclose structural heart diseases. Paroxysmal atrial dysrhythmia can be detected by Holter monitoring. In-hospital mortality in cardioembolic stroke (27.3%, in our series) is the highest as compared with other subtypes of cerebral infarction. In our experience, in-hospital mortality in patients with early embolic recurrence (within the first 7 days) was 77%. Patients with alcohol abuse, hypertension, valvular heart disease, nausea and vomiting, and previous cerebral infarction are at increased risk of early recurrent systemic embolization. Secondary prevention with anticoagulants should be started immediately if possible in patients at high risk for recurrent cardioembolic stroke in which contraindications, such as falls, poor compliance, uncontrolled epilepsy or gastrointestinal bleeding are absent.
INTRODUCTION
Stroke is the leading cause of disability and the second most common cause of death worldwide [1,2]. Accurate definition of the mechanism of stroke is crucial as this will guide the most effective care and therapy. Cardioembolic stroke accounts for 14-30% of all cerebral infarctions [3][4][5][6][7]. In most cases, recurrence of cardioembolism can be prevented by oral anticoagulants. Therefore, for a patient with a cerebral infarct, early confirmation of a diagnosis of cardioembolic infarction is extremely important in order to initiate anticoagulation therapy for an adequate secondary prevention [8][9][10][11][12].
Embolism from the heart to the brain results from one of three mechanisms: blood stasis and thrombus formation in an enlarged left cardiac chamber or one affected by another structural alteration (e.g., left ventricular aneurysm); release of material from an abnormal valvular surface (e.g., calcific degeneration); and abnormal passage from the venous to the arterial circulation (paradoxical embolism) [2]. Cardiac emboli can be of any size, but those arising from the cardiac chambers are often large and hence especially likely to cause severe stroke, disability and death. Cardioembolic infarction is generally the most severe ischemic stroke subtype, with a low frequency of patients who are symptom-free at hospital discharge, a high risk of early and late embolic recurrences, and a high mortality [3,6] (Fig. 1).
There is no gold standard for making the diagnosis of cardioembolic stroke. The presence of a potential major cardiac source of embolism in the absence of significant arterial disease remains the mainstay of clinical diagnosis. When cardiac and arterial disease coexist (such as atrial fibrillation and ipsilateral carotid atheroma), determining the etiology of the ischemic stroke becomes more difficult. However, in many patients, history, physical examination, and routine diagnostic tests (electrocardiogram and findings on neuroimaging studies) are sufficient to easily make the diagnosis of most presumed cardiac emboligenic conditions (e.g., atrial fibrillation, recent myocardial infarction, heart failure, prior rheumatic disease, splinter hemorrhages). An important exception is paroxysmal atrial fibrillation, which can be detected by 24-48 hour Holter monitoring immediately after stroke. Transthoracic echocardiogram can disclose structural cardiopathies (dilated cardiomyopathies, mitral stenosis and other structural ventricular diseases, and intraventricular thrombus, vegetations or tumors) and enables measurement of the left atrial size and left ventricular systolic function [1,2]. Transesophageal echocardiogram is able to study the aortic arch and ascending aorta, left atrium and left atrial appendage, interatrial septum, pulmonary veins and valve vegetations [1,3]. Transesophageal echocardiography is more likely to be helpful in young patients with stroke, stroke of unknown cause and in patients with non-lacunar stroke. Cardiac magnetic resonance imaging (MRI) and nuclear cardiology studies (assessment of myocardial perfusion and analysis of ventricular function) may be useful in selected patients.
CLINICAL FEATURES
Clinical features that support the diagnosis of cardioembolic stroke include sudden onset to maximal deficit (< 5 min), which is present in 47-74% of cases, and decreased level of consciousness at onset in 19-31% of cases [20,21]. In the study of Timsit et al. [22], altered consciousness was a predictive factor of cardioembolic infarction, with an odds ratio (OR) of 3.2 as compared with atherothrombotic infarction. Sudden onset of neurological deficit occurs in 79.7% of cases of cardioembolic stroke, compared with 38% of lacunar infarcts and 46% of thrombotic infarctions (P < 0.01).
In 4.7-12% of cases, cardioembolic infarctions show a rapid regression of symptoms (the spectacular shrinking deficit syndrome) [23][24][25][26]. The recognition of this syndrome is important for a clinical suspicion of the cardioembolic origin of the cerebral infarction [26]. This dramatic improvement of an initially severe neurological deficit may be due to distal migration of the embolus followed by recanalization of the occluded vessel [27][28][29].
Wernicke's aphasia or global aphasia without hemiparesis are other common secondary symptoms of cardioembolism [27,28]. In the posterior circulation, cardioembolism can produce Wallenberg's syndrome, cerebellar infarcts, top-of-the-basilar syndrome, multilevel infarcts, or posterior cerebral artery infarcts. Visual-field abnormalities, neglect, and aphasia are also more common in cardioembolic than in non-cardioembolic stroke.
A classic cardioembolic presentation includes onset of symptoms after a Valsalva-provoking activity (coughing, bending, etc.), suggesting paradoxical embolism facilitated by a transient rise in right atrial pressure, and the co-occurrence of cerebral and systemic emboli [29].
On the other hand, other clinical symptoms classically associated with cardioembolic infarction, such as headache, seizures at onset [23] and onset during activity are not specific for cardioembolic stroke [4,27]. In addition, some signs or syndromes, such as lacunar clinical presentations (e.g., pure motor hemiparesis or ataxic hemiparesis), a lacunar infarct and particularly, multiple lacunar infarcts, make cardioembolic origin unlikely [30]. Cardiac embolism is a very rare cause of lacunar infarction (2.6-5% of cases) [31,32].
Neuroimaging data that support cardioembolic stroke include simultaneous or sequential strokes in different arterial territories. Owing to their large size, cardiac emboli flow to the intracranial vessels in most cases and cause massive, superficial, single large striatocapsular or multiple infarcts in the middle cerebral artery territory. Therefore, cardioembolic infarctions predominate in the carotid and middle cerebral artery distribution territories [28,29,33]. On the computed tomography (CT) scan, bihemispheric infarcts, combined anterior and posterior circulation infarcts, or bilateral or multilevel posterior infarcts are suggestive of cardioembolism. MRI studies can increase the suspicion of cardioembolism by demonstrating lesions not apparent on CT scans [1].
Hemorrhagic transformation of an ischemic infarct and early recanalization of an occluded intracranial vessel are suggestive of a cardiac origin of the stroke [1][2][3]. Hemorrhagic transformation occurs in up to 71% of cardioembolic strokes (Fig. 2). As many as 95% of hemorrhagic infarcts are caused by cardioembolism. There are two types of hemorrhagic transformation: petechial or multifocal transformation, which is normally asymptomatic, and secondary hematoma, which produces mass effect and clinical deterioration [34]. Secondary hematomas are unusual and are found in 0.8% of cases in our stroke registry [13]. The traditional explanation for hemorrhagic transformation is that the infarct is caused by blockage of a large artery by the thrombus; this blockage then causes local vascular spasm. Release of this local spasm and fragmentation of the thrombus allow the thrombus to migrate distally, exposing ischemic tissues and damaged vessel walls and capillaries to reperfusion. Arterial dissection at the site of impact of the thrombus is an alternative explanation. Decreased alertness, total circulation infarcts, severe strokes (NIHSS >14), proximal middle cerebral artery occlusion, hypodensity in more than one third of the middle cerebral artery territory, and delayed recanalization (> 6 hours after stroke onset) together with absence of collateral flow predict hemorrhagic transformation in acute cardioembolic stroke [1,4].
SPECIFIC CARDIAC DISORDERS
A number of cardiac conditions have been proposed as potential sources of embolism. The risk of embolism is heterogeneous. The more common high risk cardioembolic conditions are atrial fibrillation, recent myocardial infarction, mechanical prosthetic valve, dilated myocardiopathy, and mitral rheumatic stenosis. Other major sources of cardioembolism include infective endocarditis, marantic endocarditis, and atrial myxoma. Minor sources of cardioembolism are patent foramen ovale, atrial septal aneurysm, atrial or ventricular septal defects, calcific aortic stenosis, and mitral annular calcification [5].
Atrial fibrillation is the most important cause of cardioembolic stroke [20,21]. Atrial fibrillation is the commonest sustained cardiac arrhythmia. Prevalence of atrial fibrillation increases with age, reaching a peak of 5% in people over 65 years of age, and both its incidence and prevalence are increasing. The disorder is associated with valvular heart disease, thyroid disorders, hypertension, and recent heavy drinking of alcohol. In Western populations, most causes of atrial fibrillation are unrelated to mitral valve disease. Instead, atrial fibrillation is now mainly secondary to ischemic or hypertensive heart disease. The attributable risk of stroke due to atrial fibrillation rises from 1.5% at the age of 50 to 24% at the age of 80. The incidence of stroke in people with non-valvular atrial fibrillation is estimated to be 2 to 7 times higher than in people without atrial fibrillation, and for those with valvular atrial fibrillation, the risk is 17 times higher than that in age-matched controls. Chronic and recurrent atrial fibrillation appear to carry a very similar stroke risk. Atrial fibrillation in the absence of organic heart disease or risk factors (lone atrial fibrillation) appears to carry a significantly lower risk, especially in younger patients (approximately 1.3% per year). Atrial fibrillation causes stroke because it leads to inadequate atrial contraction and to stasis that is most marked in the left atrial appendage. Stasis is associated with increased concentrations of fibrinogen, D-dimer, and von Willebrand factor, which are indicative of a prothrombotic state, which in turn predisposes to thrombus formation with a consequent increased rate of cerebral embolization [1]. In these patients, left ventricular dysfunction and left atrial size were independent echocardiographic predictors of later thromboembolism. Other factors associated with a particularly high embolic risk are spontaneous echo contrast, left atrial thrombus, or aortic plaque detected by transesophageal echocardiogram. Heart failure, hypertension, age > 75 years, and diabetes mellitus increase the risk of stroke in a more moderate but additive fashion [3].
The bradycardia-tachycardia (sick sinus) syndrome can be associated with cerebral embolic events. Approximately 2.5% of patients with acute myocardial infarction experience a stroke within 2 to 4 weeks of the infarction, and 8% of men and 11% of women will have an ischemic stroke within the next 6 years. Factors that enhance the risk of stroke include severe left ventricular dysfunction with low cardiac output, left ventricular aneurysm ( Fig. 3) or thrombus, and associated arrhythmias such as atrial fibrillation. Patients with an ejection fraction of less than 28% had a relative risk of stroke of 1.86 compared with patients with an ejection fraction greater than 35%. The incidence of early embolism is high, possibly up to 22% in the presence of a mural thrombus and is most likely when the thrombus is mobile or protrudes into the ventricle [6].
The annual rate of stroke in patients with congestive heart failure is 2%. The risk of stroke correlates with the severity of left ventricular dysfunction. Coexistent disease has a cumulative effect, and the combination of recent congestive heart failure and atrial fibrillation places the patient at particular high risk for cardioembolic stroke [2].
Rheumatic valvular heart disease (Fig. 4) and mechanical prosthetic valves are well-recognized risk factors for stroke even in the absence of documented atrial fibrillation. The two most commonly cited rheumatic valve abnormalities are mitral stenosis and calcific aortic stenosis [2]. Two types of endocarditis, infective and non-infective, can cause stroke. Non-infective endocarditis can complicate systemic cancer, lupus, and the anti-phospholipid syndrome. Infective endocarditis is complicated by stroke in about 10% of cases. Most strokes happen early (before or during the first 2 weeks of appropriate antimicrobial therapy). Emboli can be multiple, especially in the case of infection of prosthetic valves and in infections due to aggressive agents, such as Staphylococcus aureus. Mycotic aneurysms are an uncommon (1-5%) complication of infective endocarditis. They may enlarge and rupture, which is fatal in many cases (Fig. 5).
Myxomas account for more than half of primary cardiac tumors and thromboembolism is the most common presenting symptom in patients with myxomas. Other primary cardiac tumors include papillary fibroelastoma.
Patent foramen ovale and aortic arch atheroma are emerging embolic sources that are extensively described in other chapters.
Mitral annular calcification has been cited as a possible source of cerebral embolism with a relative risk of stroke of 2.1 in the Framingham Study independent of traditional risk factors for stroke [5]. In a recent study in patients with ischemic stroke of uncertain etiology, dense mitral annular calcification was an important marker of aortic arch atherosclerosis with high risk of embolism [35].
Spontaneous echo contrast is an independent echocardiographic risk factor for thrombus formation in the left atrium and its appendage and for cardiac thromboembolic events.
Cardiological substrate and pathophysiological mechanisms presumptively involved in cardioembolic stroke in the Sagrat Cor Hospital of Barcelona Stroke Registry [36] are shown in Table 2. Atrial dysrhythmia without structural cardiac disease was documented in 89 (22%) patients, with a mean (SD) age of 75 (4) years (range 63-90 years). All these patients had normal echocardiographic findings and 90% were asymptomatic. The cardiac condition associated with cardiogenic stroke was atrial fibrillation in 88 patients (chronic 67, paroxysmal 18, persistent 3) and atrial flutter in 1. A previous diagnosis of atrial dysrhythmia had been established in the outpatient setting in 51% of patients, but none of the patients received anticoagulation.
Structural cardiac disease with sustained sinus rhythm was diagnosed in 81 (20%) of patients. Left ventricular systolic dysfunction was documented in 59 patients (ischemic heart disease in 35 and dilated cardiomyopathy in 24) associated with intraventricular thrombosis in 13. Other less frequent cardiac disorders included mitral annular calcification, cardiac tumors, aortic prosthetic valve, endocarditis, atrial septal aneurysm with patent foramen ovale, rheumatic mitral valve disease, mitral valve prolapse, calcified aortic stenosis with embolism during catheterization, and moderate mitral valve regurgitation.
In the remaining 232 (58%) patients, structural cardiac disorders were associated with atrial fibrillation in 230 cases and atrial flutter in 2. Hypertensive left ventricular hypertrophy was documented in 120 cases followed by rheumatic mitral valve disease in 49 cases and left ventricular dysfunction in 32 cases (ischemic heart disease in 19 and dilated cardiomyopathy in 13). Other less frequent cardiac disorders complicated with atrial fibrillation included mitral valve prolapse, mitral prosthesis, hypertrophic cardiomyopathy, lipomatous hypertrophy of the atrial septum, severe mitral regurgitation, and atrial septal aneurysm with patent foramen ovale.
The frequency of the different cardiac disorders in the overall series of 402 patients with cardioembolic stroke is shown in Table 3. Atrial fibrillation was documented in 79.1% of patients (in association with structural cardiac disease in 72% of cases), followed by hypertensive left ventricular hypertrophy in 29.8% of patients, left ventricular dysfunction in 22.6%, rheumatic mitral valve disease in 12.4%, and mitral annular calcification in 9.9%. Mitral valve prolapse, atrial septal aneurysm with patent foramen ovale and degenerative heart valve disease were observed in only 1% of the patients. In the group of 118 patients with hypertensive left ventricular hypertrophy associated with atrial fibrillation, the anteroposterior diameter of the left atrium was significantly larger than in the group of 88 patients with lone atrial fibrillation (45 ± 3 mm vs. 41 ± 3 mm, P < 0.001). On the other hand, 80.6% of these patients were asymptomatic, 50.5% had other vascular risk factors (cigarette smoking, diabetes mellitus, hyperlipidemia) besides hypertensive disease, and although a previous diagnosis of atrial dysrhythmia had been established in the outpatient setting in 43.7% of patients, none of the patients received anticoagulation at the time of stroke onset.
MORTALITY OF CARDIOEMBOLIC INFARCTIONS
Cardioembolic infarctions are the subtype of ischemic infarcts with the highest in-hospital mortality during the acute phase of stroke [37][38][39]. In our experience and in agreement with the clinical series of Caplan et al. [34], the in-hospital mortality rate of cardioembolic infarction was 27.3% as compared with 0.8% for lacunar infarcts and 21.7% for atherothrombotic stroke (P < 0.01). Cardioembolic infarction is also associated with a lower rate of freedom from functional limitation at discharge from the hospital, which may be related to the larger lesion size in cardioembolic stroke [15,22].
In a recent study carried out by our group in 231 patients with cardioembolic infarction with an in-hospital mortality rate of 27.3%, causes of death were as follows: a) non-neurological in 54% (n = 34), including pneumonia in 9, heart disease in 7, pulmonary thromboembolism in 7, sepsis in 5, sudden death in 4, and other causes in 2; b) neurological in 39.5% (n = 25), including brain herniation in 17, recurrence of cerebral ischemia in 6, and cerebral hemorrhage in 2; and c) of unknown cause in 6.5% (n = 4).
Early recurrent embolisms (within the first 7 days of stroke onset) were observed in 9 patients (3.9%) (peripheral embolisms in the extremities in 4, cerebral in 5). Only one patient was receiving therapeutic anticoagulation.
Mortality in patients with early embolic recurrence was 77.7% (7 of 9 cases) as compared with 25% for the remaining patients (P < 0.001). In the 5 patients with recurrent cerebral embolisms, the mortality rate was 100%. Two of the four patients with peripheral embolism died (mortality rate 50%).
In the multivariate analysis, four clinical variables were significantly associated with in-hospital mortality: age, congestive heart failure, hemiparesis, and decreased level of consciousness. However, when early recurrent embolism was added to the logistic regression model, this variable was associated with the highest risk for death (OR = 33.5).
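To put the reported odds ratio in context, the sketch below converts it into an approximate predicted probability of in-hospital death, taking the 25% mortality observed in patients without early recurrence as the baseline. This is a rough single-variable illustration using standard odds arithmetic, not the authors' adjusted multivariate model, and the function names and the resulting figure are only indicative.

def odds(p):
    # Convert a probability to odds.
    return p / (1.0 - p)

def prob(o):
    # Convert odds back to a probability.
    return o / (1.0 + o)

baseline_mortality = 0.25      # in-hospital mortality without early recurrence (from the series above)
or_early_recurrence = 33.5     # odds ratio reported for early recurrent embolism

adjusted_odds = odds(baseline_mortality) * or_early_recurrence
print(round(prob(adjusted_odds), 2))   # about 0.92, i.e. a very high predicted risk of death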
Early and late embolic recurrences are not exceptional in cardioembolic infarction [38,[40][41][42]. Recurrences are more frequent during the first days of stroke [10]. In the study of Sacco et al. [43], in which recurrences within the first 30 days were assessed, mortality was also significantly higher in the group of recurrences (20%) than in the group without recurrences (7.4%); survivors after stroke recurrence also showed a longer hospital stay. In the study of Yasaka et al. [44], mortality was also significantly higher in patients with recurrent embolism (19.6%) as compared with the remaining patients (8.8%).
Taking into account that in our series only one patient with recurrent embolism was treated with therapeutic anticoagulation, we agree with Chamorro et al. [8] on the need to start early prophylactic anticoagulation with sodium heparin in patients with cardioembolic infarction, with strict control of the partial thromboplastin time ratio (between 1.5 and 2) in order to prevent iatrogenic bleeding due to excessive anticoagulation.
EMBOLIC RECURRENCE IN CARDIOEMBOLIC INFARCTION
The risk of early stroke recurrence in cerebral infarctions in general ranges between 1% and 10% according to the different series [38,[40][41][42]. Some studies have shown that recurrences within the first 3 months are more common in cardioembolic infarction than in atherothrombotic infarcts. The risk of early embolic recurrence in cardioembolic stroke varies between 1% and 22%. In the Cerebral Embolism Task Force, for example, it was estimated that around 12% of patients with cardioembolic infarctions would develop a second embolism within the first 2 weeks of the onset of symptoms [11]. In our experience, embolism recurrence during hospitalization occurred in 24 of 324 patients with cardioembolic stroke consecutively attended over a 10-year period (6.9% of cases) [45]. Embolic recurrence occurred within the first 7 days of neurological deficit in 12 patients (50%). The mean time of recurrence after stroke onset was 12 days. Recurrence of embolism within the first 30 days was observed in 5 of the 81 patients (6.1%) in the study of Yamanouchi et al. [46] in patients with cardioembolic infarction and non-valvular atrial fibrillation, in 6% of cerebral infarcts in the study of Sacco et al. [47], in 3.3% of patients from the Stroke Data Bank [43], and in 4.4% of patients included in the Lausanne Stroke Registry [48].
In our study, embolism recurrence was multiple in 3 cases (12.5%), which is consistent with data in the study of Yamanouchi et al. [46], in which 7 of 21 patients with cardioembolic infarctions had two or more stroke recurrences. The maximal risk of recurrence was in the immediate period after the cardioembolic stroke.
Mortality in patients with recurrent embolism was twofold higher as compared with the remaining patients (70.8% vs 24.4%) [45], in agreement with the study of Sacco et al. [47] (19% vs 8%) in cerebral infarctions in general.
It is important to identify factors associated with early embolic recurrence in cardioembolic infarction because patients in whom these risk factors are present constitute a subgroup with the highest risk, requiring early treatment and strict medical control. However, risk factors for stroke recurrence are less well known than risk factors for first-ever stroke. In our experience, alcohol abuse (OR = 21.8), hypertension with valvular heart disease and atrial fibrillation (OR = 4.3), nausea and vomiting (OR = 3.7), and previous cerebral infarct (OR = 3.2) were clinical predictors of cardioembolic stroke recurrence. In addition to these four variables, cardiac events (tachyarrhythmia, heart failure or acute myocardial infarction occurring as a medical complication during the patient's hospital stay) were selected in the logistic regression model based on clinical, neuroimaging, and outcome variables (OR = 4.25).
The association of hypertension with valvular heart disease and atrial fibrillation was a predictive variable of stroke recurrence, but none of these variables was statistically significant when they were independently analyzed. In another study, valvular heart disease associated with congestive heart failure was the only predictive factor of stroke recurrence [49]. Although the presence of a structural cardiac disorder is a well-known risk factor for systemic embolization [50,51], Lai et al. [52] also showed that patients with hypertension associated with non-valvular atrial fibrillation had a higher risk of embolic recurrence as compared to patients with hypertension alone or non-valvular atrial fibrillation alone.
Involvement of the cardiac center in the medulla oblongata may predispose to arrhythmias and cardiac arrest during the acute phase of stroke. Therefore, the presence of nausea and vomiting is usually associated with an infarction in the vertebrobasilar territory or with progressive compression of the brainstem due to an infarction in the carotid territory with transtentorial herniation, a clinical condition that can cause heart rhythm disturbances by concomitant involvement of the cardiac center and predispose to a potential cardioembolic recurrence [53][54][55][56].
In contrast to the data observed in our study, the presence of a previous cerebral infarction was not a predictor of recurrence in the study of Sacco et al. [47]. However, other authors consider that a previous cerebral infarction is one of the most powerful predictive factors for recurrent embolism [54][55][56][57][58]. In the study of van Latum et al. [59], a previous thromboembolism of any kind was also a significant predictor of stroke recurrence.
Alcohol abuse was an important predictor of recurrent embolism in our experience of cardioembolic infarction [45], which is similar to that observed in the study of Sacco et al. [47]. There is evidence of a strong relationship between stroke and alcohol: a) alcohol intoxication is a risk factor for cerebral infarction [60]; b) a higher frequency of alcohol abuse among stroke patients has been demonstrated [61][62][63]; c) other studies even claim that continued alcohol abuse is a true risk factor for stroke [64][65][66][67]. In Caucasian populations, a "J-shaped" relationship has been documented between daily alcohol consumption and the risk of cerebral infarction, with a protective effect of mild daily consumption and an increased risk at higher levels of consumption [61][62][63]. Although its effect on cardioembolic stroke is still unclear, there are several pathophysiological mechanisms by which alcohol can cause stroke [61,[68][69][70][71][72][73][74][75][76][77][78][79]. These include the following: a) favoring hypertension and increasing platelet aggregation, plasma osmolarity, hematocrit, and erythrocyte aggregation and deformability; b) a consequence of a dilated cardiomyopathy due to alcohol abuse; c) induction of cardiac arrhythmias (atrial fibrillation, ventricular extrasystoles, junctional tachycardia, paroxysmal supraventricular tachycardia, and ventricular tachycardia) in subjects who are habitual alcohol consumers, in sporadic alcohol users, and in those abstaining from alcohol [72]. Ethanol also increases adrenal release of catecholamines, which predisposes to arrhythmogenicity; in addition, acetaldehyde, a major alcohol metabolite, is also arrhythmogenic [73,64]. d) changes in the cerebral blood flow and autoregulation in relation to alcohol abuse have also been reported; e) liver disease secondary to alcohol abuse [68].
Our study therefore suggests that alcohol abuse is an important independent factor associated with embolic recurrence in cardioembolic stroke. Any of the mechanisms outlined above may predispose to a new embolism, although the presence of a non-ischemic cardiomyopathy associated with the possibility of cardiac arrhythmia is probably the most common potential mechanism.
A classification system based on independent risk factors for stroke and used in clinical practice for predicting stroke in patients with non-valvular atrial fibrillation is the CHADS2 index [80] (acronym for Congestive heart failure, Hypertension, Age, Diabetes mellitus and Stroke). The CHADS2 score is calculated by assigning 1 point each for the presence of congestive heart failure, hypertension, age 75 years or older, and diabetes mellitus, and by assigning 2 points for a history of stroke or transient ischemic attack. Patients with a CHADS2 score of 0 or 1 have a low annual risk of stroke (1%), a score of 2 identifies patients at moderate risk (annual risk of 2.5%), and patients with a score of 3 or greater are estimated to be at high risk of stroke (annual risk > 5%).
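As a concrete illustration of the scoring rule just described, the sketch below computes a CHADS2 score and maps it to the risk strata quoted above; the function and variable names are our own, and the point values and thresholds simply restate the text.

def chads2_score(chf, hypertension, age, diabetes, prior_stroke_or_tia):
    # chf, hypertension, diabetes, prior_stroke_or_tia are booleans; age is in years.
    score = 0
    score += 1 if chf else 0                   # congestive heart failure: 1 point
    score += 1 if hypertension else 0          # hypertension: 1 point
    score += 1 if age >= 75 else 0             # age 75 years or older: 1 point
    score += 1 if diabetes else 0              # diabetes mellitus: 1 point
    score += 2 if prior_stroke_or_tia else 0   # prior stroke or TIA: 2 points
    return score

def chads2_risk_category(score):
    # Map a CHADS2 score to the risk strata described in the text.
    if score <= 1:
        return "low (annual stroke risk about 1%)"
    if score == 2:
        return "moderate (annual stroke risk about 2.5%)"
    return "high (annual stroke risk > 5%)"

# Example: a 78-year-old hypertensive patient with a prior TIA scores 1 + 1 + 2 = 4.
s = chads2_score(chf=False, hypertension=True, age=78, diabetes=False, prior_stroke_or_tia=True)
print(s, chads2_risk_category(s))   # 4 high (annual stroke risk > 5%)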
Early embolism is the main independent risk factor for in-hospital mortality in patients with cardioembolic infarction [40]. Timing of initiation of anticoagulant treatment remains an area of uncertainty, since there is concern regarding exacerbating the risk of hemorrhage into regions of infarction ("hemorrhagic transformation") after ischemic stroke. Guidelines propose arbitrary deferral of anticoagulation for 2 weeks in patients hospitalized with stroke by extrapolation from acute trials with full-dose heparin, where reduced early recurrent ischemic stroke is balanced by increased hemorrhagic risk. In patients with transient ischemic attack or minor stroke and with exclusion of cerebral hemorrhage, oral anticoagulation can be initiated within 3-5 days. However, we agree with Chamorro et al. [8] that secondary prevention with anticoagulants should be started immediately if possible in patients at high risk for recurrent cardioembolic stroke who have no contraindications, such as falls, poor compliance, uncontrolled epilepsy, or gastrointestinal bleeding. Thus, contrary to the recommendation to delay anticoagulation in patients with extensive cardioembolic infarction or marked neurological deficit, immediate anticoagulation may be indicated in this subpopulation of cardioembolic infarction with maximal risk for early cardioembolic recurrence. According to Yasaka et al. [44], early anticoagulation with intravenous sodium heparin reduces the frequency of recurrent events and would reduce mortality, provided that it is initiated as soon as possible and that activated partial thromboplastin time values are maintained below twice the control values. Oral anticoagulation with warfarin would be indicated later.
EARLY DIFFERENTIAL DIAGNOSIS BETWEEN CARDIOEMBOLIC AND ATHEROTHROMBOTIC INFARCTS
Clinical data exclusive to cardioembolic or atherothrombotic infarctions are lacking. However, establishing an early diagnosis of cardioembolic infarction has therapeutic relevance. In a study by our group [81], it was shown that atrial fibrillation and sudden onset of neurological symptoms were independent clinical factors significantly associated with cardioembolic stroke, whereas hypertension, chronic obstructive pulmonary disease, diabetes, dyslipidemia, and age were clinical variables independently associated with atherothrombotic infarction.
On the other hand, clinical data traditionally related to cardioembolic infarction, such as seizures or headache, were not predictors of cardioembolic stroke, which is consistent with results of the studies of Ramirez-Lassepas et al. [82], Kittner et al. [83,84], and Caplan et al. [20].
ATRIAL FIBRILLATION IN CARDIOEMBOLIC AND ATHEROTHROMBOTIC INFARCTIONS
Atrial fibrillation is the main cardiac disorder in the different series of cardioembolic infarctions from industrialized countries reported in the literature [27,37,85]. However, atrial fibrillation can also be observed in atherothrombotic infarcts, not as an embolic etiology but as a marker of other conditions that lead to ischemic stroke, such as atherosclerosis. It may therefore be considered an epiphenomenon or a clinical manifestation of atherosclerotic disease [50]. In this respect, not all cerebral infarctions in patients with atrial fibrillation are of cardioembolic origin [21]. In our study, atrial fibrillation was diagnosed in 16.5% of patients with thrombotic occlusion or arterial stenosis greater than 70% presumably responsible for the cerebral infarction [86]. In these cases, clinical or echocardiographic findings related to cardioembolism, such as recent congestive heart failure, increased left atrial size, or left ventricular dysfunction, were absent [87,88]. Bogousslavsky et al. [21] showed that in 76% of patients with cerebral infarcts in the carotid vascular territory and atrial fibrillation, the presumed pathophysiological mechanism of stroke was cardioembolic, since significant arterial vascular disease could not be documented. However, in 11% of the cases, the presumed mechanism was atherosclerosis because severe arterial stenosis or occlusion correlated with the clinical features, and in the remaining 13%, the cerebral infarct could be explained by occlusion of small perforating arterial vessels in association with hypertension.
Accordingly, in a patient with cerebral infarction and atrial fibrillation, it is important to make an early and precise diagnosis of the subtype of cerebral infarct, although the differential diagnosis between cardioembolic and atherothrombotic stroke with atrial fibrillation may be difficult to establish at the onset of the neurological deficit. In recent classifications of stroke subtypes, this distinction is not made, and these patients are included in the subgroup of cerebral infarctions of undetermined cause due to the simultaneous presence of two potential etiologies [89]. However, it should be noted that, using the results of appropriate neurological and cardiological studies carried out later during hospitalization, it is possible in most cases to establish the correct classification of the stroke into a definite nosological entity [20].
Cardioembolic Infarctions with and without Atrial Fibrillation
When patients with cardioembolic infarction with and without atrial fibrillation were compared, female sex, history of heart failure, sudden onset of neurological deficit, altered consciousness, motor, sensory and visual deficits, and parietal topography of the ischemic lesion were more frequently recorded in cardioembolic stroke patients with atrial fibrillation. Coronary heart disease, smoking, and topography of the infarct in the internal capsule were more frequent in cardioembolic stroke patients without atrial fibrillation. The in-hospital mortality rate was 31.6% in patients with atrial fibrillation and 14.8% in those without atrial fibrillation (P < 0.01) [86].
Atherothrombotic Infarctions with and without Atrial Fibrillation
In the comparison of patients with atherothrombotic infarction with and without atrial fibrillation, those with atrial fibrillation were older, with a predominance of females, and a higher frequency of coronary and valvular heart disease, sudden onset of neurological deficit, sensory and visual deficits, speech disturbance, parietal, temporal, and occipital topography, and infarction in the vascular territory of the middle cerebral artery. Cardiac events were also more frequent. In atherothrombotic infarcts without atrial fibrillation, smoking, involvement of the cranial nerves and vertebral vascular topography were more common. Absence of functional dysfunction on hospital discharge was also more frequent. The in-hospital mortality rate was 29.3% in patients with atrial fibrillation and 18.8% in those without atrial fibrillation (P < 0.04) [86].
It should be noted that atrial fibrillation had a negative effect on outcome, both in cardioembolic and atherothrombotic infarction. It has been hypothesized that the worse outcome associated with atrial fibrillation may be explained by a higher prevalence of heart failure and ischemic heart disease. This hypothesis coincides in part with our results, given that a higher occurrence of heart failure in patients with cardioembolic stroke and a higher frequency of ischemic heart disease in patients with atherothrombotic stroke were observed. This may contribute to a decrease in cerebral blood flow, as cerebral autoregulatory mechanisms in the ischemic area are impaired [90]. Other authors suggest that chronic atrial fibrillation may cause a significant reduction of regional blood flow [91], which may normalize when sinus rhythm is attained after successful cardioversion [92]. Other studies indicate that an increase in mortality may be explained by the more advanced age of the patients, a higher volume of the lesion, or a higher initial intensity of focal neurological deficit in patients with atrial fibrillation [93,94]. In summary, ischemic cerebrovascular disease, whether cardioembolic or atherothrombotic, is more severe in patients with atrial fibrillation than in those with normal sinus rhythm.
|
2014-10-01T00:00:00.000Z
|
2010-07-31T00:00:00.000
|
{
"year": 2010,
"sha1": "0c75f1c369c8c3bfe3ee4e37bc16286388ae7dee",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc2994107?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "0c75f1c369c8c3bfe3ee4e37bc16286388ae7dee",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
255595930
|
pes2o/s2orc
|
v3-fos-license
|
Hölder Regularity of the $\bar\partial$-Equation on the Polydisc
In this note, we show that the canonical solution operator to the $\bar\partial$-equation in the polydisc preserves Hölder regularity. It is a well-known fact that such solution operators do not improve Hölder regularity, and as such, our solution operator is optimal in this regard.
Introduction
It is a classical problem in complex analysis to describe solutions to the $\bar\partial$-equation with estimates in prescribed normed function spaces. The most general result on problems of this type was given by Sergeev and Henkin in [SH], giving uniform estimates for the $\bar\partial$-equation in any pseudoconvex polyhedron. Recently, the Hölder spaces $C^{k+\alpha}$ on product domains in $\mathbb{C}^n$ have been given some attention, and some results have been published on this matter. In [PZ1], [PZ2], a solution operator which loses arbitrarily small amounts of Hölder regularity was found, while in [Zhang] a solution operator which preserves Hölder regularity was found in the case $n = 2$. In the papers mentioned, the solution operators were based on Nijenhuis and Woolf's formula in [NW]. This note seeks to improve on those results and show that optimal Hölder regularity can be achieved in the polydisc $\mathbb{D}^n \subset \mathbb{C}^n$. Indeed, the main theorem of the paper is as follows.
Theorem 1. For any integer $k \geq 0$ and $0 < \alpha < 1$, let $Z^{k+\alpha}_{(0,1)}(\mathbb{D}^n) \subseteq C^{k+\alpha}_{(0,1)}(\mathbb{D}^n)$ denote the subspace of $\bar\partial$-closed, Hölder $k+\alpha$, $(0,1)$-forms on the polydisc. Then for all $g \in Z^{k+\alpha}_{(0,1)}(\mathbb{D}^n)$, the equation $\bar\partial u = g$ admits a bounded linear solution operator $Z^{k+\alpha}_{(0,1)}(\mathbb{D}^n) \to C^{k+\alpha}(\mathbb{D}^n)$.

For simplicity, we have restricted our attention to the case of the polydisc. However, Theorem 1 readily extends to the more general case of products of planar domains with smooth boundary.
Preliminary Results
The proof of the main theorem rests on an analysis of Henkin's weighted formula for solutions to the $\bar\partial$-equation on the polydisc, which was announced in his survey paper [Henkin] of 1985. The simplest case of this formula, obtained by setting all weights equal to 0, has the following form.
Theorem 2. Let $Z_{(0,1)}(\mathbb{D}^n) \subseteq C_{(0,1)}(\mathbb{D}^n)$ denote the space of (uniformly) continuous, $\bar\partial$-closed $(0,1)$-forms on the polydisc. Fix $g \in Z_{(0,1)}(\mathbb{D}^n)$. Then Henkin's integral formula defines a distributional solution to the equation $\bar\partial u = g$. Here, $c(n,r)$ is an integer depending only on the constants $n, r$, while the sum ranges over all ordered $r$-tuples $J = (j_1, \dots, j_r)$ such that $\{j_1, \dots, j_r\}$ is a size-$r$ subset of $\{1, \dots, n\}$. The complement of $J$ in $\{1, \dots, n\}$ is denoted by $\{k_1, \dots, k_{n-r}\}$, while $\gamma_J(z)$ denotes the corresponding region of integration. The kernel of integration is an $(n-r)$-form.

A version of this formula with weights equal to 1 was used in [HP1] to solve an interpolation problem in the polydisc, while a proof of the weighted formula for the more general class of analytic polyhedra appears in [HP2]. According to [HP2], Henkin's formula gives uniform estimates for the $\bar\partial$-equation in the sup-norm. In addition, Henkin's formula also yields a bounded solution operator that preserves Hölder regularity for $\alpha \in (0,1)$. Stated precisely, we have the following theorem.
Theorem 3. Let $0 < \alpha < 1$ and let $g \in Z^{\alpha}_{(0,1)}(\mathbb{D}^n)$ be a $\bar\partial$-closed, Hölder-$\alpha$, $(0,1)$-form in the distributional sense. Then Henkin's solution operator to the $\bar\partial$-equation restricts to a bounded linear operator from $Z^{\alpha}_{(0,1)}(\mathbb{D}^n)$ to $C^{\alpha}(\mathbb{D}^n)$. In fact, the solutions produced by Henkin's formula agree with the solutions produced by Nijenhuis and Woolf in [NW], and studied by Pan and Zhang in [PZ2] and [Zhang].
For any $g \in Z_{(0,1)}(\mathbb{D}^n)$, $g = \sum_i g_i \, d\bar z_i$, we define the operator $T$ by Nijenhuis and Woolf's formula. Remark 1. Direct estimates of the operator norm of $T$ using Nijenhuis and Woolf's formula suggest that it loses (arbitrarily small) amounts of Hölder regularity. This is due to the fact that the Cauchy integral operators $S_j$ lose Hölder regularity in parameters, as observed in [Tumanov] and [PZ2]. It turns out, however, that $T$ does not lose any Hölder regularity.
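For orientation, the Cauchy integral operators $S_j$ referred to in Remark 1 act one variable at a time; the classical one-variable model on the unit disc $\mathbb{D}$, which solves $\partial u / \partial \bar z = g$ for continuous $g$, is the solid Cauchy transform (a standard sketch, not quoted from the paper):

$$ (Sg)(z) = \frac{1}{2\pi i} \int_{\mathbb{D}} \frac{g(\zeta)}{\zeta - z}\, d\zeta \wedge d\bar\zeta, \qquad \frac{\partial}{\partial \bar z}(Sg) = g \ \text{ on } \mathbb{D}. $$

In the Nijenhuis–Woolf construction such operators are applied in each variable separately, with the remaining variables treated as parameters, which is exactly where the parameter-dependence of Hölder norms discussed in Remark 1 enters.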
In order to show that the solutions to the $\bar\partial$-equation produced by the operators $H$ and $T$ agree, we require the following lemma.
Lemma 1. Given $u \in C(\mathbb{D}^n)$ and $g \in C_{(0,1)}(\mathbb{D}^n)$ such that $\bar\partial u = g$, there exists a linear operator $\Phi$, depending only on $n$, for which the asserted identity holds.
Proof. We first introduce some notation to simplify the exposition. Let $d^n z = dz_1 \wedge \dots \wedge dz_n$. For any subset $\{j_1, \dots, j_s\} \subseteq \{1, \dots, n\}$, define $D_{j_1, \dots, j_s}$ as in (1). On the set $\{1, \dots, n\}$ we work modulo $n$, so that $n + r = r$. In all that follows we let $C$ be an arbitrary constant depending only on $n$, and $\Psi$ be an arbitrary linear operator depending only on $n$. Both the symbols $C$ and $\Psi$ will act as local variables, and the same symbols will be used for different constants and operators to simplify the exposition. By the Bochner–Martinelli formula [Henkin], given uniformly continuous $u, g$ on $\mathbb{D}^n$ with $\bar\partial u = g$ in the distributional sense, we obtain an integral representation of $u$. For fixed $z$, our integrand is a uniformly continuous form in $\zeta$ with uniformly continuous differential. Thus, by Stokes' theorem, we obtain a boundary integral; here, the domain of integration is $D_{j, j+1}$, since the pullback of the form in the integral vanishes on the other components of $\partial D_j$. By repeating this procedure $n - 2$ more times, each time observing the analogous vanishing, we obtain an integral over $(\partial D)^n_j$. Here, $(\partial D)^n_j$ is the set $(\partial D)^n$ with orientation induced by successive applications of the boundary operator beginning with the $j$-th copy of $D$ in the product. It is clear from inspection that $(\partial D)^n_j = (-1)^{(j-1)(n-j+1)} (\partial D)^n_1$. Therefore, summing over $j$ and collecting terms, we see that the desired identity holds with some constant $C$. Since this equation holds for any $u$, by considering for example $u = 1$, we see that $C = 1$. By taking $\Phi = \Psi$, this concludes the proof.
Lemma 3.
On any open sector $D_s$, $P[h]$ is Hölder-$\alpha$ uniformly in the parameters $a, b$, with coefficient proportional to $\|h\|_{C^{\alpha}(\mathbb{D}^{q+2})}$.
Proof. We first show that $P[h]$ is $C^{\alpha}$ uniformly in $b$. Indeed, let $\epsilon \in \mathbb{C}$.
$\cdots \, d^2\zeta_1 \wedge \dots \wedge d\zeta_q =: I_1 + I_2$. We denote these two terms $I_1$ and $I_2$ respectively and control them separately. Repeating the same analysis as we did for $b$, we see that $|I_1| \lesssim \|h\|_{C^{\alpha}(\mathbb{D}^{q+2})} |\epsilon|^{\alpha}$. Thus we need only control $I_2$.
Here, the last inequality holds as the function $(\zeta, a, b) \mapsto h(|a|\xi, a, b)\, i\xi_1$ is $C^{\alpha}$, so that its Cauchy torus integral is bounded, depending only on $\|h\|_{C^{\alpha}(\mathbb{D}^{q+2})}$. Therefore $P$ preserves Hölder-$\alpha$ regularity in the parameters $a, b$.
These lemmas assemble into the following proposition.
Proof. By the preceding two lemmas, $P[h]$ is $C^{\alpha}$ in $D_s$ uniformly in each variable $z_t$ and in the parameters $a, b$,
|
2023-01-12T06:42:44.159Z
|
2023-01-11T00:00:00.000
|
{
"year": 2023,
"sha1": "c7be77194d50890be7c1208bf0a2f0cde8d9c6d4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c7be77194d50890be7c1208bf0a2f0cde8d9c6d4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
271331823
|
pes2o/s2orc
|
v3-fos-license
|
A First-in-Human Phase I Clinical Study with MVX-ONCO-1, a Personalized Active Immunotherapy, in Patients with Advanced Solid Tumors
Abstract Over two decades, most cancer vaccines failed clinical development. Key factors may be the lack of efficient priming with tumor-specific antigens and strong immunostimulatory signals. MVX-ONCO-1, a personalized cell-based cancer immunotherapy, addresses these critical steps utilizing clinical-grade material to replicate a successful combination seen in experimental models: inactivated patient’s own tumor cells, providing the widest cancer-specific antigen repertoire and a standardized, sustained, local delivery over days of a potent adjuvant achieved by encapsulated cell technology. We conducted an open-label, single-arm, first-in-human phase I study with MVX-ONCO-1 in patients with advanced refractory solid cancer. MVX-ONCO-1 comprises irradiated autologous tumor cells coimplanted with two macrocapsules containing genetically engineered cells producing granulocyte–macrophage colony-stimulating factor. Patients received six immunizations over 9 weeks without maintenance therapy. Primary objectives were safety, tolerability, and feasibility, whereas secondary objectives focused on efficacy and immune monitoring. Data from 34 patients demonstrated safety and feasibility with minor issues. Adverse events included one serious adverse event possibly related to investigational medicinal product and two moderate-related adverse events. More than 50% of the patients with advanced and mainly nonimmunogenic tumors showed clinical benefits, including partial responses, stable diseases, and prolonged survival. In recurrent/metastatic head and neck squamous cell carcinoma, one patient achieved a partial response, whereas another survived for more than 7 years without anticancer therapy for over 5 years. MVX-ONCO-1 is safe, well tolerated, and beneficial across several tumor types. Ongoing phase IIa trials target patients with advanced recurrent/metastatic head and neck squamous cell carcinoma after initial systemic therapy. Significance: This first-in-human phase I study introduces a groundbreaking approach to personalized cancer immunotherapy, addressing limitations of traditional strategies. By combining autologous irradiated tumor cells as a source of patient-specific antigens and utilizing encapsulated cell technology for localized, sustained delivery of granulocyte–macrophage colony-stimulating factor as an adjuvant, the study shows a very good safety and feasibility profile. This innovative approach holds the promise of addressing tumor heterogeneity by taking advantage of each patient's antigenic repertoire.
Introduction
Cancer vaccines have come a long way to be finally recognized as a promising modality to treat cancer. The efforts of the past decades were rewarded with modest success, and it was only the great advances in developing better methods and a deep understanding of the immune system and its interaction with cancer that supported the discovery and development of promising vaccine candidates (1, 2). Numerous preclinical and clinical studies have shed light on the two important elements of effective vaccination against cancer: (i) the target (neo)antigen and (ii) the adjuvant to enhance antigenicity and stimulate an immune response (3). A broad spectrum of novel antigen-presenting strategies is under investigation, with a focus on tumor-associated antigens or tumor-specific antigens. Vaccine antigen strategies have lately focused on tumor-specific antigens, often called neoantigens, because they are deemed to trigger the initiation of a specific T-cell response.
However, the downside of this approach is the differential expression of neoantigens in tumor cells. Neoantigens may be expressed as high-affinity antigens in some cells and may be entirely lost in others (4, 5). Moreover, tumor vaccines targeting one or several specific tumor antigens cannot include all tumor-cell information relevant to immunity, such as the whole antigen repertoire required for immunity pattern recognition (6). Recently, a combination of an mRNA-based neoepitope vaccine plus atezolizumab plus chemotherapy produced some interesting results in a phase I trial in the adjuvant setting in resected pancreatic adenocarcinoma (7). In the current study, we used a whole-tumor-cell preparation, inactivated by irradiation, aiming at presenting a very broad antigenic repertoire including any cancer-specific targets.
However, unmodified tumor-cell preparations usually trigger only minor immune response (8), clearly illustrating the need for potent adjuvants.Many immunostimulatory cytokines have been evaluated with tumor-cell vaccines, and granulocyte-macrophage colony-stimulating factor (GM-CSF) emerged as one of the most potent adjuvants in generating antitumor immunity (9,10).
One technology holding promise is the GM-CSF-modified vaccines known as GVAX that have been extensively studied in humans (11, 12). Despite the excitement engendered by experimental animal models and early-phase human trials, GVAX has not lived up to its promise in inducing clinically meaningful outcomes in patients with cancer. Lack of prolonged, sustained, local delivery of GM-CSF by unprotected allogeneic cells is a likely explanation.
Another cause may be related to the dual and sometimes opposing effects of GM-CSF in inducing the recruitment of regulatory T cells and myeloid-derived suppressor cells (13). Multiple studies have now demonstrated that GM-CSF can trigger a strong immunogenic response or instead lead to tolerogenicity when delivered at high dose and/or in a systemic way. To optimize the potent adjuvant effect of GM-CSF, we have designed and successfully tested encapsulated cell technology as a way to secure standardized, stable, and prolonged local release of GM-CSF at the immunization site. Encapsulated cell technology is an effective approach for the continuous and local delivery of therapeutic proteins released by genetically modified allogeneic cells.
Here, we report data from the first-in-human, phase I study of MVX-ONCO-1, a patient-specific cancer immunotherapy combining irradiated autologous tumor cells and encapsulated allogeneic cells producing GM-CSF. This open-label, single-arm study was designed to assess the safety, tolerability, and signals of efficacy of MVX-ONCO-1 administered subcutaneously for the treatment of patients with advanced cancers progressing after standard therapies.
Study design and participants
This clinical trial is an open-label, single-center, phase I study in patients with advanced metastatic tumors. Thirty-four patients were treated at Geneva University Hospitals between May 5, 2014, and November 24, 2021.
Patients' characteristics are listed in Table 1. Eligible patients were ≥18 years old with advanced metastatic solid cancer in progression in which all standard treatments were exhausted or not feasible, with an estimated life expectancy of at least 4 months, Eastern Cooperative Oncology Group performance status grade 0 to 2, and no major impairment of liver, renal, and hematologic functions. Eligible patients also had to present with a primary tumor and/or metastasis amenable to partial/total surgery or tap, with a subsequent cell harvesting estimate of >27 × 10^6 cells. Patients were excluded if they had participated in any other investigational study or received an experimental therapeutic procedure or chemotherapy treatment within 4 weeks of screening, if they were suffering from a systemic disease not controlled by usual medication, if they presented with untreated brain metastasis, if they were on chronic immunosuppressive treatment or therapeutic anticoagulation with coumarin or continuous intravenous heparin, if they had tested positive for human immunodeficiency virus 1 and 2, human T-cell leukemia-lymphoma virus 1, hepatitis B surface antigen, or hepatitis C antibody, if they were females of child-bearing potential who were pregnant or lactating or who were not using adequate contraception, or if they presented with a known allergy to reagents in the study product such as penicillin or streptomycin. All prospective participants received written and verbal information about the study at a prior interview and signed informed consent prior to any study-specific procedure.
Procedure
The entire manufacturing process adhered to Good Manufacturing Practices within a Swissmedic-certified cell therapy facility. Two biocompatible polyethylsulfone macrocapsules, each containing MVX-1 cells (8 × 10^5 cells per macrocapsule), are placed underneath the skin away from any tumor deposit. Macrocapsules are composed of materials broadly used in medicine and well described in the literature (14, 15). MVX-1 is the MaxiVAX-1 cell line certified for human use in clinical trials. These cells are K562 cells genetically modified to secrete human GM-CSF. Macrocapsules are loaded with MVX-1 cells and then sealed with UV-sensitive biocompatible glue for subcutaneous implantation. Similar K562 cells engineered to produce GM-CSF have already been used in several clinical studies (16, 17). The macrocapsule can be maintained in culture under controlled conditions (5% CO2 and 37 °C) for 1 month, ensuring stable GM-CSF secretion.
Endpoints and assessments
The primary objective of this study was to assess the feasibility of the subcutaneous implantation of both macrocapsules and tumor-cell suspension, as well as to evaluate the safety and tolerability of the treatment. Collecting signals of efficacy and immune education were secondary objectives of this phase I trial.
Safety was evaluated by clinical assessments, vital signs, local and systemic tolerance, laboratory tests, and electrocardiograms, conducted from baseline until week 18. Patients could be discontinued from study treatment for unacceptable toxicity, pregnancy, or patient decision. Adverse events (AE) were graded using Common Terminology Criteria for Adverse Events v.4.0 and reported using the Medical Dictionary for Regulatory Activities v.25.0 until week 18, whereas related AEs and serious adverse events (SAE) were recorded until the end of participation in the study. Patients were then followed up for survival status and SAEs until death or for 5 years, whichever came first.
Efficacy was assessed by following up the patient's survival at 6, 12, and 18 months, and by disease status using RECIST 1.1 at baseline and then at weeks 6, 12, and 18, with a cutoff date of April 16, 2022. Additional information on tumor status beyond week 18 was obtained for specific subjects after signed agreement.
DTH
DTH tests were performed with intradermal injections of 1 × 10^6 irradiated autologous tumor cells in healthy skin before, during, and after treatment (Fig. 1). Erythema, induration, and ulceration at the sites of injection were measured to determine positivity of the test, and a punch biopsy of the injection site was collected to analyze recruited immune cells.
Ex vivo IFNγ enzyme-linked immunospot
Peripheral blood mononuclear cells (PBMC) were harvested and frozen before, during, and after treatment. For the ex vivo IFNγ enzyme-linked immunospot (ELISpot) experiment, the cells were thawed in RPMI 1640 medium (Thermo Fisher Scientific, 72400021) containing 10% heat-inactivated FBS (Thermo Fisher Scientific, 10101-145) and 1% penicillin-streptomycin (Thermo Fisher Scientific, 15140122) and rested overnight under a controlled atmosphere (5% CO2 and 37 °C). Subsequently, the ELISpot assay was performed with PBMCs coincubated with freshly thawed autologous irradiated tumor cells at a ratio of 1:1 (10^5 cells of each) or with purified Brachyury protein (1 µg/mL; Acris) per well for 18 to 24 hours using precoated 96-well plates (Mabtech). PBMCs alone and tumor cells alone were used as negative controls. All conditions were run in triplicate and cultured in serum-free X-VIVO 15 medium (Lonza, BE02-060F). The plates were washed according to the manufacturer's instructions and counted using the iSpot Robot ELISpot reader (Autoimmun Diagnostika GmbH).
The final number of spot-forming units was calculated after background subtraction (negative controls).
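As an illustration of the spot-forming-unit (SFU) calculation described above (triplicate wells, background subtraction of the negative controls), here is a minimal Python sketch; the counts and variable names are hypothetical and not taken from the study data.

    from statistics import mean

    def sfu_background_subtracted(stimulated_counts, control_counts):
        """Mean spot count of the triplicate stimulated wells minus the mean
        of the negative-control wells, floored at zero (no negative SFU)."""
        return max(mean(stimulated_counts) - mean(control_counts), 0.0)

    # Hypothetical triplicate counts for one patient at one time point
    pbmc_plus_tumor = [48, 52, 45]   # PBMCs co-incubated with irradiated tumor cells
    pbmc_alone = [6, 4, 5]           # negative control: PBMCs alone
    tumor_alone = [2, 3, 1]          # negative control: tumor cells alone

    print(sfu_background_subtracted(pbmc_plus_tumor, pbmc_alone + tumor_alone))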
Statistical methods
The feasibility analysis set included all patients who had enough tumor cells harvested to prepare the investigational medicinal product.
Feasibility
Enrollment for the study was completed by December 2021. A total of 51 patients were screened initially (Fig. 2). Of these 51 screened patients, 16 were excluded from the study because of clinical and surgical screen failures. These exclusions could be related to various factors, such as medical conditions or surgical complication risk, which made these patients ineligible for participation in the study. As a result, the study included 35 patients who fulfilled the selection criteria, signed the consent forms, and had sufficient tumor cells harvested to prepare the investigational medicinal product. In terms of feasibility, only 1 of the 35 patients (2.9%) was not treated. This particular patient could not receive the treatment because of an investigational medicinal product defect. Specifically, the suspension of irradiated autologous tumor cells, which is part of the investigational medicinal product, was found to be contaminated. As a precautionary measure, this patient was not administered the treatment.
With regard to the manufacturing of macrocapsules under Good Manufacturing Practices guidelines, no feasibility issues were encountered.
According to the protocol, a 2-week safety window applies between autologous tumor cell harvesting and the first vaccination. This window allows checking for any out-of-specification results. In this trial, the time from harvesting tumor cells to first treatment was less than 3 weeks. All manufactured batches met our release criteria, which include a minimum secretion per capsule of 20 ng GM-CSF per day, measured 6 to 7 days after loading of the MVX-1 cells into macrocapsules, without any bacteriologic contamination. On the day of treatment, the time required for dosing patients, from the thawing step of the irradiated autologous tumor cells to preparing the macrocapsules and implanting them in the patient, was consistently within 4 hours (±2 hours) for all patients. Following preparation, the expiration timeline for the product was established to be 12 hours under controlled temperature (4 °C for the irradiated autologous tumor cells and 37 °C for the macrocapsules).
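A minimal sketch of how the release criteria described above could be checked programmatically (threshold of 20 ng GM-CSF per capsule per day, measured 6 to 7 days after loading, and no bacteriologic contamination); the function and field names are illustrative and not part of the study protocol.

    def capsule_meets_release_criteria(gmcsf_ng_per_day, days_after_loading, sterile):
        """True if a loaded macrocapsule passes the release criteria described
        in the text: >= 20 ng GM-CSF/day measured at day 6-7, no contamination."""
        measured_in_window = 6 <= days_after_loading <= 7
        return measured_in_window and gmcsf_ng_per_day >= 20.0 and sterile

    # Hypothetical QC records for one batch of two capsules
    batch = [
        {"gmcsf": 27.5, "day": 6, "sterile": True},
        {"gmcsf": 31.0, "day": 7, "sterile": True},
    ]
    print(all(capsule_meets_release_criteria(c["gmcsf"], c["day"], c["sterile"]) for c in batch))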
A total of 34 treated patients were included in the study. The demographic and baseline characteristics of these 34 patients are depicted in Table 1.
Among the treated patients, the majority (91.2%) had tumors not considered prone to respond to immunotherapy (cold tumors), whereas only three patients, two head and neck squamous cell carcinoma (HNSCC) cases and one melanoma case, had potentially immune-responsive cancers (hot tumors). This indicates that the cold tumor type was the most prevalent among the patients included in this study. Moreover, 82.3% of patients had received two or more previous lines of therapy: more than half of the patients (52.9%) had received between two and four previous lines of therapy, and 29.4% of patients had received more than five previous lines of therapy, reflecting the advanced stages and the aggressiveness of their diseases.
Safety
All 34 patients received MVX-ONCO-1, with each patient receiving at least one dose of the treatment. The safety assessment was conducted on the entire group of 34 patients. This means that both the safety profile and potential AEs of the treatment were evaluated in all patients who received MVX-ONCO-1. Of the 34 treated patients, 29 (85.3%) completed the full six administrations of MVX-ONCO-1, whereas 2 patients did not have a week 6 post-baseline efficacy measurement, and 3 patients did not complete the treatment because of death or disease progression (Fig. 2).
AEs were observed in all 34 patients participating in the study, primarily attributed to disease progression and associated symptoms. However, the most commonly reported AE (32.4%) was implant site hematoma, which was not life-threatening and had no impact on the treatment regimen.
Fatigue was the second most frequently observed AE (29.4%).
In four cases (1%), the macrocapsule had a broken suture string, requiring a small incision to remove the macrocapsule. Furthermore, in one case (0.3%), a macrocapsule was damaged during removal. Two additional cases (0.5%) were not further described by the physician.
Efficacy
The efficacy analysis was performed on the intent-to-treat (ITT) population, which consisted of 32 patients. The ITT analysis includes all patients who were initially assigned to receive the treatment, regardless of whether they completed the treatment or withdrew from the study prematurely, and for whom a week 6 post-baseline efficacy measurement was available (Fig. 2).
Antitumor activity
The efficacy analysis of tumor response demonstrated the following results (Fig. 3): Two patients of 32 [one recurrent/metastatic HNSCC (R/M HNSCC), "MVX-01-02", and one chordoma, "MVX-01-21"] experienced a PR after 12 weeks of MVX-ONCO-1 treatment (Fig. 3A) and then maintained the PR at week 18, indicating a reduction of at least 30% in tumor size compared with baseline (Fig. 3B). The disease was stable in 18 patients (56.3%) at week 6, 10 patients (31.3%) at week 12, and 6 patients (18.8%) at week 18, indicating that their tumor did not show significant growth or shrinkage at those time points (Table 3). Disease progression was observed in 14 patients (43.8%) at both weeks 6 and 12 and in 8 patients (25.0%) at week 18, indicating an increase in tumor size and/or new metastatic lesions (Table 3). The best overall response, taking all time points into account, was a
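For readers less familiar with RECIST 1.1, the response categories reported above reduce to simple thresholds on the change in the sum of target-lesion diameters (PR: at least a 30% decrease from baseline; PD: at least a 20% increase from the nadir with a 5 mm absolute increase, or new lesions; SD otherwise). The sketch below encodes those thresholds only and is a simplification, not the full RECIST evaluation used in the study.

    def recist_category(baseline_mm, nadir_mm, current_mm, new_lesions=False):
        """Simplified RECIST 1.1 classification based on the sum of diameters."""
        if current_mm == 0 and not new_lesions:
            return "CR"  # complete response: disappearance of target lesions
        progressed = new_lesions or (
            (current_mm - nadir_mm) / nadir_mm >= 0.20 and current_mm - nadir_mm >= 5
        )
        if progressed:
            return "PD"  # progressive disease
        if (current_mm - baseline_mm) / baseline_mm <= -0.30:
            return "PR"  # partial response
        return "SD"      # stable disease

    print(recist_category(baseline_mm=80, nadir_mm=80, current_mm=52))  # "PR" (-35%)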
Survival
The study observed signs of prolonged survival, as depicted in Fig. 4A and Table 4. Four patients were lost to follow-up, and the dates of their last contact were included in the analysis. The global median overall survival (OS) was determined to be 186 days (Table 5). Specifically, 53% of patients were alive 6 months after their initial treatment with MVX-ONCO-1. The survival rates at 12 and 18 months were 25.8% and 23.3%, respectively. Additionally, 20% of patients were still alive 24 months after starting the treatment (Fig. 4A).
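A minimal sketch of how the median OS and the 6-, 12-, and 18-month survival rates quoted above are typically estimated, right-censoring patients lost to follow-up at their last contact; it assumes the lifelines package is available, and the durations below are invented for illustration, not the study data.

    from lifelines import KaplanMeierFitter

    # Hypothetical follow-up times in days; event=1 means death observed, 0 means censored
    durations = [45, 120, 186, 200, 365, 400, 730, 900]
    events = [1, 1, 1, 1, 0, 1, 0, 1]

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=events)

    print(kmf.median_survival_time_)  # Kaplan-Meier estimate of median OS (days)
    print(kmf.predict(183))           # estimated probability of surviving ~6 months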
As our patient cohort is heterogeneous in cancer type, the median OS, the mean OS, and the OS according to the number of patients for each type of cancer were also analyzed separately; this information can be found in Table 5. Among all the different tumor types in our study population, two patients with HNSCC stand out with a remarkable mean OS of 1,156 days, surpassing the survival outcomes of all other tumor types.
Immune response
Qualitative assessment of the immune-mediated response by DTH with irradiated autologous tumor cells was performed before, during, and after MVX-ONCO-1 treatment. After 48 to 72 hours, erythema, induration, and ulceration at the sites of injection were measured. The test was positive if the largest diameter measured was ≥5 mm. A skin punch biopsy was performed at the site of injection. The local reaction observed was associated with strong perivascular inflammation in the punch skin biopsy of the DTH site (Fig. 5A).
Patients were then classified according to their DTH response status. If a patient acquired a positive reaction during and/or after MVX-ONCO-1 treatment, they were classified as DTH positive. If a patient exhibited a negative DTH reaction during and after MVX-ONCO-1 treatment, they were classified as DTH negative. Twenty-two patients (64.7%) were DTH negative, whereas seven (20.6%) were DTH positive. The remaining patients were classified as having inconclusive (3; 8.8%) or deleterious (2; 5.9%) DTH responses, due to missing information or a change from a positive to a negative response, respectively. Interestingly, when positive, DTH was associated with a longer OS, as depicted in Fig. 5B.
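The DTH read-out and patient-level classification described above (a reaction is positive when the largest diameter is at least 5 mm; a patient is positive if positivity is acquired during and/or after treatment, negative if all post-baseline reactions are negative, deleterious if a positive reaction later turns negative, and inconclusive when information is missing) can be summarized in a short sketch; the logic is paraphrased from the text and deliberately simplified.

    def dth_reaction_positive(largest_diameter_mm):
        """A single DTH test is positive when the largest diameter is >= 5 mm."""
        return largest_diameter_mm >= 5

    def classify_dth_patient(during_positive, after_positive, missing_data=False):
        """Simplified patient-level DTH status from the post-baseline tests."""
        if missing_data:
            return "inconclusive"
        if during_positive and not after_positive:
            return "deleterious"  # positive reaction lost over time
        if during_positive or after_positive:
            return "positive"
        return "negative"

    print(classify_dth_patient(during_positive=True, after_positive=True))   # positive
    print(classify_dth_patient(during_positive=False, after_positive=False)) # negative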
Moreover, a quantitative assessment of the immune response by the IFNγ ELISpot assay, using irradiated autologous tumor cells or tumor-associated antigen to stimulate PBMCs, was performed. Fifty percent of patients who had survived beyond 6 months showed an increase in IFNγ spots during treatment compared with baseline (Fig. 5C and E), meaning that treatment with MVX-ONCO-1 can trigger a tumor-specific immune response from PBMCs in this patient population. By contrast, none of the patients with a survival of less than 6 months mounted any specific immune response. Moreover, in patients diagnosed with chordoma and harboring Brachyury-positive tumor cells, PBMCs were also stimulated using Brachyury protein/peptides. Brachyury has been identified as a marker of chordoma cells, suggesting its potential role as an immune response initiator (Fig. 5D and E). Among patients with a survival period exceeding 6 months, 50% exhibited reactivity of their PBMCs against the Brachyury protein. This suggests that the immune response in individuals surviving beyond 6 months may be influenced by unidentified antigens. It is worth noting that patients demonstrating a PR are not necessarily those with the highest ELISpot responses. This suggests that the response to treatment is not solely driven by the T-cell response but can also be supported by other immune responses, such as a humoral response.
Discussion
Significant breakthroughs in the biological understanding of the immune system have led in the past years to the development of therapeutics such as antibody-drug conjugates, immune checkpoint inhibitors (ICI), chimeric antigen receptor T cells, and RNA vaccines (7, 19-27). However, therapeutic cancer vaccines have not achieved significant success despite decades of research. Analyzing preclinical cancer models and past clinical vaccine development can help understand the factors involved in efficient anticancer vaccination and guide the crafting of a clinically meaningful approach recapitulating these required features. Optimal priming conditions are crucial to set the immune system for an efficient effector phase in which antigen-specific T cells and antibodies can efficiently recognize and destroy tumor cells. Creating this favorable niche for optimal antigen presentation and subsequent processing by antigen-presenting cells requires a very finely tuned setting. This goal is achieved by a sustained, controlled, low dose of GM-CSF at the subcutaneous vaccination site. In fact, preclinical studies have shown that subcutaneous injections of GM-CSF at the vaccination site can significantly increase the infiltration of dendritic cells (DC) in the regional lymph nodes that drain the site of vaccination (28, 29). Genetically engineering tumor cells to secrete biologically active GM-CSF has also shown success in generating specific, long-lasting, protective, systemic antitumor immune responses in preclinical studies.
However, recombinant GM-CSF administered subcutaneously has not consistently proven effective. The delivery method is critical, as low-dose, localized administration of GM-CSF displays a potent adjuvant effect, whereas high doses can lead to immunosuppressive effects through the recruitment of tolerogenic DCs and myeloid-derived suppressor cells (40-43), which may explain many negative trials. To address these challenges, a biocompatible macrocapsule has been developed for subcutaneous implantation. It contains a proprietary immortalized cell line that continuously produces stable concentrations of GM-CSF, providing standardized delivery of a low-dose adjuvant over a week. In comparison with other trials, in which daily high doses ranging from hundreds of micrograms to milligrams of GM-CSF are utilized, the average amount of secreted GM-CSF per macrocapsule in our study was below 250 ng and remained very low. This indicates that the quantity of GM-CSF delivered by our macrocapsules is 1,000 times less than that used in other studies.
Moreover, this macrocapsule acts as a physical barrier, preventing the encapsulated cells from being recognized and destroyed by the patient's immune system. Although cell debris and/or subcellular components from the encapsulated cells may be found outside the macrocapsule and could potentially be processed by antigen-presenting cells to stimulate patient immunity, the encapsulated cells themselves will remain shielded from the patient's immune reaction, ensuring the continued functionality of our encapsulated cell line.
Additionally, MVX-1 cells lack HLA expression, making it highly improbable for enrolled patients to develop an immune response against MVX-1 cells.
Another important factor is selecting the antigenic target. Despite the characterization of various tumor-associated or tumor-specific antigens, clinical trials have failed to demonstrate significant clinical benefits across tumor types.
Recent interest lies in identifying and synthesizing patient-specific tumor neoantigens, particularly using mRNA liposomal formulations. Although single-agent activity has been limited, combining neoantigens with ICI has shown promise in patients with melanoma. However, immune evasion of neoantigens has been observed in lung and colorectal cancer studies, likely due to ongoing immune editing processes (44, 45).
In summary, understanding the factors involved in efficient anticancer vaccination, such as optimal priming conditions, precise delivery of GM-CSF, and selection of appropriate antigenic targets, is crucial for success. These insights have the potential to revolutionize personalized immunotherapies and improve outcomes for patients with various cancers.
The first generation of MVX-ONCO-1 takes a unique approach by using irradiated autologous tumor cells as the antigenic repertoire instead of selecting specific tumor antigens. This allows for quick processing and potential exposure to hundreds of tumor-specific targets without a selection process. In combination with two macrocapsules delivering GM-CSF, MVX-ONCO-1 aims to recapitulate the key features required in preclinical models. This approach has been evaluated in a first-in-human phase I clinical trial involving subjects with advanced, metastatic, progressive, refractory tumors.
The study demonstrated that MVX-ONCO-1 has a very safe and well-tolerated profile. It can be administered in repeated doses without any evidence of a drug effect on vital signs, blood chemistry, hematology, or urinalysis evaluations. Notably, no SAEs related to the study drug were reported, and there were no clinically significant local or systemic reactions observed. Local inflammatory reactions, such as swelling, itching, redness, or pain, could occur at the site of macrocapsule implantation and are directly linked with the biological activity of the GM-CSF. The MVX-ONCO-1 treatment was administered at the Oncology Department of Geneva University Hospitals by qualified physicians following Good Clinical Practices and consists of the following two components: (1) Patients' autologous tumor cells, harvested from either the primary tumor or a metastasis by a surgical procedure or an aseptic tap from malignant ascites or pleural fluid. Tumor cells are processed into a single-cell suspension through physical (gentleMACS Octo Dissociator) and enzymatic digestion (Collagenase NB6). A minimum of 27 × 10^6 harvested cells is required for enrollment [six treatments + three delayed-type hypersensitivity (DTH) tests]. After 100 Gy irradiation, aliquots of 4 × 10^6 cells are prepared and stored frozen in liquid nitrogen. Each dose is prepared as a single-cell suspension and resuspended in 0.5 mL Hank's Balanced Salt Solution + calcium + magnesium for subcutaneous injection.
FIGURE 2
FIGURE 2 Trial profile. The screened population included all patients who were screened. The safety population comprised the 34 treated patients who received at least one dose of the study treatment. The ITT set comprised all patients who had received at least one dose (defined as at least a 1-day application without removal of the macrocapsules) of the study treatment and for whom a week 6 post-baseline efficacy measurement was available.
One patient, who responded to MVX-ONCO-1 [partial response (PR)] with prolonged clinical benefit, was re-enrolled because of a late relapse more than 2 years after the first treatment and is depicted as two patients: "MVX-01-21" then "MVX-01-38".
FIGURE 3
FIGURE 3 Antitumor activity of MVX-ONCO-1. A, Waterfall plot of the maximum percentage of tumor size change in each patient. B, Percentage changes from baseline of tumor size (sum of longitudinal change of recorded lesions) over time in each patient.
FIGURE 4
FIGURE 4 Swimmer plot showing OS of each patient since the start of MVX-ONCO-1 treatment. *, Number of patients who have theoretically reached the time point of interest at the cutoff date. Patients lost to follow-up are included.
FIGURE 5
FIGURE 5 In vivo and ex vivo immune responses against autologous irradiated tumor cells or tumor-associated antigens. A, Hematoxylin and eosin staining of a skin punch biopsy of the DTH site showing a representative perivascular area of immune infiltration (black arrow). B, Kaplan–Meier survival curve and median OS according to DTH status. C and D, IFNγ ELISpot of PBMC secretion after stimulation. Total PBMCs (10^5 per well) from patients at three different time points (before, during, and after treatment) were incubated in the presence of autologous irradiated tumor cells (C) or Brachyury protein (D). Representative plate images for MVX-01-44 PBMCs, nonstimulated (left), stimulated with irradiated tumor cells (middle), or with Brachyury protein (right), are depicted in E. The final number of spot-forming units (SFU) was calculated after background subtraction (negative controls). All conditions were run in triplicate. Patients with PR are highlighted in red.
TABLE 1
Patient demographics and baseline characteristics. Table 2 presents the most commonly observed AEs, reported in three or more patients. The severity of AEs varied, with four patients (11.8%) experiencing life-threatening events and 15 patients (44.1%) experiencing severe events, all related to disease progression. No patient withdrew from the study because of adverse events.
None were considered related to the study treatment. One patient had two episodes of supraventricular tachycardia. Finally, a total of 384 macrocapsules were implanted in 34 patients. Of all these implanted macrocapsules, 17 (4.4%) were slightly bent at removal, which had no impact on the treatment or on the patient's health.
TABLE 2
Summary of most common AEs (in three or more patients)
TABLE 3
Tumor responses according to RECIST
TABLE 4
Summary of OS of each patient since the start of MVX-ONCO-1 treatment. Columns: Total N; N (%) alive; N (%) lost to follow-up.
Number of patients who have theoretically reached the time point of interest at the cutoff date. Patients lost to follow-up are included.
|
2024-07-24T06:17:48.554Z
|
2024-07-23T00:00:00.000
|
{
"year": 2024,
"sha1": "2f969f4d05f3654f115b0a1effaac504e5bc220d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1158/2767-9764.crc-24-0150",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "861d4c86991452a40f0345dba39d60edd2dabde3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
216207074
|
pes2o/s2orc
|
v3-fos-license
|
The Key To Successful Early Childhood Educators: Performance Study of The Raudhatul Athfal (RA) Teacher in Yogyakarta.
This research aims to identify how outstanding RA teachers in Yogyakarta improve their performance, which is the key to their success. This is qualitative research using a psychological approach carried out directly on the object under study, to obtain data on aspects of teacher performance, so that this performance improvement can serve as an example for other teachers. Data were collected through interviews, documentation, and observation. The results reveal that outstanding RA teachers improve their performance through 6 (six) steps, namely: 1) recognizing that there are still shortcomings in performance, 2) knowing the weaknesses and shortcomings in the seriousness of teaching, 3) identifying the causes of the deficiencies, especially those related to performance itself, 4) developing a performance plan, 5) assessing whether the problem has been resolved (problem solving), and 6) starting again from the beginning, if needed.
INTRODUCTION
The paradigm of excellence demands a breakthrough in thinking, especially when it requires quality output able to compete in an open civilization (Tilaar, 1999). Teacher performance runs parallel with the development of educational quality, yet quite a few teachers work below the competency standards that have been set. This is not because they are incapable, but because a conducive and commendable work culture has not been built; it stems from low work enthusiasm, which, like a sine curve, reaches a saturation point at some time if there is no curative and preventive effort from either the supervisor or the teacher himself (Barnawi & Arifin, 2014). Performance is the practice of competence in the form of real work, not individual characteristics such as abilities and talents. High-performing teachers are teachers whose productivity is above the prescribed standards, while teachers with low performance levels are unproductive (Priansa, 2014).
The Indonesian people consciously develop education based on noble character. As Soekarno, the first President of Indonesia, said, the Indonesian nation was built by prioritizing character building, because this is what makes Indonesia advanced, victorious, great, and dignified; if character building is neglected, this nation will become a nation of coolies (Hendri, 2016). The successful implementation of learning in line with the expectations of the community and government is largely determined by the mastery of stakeholders, especially teachers. Teaching, as the work of educators in schools, is a special profession. The educator profession is not merely a type of paid work; beyond that, the teaching profession carries dedication, mission, vision, and even a form of worship that has more value than other professions or positions. The educator (teacher) is a profession with special action, vision, and mission as the main actor in empowering people (Radno, 2011).
Quality education in Indonesia requires qualified teachers; the low quality of education is inseparable from problems of teacher quality. This quality can be seen, at the very least, from the results of the 2012 certification competency test, which showed how low teacher competency was at all levels of education and reinforced the statement that, nationally, teacher competency in Indonesia is still low. It cannot be denied, however, that qualified teachers will produce quality students. Although teachers often complain about salaries that are not yet adequate, it turns out that teacher competencies in Indonesia are not always dependent on the wages they receive (Jatmiko, 2017).
From Indonesian independence until the 1960s, the position of teacher was highly respected. This condition cannot be separated from the programs implemented by the government in various ways to attract the best youth to become teachers. Among the efforts at that time was the provision of dormitories and service contracts for students or prospective teacher students. This appears to have been a form of incentive that invited young people to choose the path of education to become teachers (Soedijarto, 2002).
METHODS
The analysis in this research uses a qualitative descriptive approach, that is, research procedures that obtain descriptive data in the form of oral or written words regarding the circumstances, individual traits, and observable characteristics of a particular group. The detailed qualitative descriptive analysis began as soon as the first data were collected, and the analytical approach in this study was a qualitative approach using interactive analysis. The analysis steps are carried out sequentially, starting from editing, reducing data, classifying data, and presenting data. Data analysis starts from the first problem, then the second and third, as determined by the researcher. The researcher's analysis refers to data reduction, then data presentation, and ends with drawing conclusions (Miles & Huberman, 2014).
This research also examines the development of the social capital of the madrasah in involving the community in various activities to improve the quality of learning and the development of physical facilities. This type of research is qualitative, and the approach used is phenomenology. This approach is used because it gives space to the data as a phenomenon: it lets the phenomenon speak for itself and treats the phenomenon as a text that invites questions and is then interpreted. The phenomenological approach seeks to break away from all initial perceptions and assumptions created by the researcher. There are three aspects considered in this approach: first, the unconscious individual; second, the language and expression that produce various narratives, rules, and conceptualizations in society; and third, signs and symbols. Signs are objects that carry information and communication in certain contexts, whereas symbols carry the meaning behind the sign. Through the phenomenology of signs and symbols, these elements relate to, shape, and influence individuals when they interact and behave, as described by A. Schutz (1967: 33-35).
The study is qualitative in character, so the natural setting becomes a direct source of data and the researcher is the key instrument, as described by Robert C. Bogdan and Sari Knopp Biklen (Bogdan & Biklen, 2006, pp. 27-30). The research is descriptive in nature. Because there was no representative from the city, the additional quota was given to him as outstanding teacher 4 (hope I).
RESULT AND DISCUSSION
In order to improve the performance of these outstanding RA teachers, Anton Ariyadi has stated that there are 6 (six) steps that can be taken by outstanding RA teachers, namely: 1) recognizing that there are still deficiencies in performance, 2) knowing the weaknesses and strengths in the seriousness of teaching, 3) identifying the causes of the deficiencies, especially those related to performance itself, 4) developing a performance plan, 5) assessing whether the problem has been resolved or not (problem solving), and 6) starting again from the beginning, if needed.
Personality also shapes the relationship between teachers and students. Therefore, personality is a determinant of the high or low dignity of a teacher. According to Anton Ariyadi: "Personality is a person's behavior and characteristics, such as mindset, behavior, interests, abilities, and potential. This is what distinguishes a person from others, because everyone has their own personality that differs from that of others."
This is implemented when teaching and when interacting with fellow teachers and guardians at school, for example by being sociable, friendly, and confident (Ariyadi, 2018).
The teacher's personality is reflected in his actions and attitudes in guiding and fostering students. The better a teacher's personality, the better his dedication in carrying out his responsibilities and duties as a teacher; this is reflected in the teacher's strong dedication in carrying out his functions and duties as an educator. One of the cornerstones of personality formation is success, which is a result of personality, general image, attitude, and skills polished through the process of human interaction (Drost, 1998).
There are three elements of personality, namely: (1) material, that is, all of one's powers (abilities) together with their features (talents); (2) structure, that is, one's normal properties as well as the characteristics of their form; and (3) nature or quality, namely the process of drive or encouragement (Suryabrata, 2001).
Meanwhile, according to Freud (Sigmund, 2011), personality actually consists of: 1) the id (Das Es), a biological aspect; it is the original system in personality, forming the subjective inner world of the human being that has no direct connection with the objective world; 2) the ego (Das Ich), a psychological aspect, which arises because the needs of the individual interact with the real world; and 3) the superego (Das Über-Ich), a sociological aspect, representing the ideals of society and traditional values as interpreted by parents for their children, including rules, orders, and prohibitions.
Teaching Ability
Some aspects of the mental example set by outstanding RA teachers will, in particular, have a strong influence on students' thinking and on the learning climate that the teacher creates. The teacher understands that the attitudes and feelings of students will contribute to and have a positive influence on the learning process.
Competent teachers should have an innovative spirit, leave behind a conservative attitude, be capable (able, smart, skilled) and creative, and be not defensive but able to make students take more initiative and be more responsive (Sutadipura, 1994).
Professional Development
The teaching profession, in its development, is attracting increasing interest along with the transformation of science and technology, which demands that teachers be ready so as not to stumble and fall behind. According to Pidarta, a profession is an occupation like various other jobs, but the work is offered to the public for general purposes, not for certain groups or individuals. In doing the work, one must meet the relevant norms; people who do professional work are experts who already have strong powers of thought, skills, and high knowledge. In addition, they are required to be accountable for their work and all their actions related to the profession (Pidarta, 2000). Among the further characteristics of a profession are: orienting toward social assistance or serving the community, not just obtaining financial benefits or salaries; (6) not offering or advertising one's expertise to obtain clients; (7) being a member of the professional body; and (8) the professional organization establishes the requirements for admitting members, imposes sanctions, monitors members' behavior, strives for members' welfare, and fosters the profession of its members (Pidarta, 2000).
Teacher professional development is an important factor in the effort to keep up with the demands of and changes to the teaching profession.
The development of teacher professionalism demands management capabilities as well as strategies for their application and mastery of science. Maister expressed the opinion that professionalism is not just a matter of having technology, science, and management; professionalism also involves the required behavior and high skills (Maister, 1997). Professionalism standards are a form of commitment to obtaining teachers who can foster students in harmony with community support. To achieve the title of professional teacher, a teacher is urged to meet 5 (five) conditions, including: (1) the teacher has a commitment to the learning process and to students; (2) the teacher understands in depth the subjects or materials to be conveyed and how to teach them to students; (3) the teacher monitors students' learning results or products with various assessment techniques. When the professionalism requirements above are fulfilled, the role of the initially passive teacher turns into that of a dynamic and creative teacher, so that setting professionalism requirements will change the role of the teacher from a verbalistic (clever-speaking) orator into a dynamic force in realizing a conducive learning environment (Semiawan, 1991).
Community relations with schools is a system of community communication with schools to promote community understanding of education activities and needs, as well as to move interest, participation for the community in school improvement and development. This community and school relationship is a cooperative effort to develop and maintain communication in an efficient two-way explanation and mutual understanding between schools, school personnel and with the community, where the purpose of community relations with schools can be seen from two dimensions: community needs and school interests (Mulyasa, 2004).
In carrying out community relations with schools, it is necessary to follow several principles as guidelines for teachers and principals in order to achieve the desired goals. The principles of this relationship include: (1) the principle of authority, which concerns the school's relationship with the community; and the principle of accuracy, which means that what the school gives to the community is appropriate and fitting, in terms of the time, content, and media utilized as well as the objectives to be achieved (Soetjipto & Kosasi, 2009).
In order for public relations to be continuous and well established, Mrs. Rufiyati Ambar Ningrum gave the input that it is necessary to develop the profession of RA teachers in relation to the community: "Outstanding RA teachers, in addition to being able to carry out their respective tasks in the RA, are also expected to be able to perform the tasks of their relationship with society. They can understand all the activities of their community, understand its culture and customs, know its aspirations, place themselves within society, communicate with its members, and help give birth to their dreams. Achieving this requires ability and behavior of the RA teacher in accordance with the local social structure, because when the teacher's behavior and competence do not match the social structure of society, there will be clashes of understanding and misunderstanding, and people may even fail to understand the program implemented by the school or RA; this results in a lack of support or community assistance to schools, even though the community and schools have the same interests and a strategic role in educating and producing quality students" (Interview with Rufiyati Ambar Ningrum, third subject as an accomplished RA teacher 3, on April 20, 2018).
Regarding the creation of a challenging atmosphere, Anton Ariyadi stated: "The atmosphere is filled with good ties between parents or guardians of students and the surrounding community. This is intended to foster active and participatory roles, as well as a sense of shared responsibility for education. Only a small amount of time is spent at school with teachers, and most of it is spent in the community. In order for this outside education to be well established, alongside what RA teachers do in school, synergy between teachers, parents, and the community is needed. The teacher's obligation to maintain contact with the community is part of the teacher's task in educating students and improving his profession as a teacher. The school is jointly owned by the residents of the school itself, the government, and also the community" (Interview with Mr. Anton Ariyadi, fourth subject as an accomplished RA teacher 4, on June 28, 2018).
Working Climate
According to Sri Ngadiyati: "A negative climate manifests itself in the form of contradiction, competition, opposition, jealousy, selfishness, ignorance, and individualism; this negative climate can reduce the level of teachers' work productivity. On the contrary, a positive climate shows close relationships with one another, in which mutual assistance and complementarity occur, people work in synergy to complement each other, and all problems that arise are resolved together through deliberation.
A positive climate shows that all activities run harmoniously in conditions of peace and calm, providing a sense of peace and comfort to personnel or employees, and especially to teachers (Interview with Mrs. Sri Ngadiyati, first subject as an accomplished RA teacher 1, on June 28, 2018) (Owens, 1991).
Discipline
The Liang Gie defines discipline as an orderly situation in which the many people gathered in an organization are obedient and subject to the various regulations that have been decided, with a sense of pleasure and responsibility (Gie, 1972). Good, in his education dictionary, defines discipline as: a) the result or process of control or direction of desires, interests, or impulses in order to achieve goals and more perfect behavior; b) the active, tenacious, and self-directed pursuit of selected activities, even in the face of trials and obstacles; c) direct and absolute control of actions or behavior by means of rewards or punishments; d) the suppression of impulses (motivation) in a painful and unpleasant way (Carter, 1959).
From the aforementioned notions, it can be summarized that discipline is compliance, precision, and adherence to a rule, carried out consciously without coercion or pressure from other parties; it also means a condition that is orderly and proper, without violations, whether direct or indirect. The purpose of discipline, according to Suharsimi Arikunto, is that the school program can run effectively in a peaceful, calm atmosphere, and that teachers and employees in the school feel comfortable and satisfied because their needs are met. Meanwhile, the Ministry of Education and Culture stated that there are 2 (two) objectives of discipline: (1) the general objective, namely that the curriculum runs well and supports the development of educational quality, and (2) specific objectives, which concern, among others, the principal. Discipline and teacher performance are closely related, because only with strong discipline can activities be carried out in accordance with existing regulations. Therefore, in an effort to prevent the occurrence of indiscipline, it needs to be addressed by developing teacher welfare, exemplary leadership, giving threats, self-control and prevention, implementing corrective actions, maintaining order, and fostering a positive strategy toward discipline. Efforts to enforce discipline include: (1) instilling positive actions, (2) self-control and prevention, and (3) maintaining order (Nainggolan, 1990).
Prosperity
One factor influencing teacher performance is welfare; this factor raises the quality of performance, because the more prosperous a person is, the greater the possibility of improving his or her performance. Adequate fulfilment of a person's various needs brings satisfaction in carrying out any task (E. Mulyasa, 2004). Teacher professionalism is not only seen in the teacher's ability to develop and provide good learning to students, but is also reflected in whether the government provides an appropriate salary. If the welfare and needs of teachers are properly met by the government, teachers will not be absent in order to seek additional income outside school (Denny Suwarja, 2003).
CONCLUSION
The RA teacher is an example of success for early childhood Islamic education with character and good morals, has a work ethic, and is regarded as a person who is highly instrumental in achieving the educational goals of RA, which is a reflection of the quality of education in the future. The implementation of the duties and obligations of RA teachers is inseparable from influences within and outside themselves that have an impact on the success of outstanding RA teachers in Yogyakarta. From the description of the performance of outstanding RA teachers in Yogyakarta above, the authors conclude the following: performance differences between one person and another in a work situation arise from differences in individual characteristics, and the same person can produce different performance in different situations. This all shows that the performance of outstanding RA teachers is largely influenced by two things, namely personal, individual factors and factors in the surrounding situation.
The factors that can affect the performance of outstanding RA teachers are ability and motivation. In psychology, a teacher's ability consists of real ability (knowledge + skill) and potential ability (IQ). Encouragement or motivation is manifested in the attitude of a person (teacher) in approaching work or teaching. Motivation is a condition that moves a person towards achieving educational goals.
Factors that support the performance of outstanding RA teachers can be grouped into two elements, namely internal factors and external factors. Internal factors within the outstanding RA teacher include: intelligence, skills, talents, interests and abilities, motives, health, personality, and goals and ideals in work. External factors include: the family environment, the work environment, communication with the principal, facilities and infrastructure, and teacher activities in the classroom.
The activities of outstanding RA teachers include active participation in the field of administration; in this field the teacher has many opportunities to take part in all school activities, including: 1) improving the philosophy of education, 2) adjusting and improving the curriculum, 3) planning supervision activities, and 4) planning various employment policies. All this work is done together with other teachers, through deliberation. To develop performance, the teacher looks to the principal (leader). Whether the learning process goes well or badly depends on several factors, one of which is the supervisor carrying out supervision of teacher performance. The keys to the success of outstanding RA teachers are: personality and dedication, teaching skills, professional development, communication and relationships, relationships with the community, working climate, discipline, and welfare. Special thanks go to the members of the research team who contributed to this research project.
Molecularly targeted therapy for advanced hepatocellular carcinoma - a drug development crisis?
Hepatocellular carcinoma is the fastest growing cause of cancer related death globally. Sorafenib, a multi-targeted kinase inhibitor, is the only drug proven to improve outcomes in patients with advanced disease offering modest survival benefit. Although comprehensive genomic mapping has improved understanding of the genetic aberrations in hepatocellular cancer (HCC), this knowledge has not yet impacted clinical care. The last few years have seen the failure of several first and second line phase Ⅲ clinical trials of novel molecularly targeted therapies, warranting a change in the way new therapies are investigated in HCC. Potential reasons for these failures include clinical and molecular heterogeneity, trial design and a lack of biomarkers. This review discusses the current crisis in HCC drug development and how we should learn from recent trial failures to develop a more effective personalised treatment paradigm for patients with HCC. Core tip: This review discusses the current drug therapy landscape for advanced hepatocellular carcinoma, in particular the reasons for failure of several clinical trials of molecularly targeted therapy and future directions of research to address these problems.
INTRODUCTION
Hepatocellular cancer (HCC) is the sixth most prevalent cancer worldwide and accounts for over 745000 deaths a year [1] . Despite the implementation of screening programs for high-risk individuals, the majority of patients present with incurable disease. Median overall survival for advanced disease remains poor at less than 12 mo and there is an urgent need for more effective treatments [2] . Global epidemiological patterns vary depending on the prevalence of risk factors. Incidence rates are highest in East Asia in areas where hepatitis B and C are endemic [3] . However, improved management of early viral hepatitis in Japan has seen a reduction in new HCC cases [4] . By contrast, the upward trends of HCV, obesity and metabolic syndrome in North America and Europe contribute to HCC being the fastest growing cause of cancer related mortality in these regions [5] . Resection, radiofrequency or microwave ablation, and liver transplantation comprise the mainstay of treatment for early disease, offering the only chance of cure, but only one third of patients present with disease suitable for these treatments [6] . Loco-regional therapy with trans-arterial chemoembolization (TACE) can lead to sustained disease control for intermediate stage HCC [7,8] . Sorafenib, a multi-targeted tyrosine kinase inhibitor (TKI), remains the only systemic therapy that is effective in advanced disease, offering marginal survival benefit without significant improvement in cancer related symptoms or quality of life [2] . After many years of disappointing results with chemotherapy, sorafenib was thought to herald a new era in HCC treatment, with great optimism for molecularly targeted therapies. Disappointingly, several negative first and second line phase Ⅲ clinical trials ensued. However, the combination of recent extensive genomic studies and biomarker based clinical trials provides hope for the development of a more personalised treatment paradigm. This review discusses the current concepts and management of advanced HCC with a particular focus on the failure of molecular targeted therapy beyond sorafenib and outlines how this should be addressed.
Current therapy for advanced disease
Despite only marginal benefits with chemotherapy reported in single arm studies, the lack of alternative treatments meant its use was routine prior to the advent of sorafenib. Challenges with toxicities (especially in patients with underlying liver disease) led to chemotherapy being reserved for patients with good performance status and preserved hepatic function. Single agents such as doxorubicin, cisplatin and fluorouracil offer response rates of 10% [9][10][11]. This increases to 20% with combination regimens, none of which impact survival [9,12]. The recently reported EACH trial, a phase Ⅲ study conducted in China, Taiwan, Korea and Thailand, randomly assigned 371 patients with advanced disease to receive either combined oxaliplatin and fluorouracil/leucovorin (FOLFOX4) or doxorubicin [13]. The trial failed to demonstrate a significant survival difference between the arms, although a trend towards improved outcomes with FOLFOX4 was noted (median overall survival was 6.4 mo for FOLFOX4 and 4.97 mo for doxorubicin; P = 0.07; HR = 0.8; 95%CI: 0.63-1.02).
The search for more efficacious treatments eventually led to two large randomised phase Ⅲ trials that, in close succession, reported a significant survival benefit with sorafenib. The first, conducted in a European, Australian and American population, demonstrated a median overall survival (OS) of 10.7 mo for patients treated with sorafenib (400 mg BD) compared with 7.9 mo for placebo (HR = 0.69; 95%CI: 0.55-0.87; P < 0.001) [2]. The second, conducted in the Asian-Pacific region, reported that treatment with sorafenib led to a median overall survival of 6.5 mo compared with 4.2 mo (HR = 0.68; 95%CI: 0.50-0.93; P = 0.014) [14]. The survival advantages in both trials were modest and neither study established any improvement in cancer symptoms or quality of life. Yet this benefit was sufficient for sorafenib to become the new standard of care for patients with advanced disease. Data extracted from the prospectively maintained GIDEON database (Global Investigation of Therapeutic Decisions in Hepatocellular Carcinoma and of its Treatment with Sorafenib) showed that, in 3202 patients with HCC treated with sorafenib, adverse events were comparable between patients with Child-Pugh A and Child-Pugh B cirrhosis [15]. Yet the frequency of serious adverse events was higher in the Child-Pugh B group (60.4% for Child-Pugh B vs 36.0% for Child-Pugh A) and median overall survival was shorter: 5.2 mo (4.6-6.3) for Child-Pugh B vs 13.6 mo (12.8-14.7) for Child-Pugh A (Table 1).
Four separate phase Ⅲ trials exploring different multitargeted TKIs have now failed to show superior outcomes to sorafenib. HCCs are vascular tumours, and both VEGF and angiopoietin-2 (Ang2) were independent prognostic markers during the SHARP trial and have been associated with tumour growth and metastatic spread [16]. The success of sorafenib was thought to be predominantly related to its anti-angiogenic properties, and subsequent studies aimed to identify more potent anti-angiogenic drugs. Sunitinib, a multi-kinase inhibitor targeting VEGFR, PDGFR, c-KIT and FLT-3, has been approved for use in gastro-intestinal stromal tumours and renal cell carcinomas and was more potent than sorafenib in preclinical models [17,18]. Phase Ⅱ studies showed modest benefit in HCC at best, although they did highlight potential biomarkers such as interleukin-6, stromal-derived factor-1 alpha and soluble c-KIT, as changes in tumour vascular permeability and circulating inflammatory molecules were associated with poorer outcome [19][20][21]. Adverse events in these phase Ⅱ studies were concerning, with liver related toxicities including encephalopathy and hepato-renal syndrome, and 5%-10% of patients died from treatment related causes. The daily dose of 50 mg that is routinely used in other tumour types was deemed too high for patients with HCC, in whom it precipitated liver toxicities including portal hypertension, encephalopathy, oesophageal variceal bleeding, ascites and thrombocytopenia. A subsequent head-to-head phase Ⅲ study of 1074 patients randomised to either sunitinib or sorafenib was terminated early due to both futility and safety concerns [22]. The most frequent grade 3/4 adverse events in the sunitinib group were thrombocytopenia (29.7%) and neutropenia (25.7%), and in the sorafenib group hand-foot syndrome (21.2%). Overall survival was also significantly lower in the sunitinib arm (7.9 mo vs 10.2 mo; P = 0.0014). Temporary treatment discontinuation was more frequent with sunitinib (76.6% vs 58.7%). The failure of sunitinib was likely related to a combination of inadequate dosing, toxicities and trial design, and highlights the need for caution in overinterpretation of phase Ⅱ data and the decision to move to phase Ⅲ trials.
Pre-clinical studies identified linifanib as a more potent dual vascular endothelial growth factor receptor (VEGFR) and platelet derived growth factor receptor (PDGFR) inhibitor than sorafenib (PDGFR: IC50 = 25 nmol for linifanib and 57 nmol for sorafenib; VEGFR: IC50 = 8 nmol for linifanib and 90 nmol for sorafenib) [23]. A single arm phase Ⅱ trial in the first line setting resulted in a median overall survival of 9.7 mo (10.4 mo in patients with Child-Pugh-A status), which led to a non-inferiority phase Ⅲ trial with sorafenib [24]. The study of 1035 patients failed to reach its end-point, with an overall survival of 9.1 mo for linifanib and 9.8 mo for sorafenib (HR = 1.04; 95%CI: 0.89-1.22; P = 0.001) [25]. Toxicities, including hypertension and hepatic toxicities such as encephalopathy, were also higher in the linifanib arm.
A single arm first line phase Ⅱ study of 55 patients treated with brivanib, an ATP competitive inhibitor of several kinases including VEGFR2 (IC50 = 25 nmol), FGFR-1 (IC50 = 148 nmol) and VEGFR1 (IC50 = 380 nmol), resulted in a median overall survival of 10.0 mo [26,27]. Phase Ⅱ studies confirmed that brivanib was well tolerated; one patient had a complete response, three had a partial response and twenty-two had stable disease. Yet BRISK-FL, the subsequent phase Ⅲ direct comparison trial of brivanib and sorafenib, failed to establish a significant survival benefit (9.5 mo for brivanib vs 9.9 mo for sorafenib; HR = 1.06; P = 0.31) [28]. Due to the trial design, in order to demonstrate non-inferiority, brivanib needed to produce a hazard ratio between 1 and 1.08, which it narrowly failed to reach. The BRISK-FL trial highlighted the difficulties in extracting comprehensive survival data from non-randomised phase Ⅱ trials.
Grade 3/4 toxicities for sorafenib and brivanib were hyponatraemia (9% and 23% respectively), elevated liver enzymes (17% and 14%), fatigue (7% and 15%) and hand-foot reaction (15% and 2%). Even if this trial had met its end-point of non-inferiority, the significant toxicity and economic profiles were not more favourable than sorafenib, and thus would have been of little meaningful clinical benefit.
Erlotinib, an epidermal growth factor receptor (EGFR) TKI, was tested in a first-line phase Ⅲ trial in combination with sorafenib, compared to placebo/sorafenib, in a study of 720 patients with advanced disease [29]. The combination had not previously been formally tested in phase Ⅱ trials, with only two single arm phase Ⅱ studies demonstrating modest disease control [29][30][31]. The combined treatment did not improve overall survival (9.5 mo compared with 8.5 mo for sorafenib alone; HR = 0.92; P = 0.2). Toxicities in the combination arm were also higher, resulting in a reduced median treatment duration that may have contributed to its diminished efficacy. This trial demonstrates both the danger of proceeding to large-scale phase Ⅲ trials without a clear signal of efficacy from earlier phase studies and the difficulties in combining therapies for HCC (especially for drugs that have overlapping toxicities). Robust HCC-specific phase Ⅰ/Ⅱ studies are needed to identify optimal dosing of combination regimens (Table 2).
FGF has been pursued as a potential target in HCC and recent data suggests the FGF signalling pathway may play a key role in the development of resistance to anti-VEGF therapies by activating alternative proangiogenic signalling pathways [32] . Forty-six patients who had not responded to prior anti-angiogenic therapies were treated with brivanib in a single arm phase Ⅱ study [33] . The results were promising with a median overall survival of 9.7 mo.
A subsequent phase Ⅲ trial, conducted in parallel to the BRISK-FL trial, compared brivanib with placebo as second line treatment and failed to meet its end point [34]. Patients treated with brivanib had a median overall survival of 9.7 mo compared with 8.2 mo in the placebo arm (P = 0.3). Yet significant improvements were seen in the secondary end points of overall response rate (10% for brivanib vs 2% for placebo; P = 0.003), disease control rate (61% vs 40%; P ≤ 0.001) and alpha-feto protein reduction in 74% of patients with elevated baseline levels (> 50% reduction seen in 54% vs 7%). These findings indicate that brivanib has anti-tumour activity despite the negative primary outcome. Furthermore, despite stratification, the placebo cohort had fewer patients with macro-vessel invasion and a numerically lower median AFP level. The unexpectedly long survival of patients in the placebo cohort has been cited as one of the reasons for treatment failure. As expected, there were also higher rates of treatment discontinuation and elective patient withdrawal from the brivanib arm, which may have reduced efficacy in this group.
Mammalian target of rapamycin (mTOR) is upregulated in many solid tumours including HCC and appears to have a critical role in pathogenesis [35,36] . A second line study with the mTOR inhibitor everolimus, offered no survival advantage over placebo (7.6 mo for everolimus vs 7.3 mo; HR = 1.05; P = 0.68) [37] . Ramucirumab is a fully human monoclonal antibody against vascular endothelial growth factor receptor 2 (VEGFR2), which also failed to improve survival compared with placebo (median overall survival for ramucirumab was 9.2 mo compared with 7.6 mo; HR = 0.86, P = 0.13) in the REACH trial [38] . However, a pre-planned sub-group analysis revealed that in patients with elevated baseline alpha-feto protein (AFP) of more than 400 ng/mL, ramucirumab extended both overall and progression free survival. Grade 3 toxicities that occurred more frequently in the ramucirumab arm included hypertension (12% compared with 4%) and fatigue (5% compared with 2%), but its toxicity profile is otherwise favourable compared to the multi-targeted TKIs. Due to this data, a phase Ⅲ trial with second line ramucirumab in a select population with AFP > 400 ng/mL is ongoing.
Clinical and molecular heterogeneity
So far all phase Ⅲ trials have unexpectedly failed to reach their end-points. There are several reasons for this.
In the majority of patients with HCC, the cancer arises predominantly as a consequence of liver injury secondary to a variety of causes. It is clear that underlying liver pathology affects both outcome and treatment response, suggesting trials need to be stratified according to aetiology as well as Child-Pugh status, histological grade and stage [39] . Whilst patients with hepatitis B had longer overall survival and shorter time to progression following treatment with sorafenib in the SHARP trial, these results may have been confounded by the imbalance in numbers between patients with hepatitis B and C [2] . Without prior stratification, it is difficult to analyse the survival between sub-groups, highlighting the need for careful trial design.
Limited understanding of oncogenic drivers means all recent negative phase Ⅲ trials were for "all comers", yet there is marked molecular heterogeneity amongst HCC tumours. Extensive genomic studies have revealed multiple genetic aberrations, with more than 30 somatic mutations per tumour [40,41]. The challenge lies in distinguishing which are oncogenic drivers and which are bystander passenger mutations. Once drivers are identified, trials can be tailored to pertinent pathways. However, several studies have challenged the idea that single biopsies can represent the mutational landscape of the whole cancer. With highly mutated tumours such as HCC, the key is finding the so-called "trunk" mutations that exist in all tumour sites [42]. Even if a driver is found, inhibiting pathways may induce resistant mutations. Whilst "liquid" biopsies evaluating circulating DNA are under evaluation, further research is needed to validate these techniques before their use in the clinical setting [43]. One of the barriers to drug development is that many previous HCC trials did not mandate a tissue diagnosis, relying on clinical criteria alone. Several studies have now highlighted histological changes following treatment with loco-regional therapy such as TACE. In a prospective analysis of 80 nodules found in explant livers following transplantation for HCC, 14 cases of mixed hepatocholangiocellular tumours were found in patients who had received TACE, whilst none were seen in the treatment-naive group, implying differentiation into a cholangiocellular phenotype for some patients [44]. Furthermore, the lack of histology arguably impedes both predictive and prognostic biomarker development. For example, a phase Ⅱ trial with the selective non ATP competitive c-MET inhibitor tivantinib did not offer a survival advantage in patients with advanced HCC, but a post study sub-group analysis revealed that overall survival was longer in patients with high baseline expression of c-MET (7.2 mo for tivantinib vs 3.8 mo for placebo; HR = 0.38, P = 0.01) [45]. A phase Ⅲ trial for patients with tumours over-expressing c-MET in the second line setting is ongoing (NCT01755767). Therefore, several agents that have failed in phase Ⅲ trials may still be efficacious in sub-groups of patients, emphasising the urgent need for tissue collection and more sophisticated trial designs that accommodate molecular stratification.
Underlying liver cirrhosis
Another challenge when treating patients with HCC is the presence of underlying liver cirrhosis. Historically, clinical trials were reserved for patients with good hepatic reserve so that competing liver morbidity did not overshadow outcomes from malignancy. Yet even in patients with preserved baseline hepatic function, reaching the optimal maximum tolerated dose can be limited by hepatotoxicity. Treatment duration in these trials may have been insufficient to elicit a response. Liver dysfunction and co-existing cirrhosis may affect drug metabolism, and due to the consequent changes in the pharmacokinetic and pharmacodynamic profiles of drugs, there is now a trend to conduct HCC-specific phase Ⅰ trials rather than extrapolate results from "all-comer" phase Ⅰ studies conducted in patients with normal or near normal liver function.
There are no approved therapies for patients who progress on sorafenib and who retain well preserved liver function and good performance status. Many centres use cytotoxic chemotherapy (usually FOLFOX, due to the results of the EACH trial) despite the lack of clear evidence supporting its use. Due to the lack of effective second-line therapy, patients are encouraged to enter clinical trials of novel agents. By definition, patients suitable for second line trials are more likely to have less aggressive disease than the wider HCC population, in whom performance status often deteriorates rapidly on progression and is associated with decompensation of liver function. In a number of the recent second-line phase Ⅲ trials comparing novel therapies to placebo, there has been unexpectedly prolonged survival in the placebo cohort, potentially diminishing the survival differences between groups. Although the trend for overall survival favoured brivanib in the second line BRISK-PS trial, the results were non-significant, suggesting the study was not sufficiently powered to detect benefits with brivanib against a placebo controlled population in whom survival was unexpectedly long [34]. Novel direct-acting antivirals (DAA) that target HCV-encoded proteins necessary for viral replication can offer patients with hepatitis C sustained virological responses (SVR). The increasing use of these novel agents is expected to have a future impact on the incidence of HCV related HCC. Yet the presence of advanced fibrosis will continue to pose a risk for oncogenesis, even in the absence of a detectable viral load, and screening of high risk individuals is still required [46]. The development of molecular predictive biomarkers could help identify patients that require ongoing surveillance. Furthermore, biomarker based stratification could be used to enrich HCC chemoprevention trials [47].
Response evaluation
Finally, response criteria in trials must be chosen carefully. Traditional endpoints such as tumour shrinkage relate to chemotherapy treatments and may not be applicable when assessing the benefits of targeted treatments, which can be cytostatic rather than cytoreductive [48] . Drugs that have been deemed failures in phase Ⅲ studies may have therapeutic activity in HCC, but insufficient potency to improve conventional end-points in phase Ⅲ trials [49] .
Furthermore, liver disease can elicit an inflammatory response, which can be mistaken for progression and result in premature cessation of treatment. The use of traditional imaging has therefore been highlighted as insufficient for assessing response in HCC, whereas functional imaging provides more useful information. The RECIST criteria, routinely used to measure disease response in many solid tumours, have been recognised as insensitive in HCC: in the SHARP trial, despite an improvement in overall survival, only 2% of patients treated with sorafenib achieved a response by RECIST criteria. The RECIST response criteria were therefore amended to incorporate tumour necrosis induced by treatment; the modified RECIST (mRECIST) measures arterially enhancing lesions, which are more representative of residual viable tumour [50,51]. Large multi-centre clinical trials in patients with HCC pose unique challenges and future study designs must accommodate these in order to exploit the true potential of novel agents in this disease [52,53].
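To make the size-based response categories concrete, the sketch below classifies a follow-up scan from sums of target-lesion diameters, using the conventional (m)RECIST cut-offs of a 30% or greater decrease for partial response and a 20% or greater increase over the nadir for progression. It is a simplified illustration only: it ignores non-target lesions, the diameter inputs and the new_lesion flag are hypothetical, and for mRECIST the measured quantity would be the arterially enhancing (viable) tumour rather than the whole lesion.

```python
def classify_response(baseline_sum_mm, nadir_sum_mm, current_sum_mm, new_lesion=False):
    """Simplified (m)RECIST-style classification from sums of target-lesion diameters."""
    if new_lesion:
        return "PD"                       # a new lesion always means progressive disease
    if current_sum_mm == 0:
        return "CR"                       # complete disappearance of target lesions
    if current_sum_mm <= 0.7 * baseline_sum_mm:
        return "PR"                       # >= 30% decrease from baseline
    if current_sum_mm >= 1.2 * nadir_sum_mm and (current_sum_mm - nadir_sum_mm) >= 5:
        return "PD"                       # >= 20% (and >= 5 mm) increase over the nadir
    return "SD"                           # otherwise stable disease

# Example: baseline 80 mm, nadir 60 mm, current 75 mm -> progression over the nadir
print(classify_response(80, 60, 75))      # PD
```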
THE GENETIC BACKGROUND OF HCC
In malignancies such as melanoma, key driver mutations have now been identified, leading to the use of effective targeted therapy that directly translates to improved patient survival [54]. Despite the presence of more than 40 somatic mutations, there does not appear to be a solitary frequent genetic defect in the majority of HCC tumours [40,41,55,56]. Polyclonality has been noted in patients with HCC, reflecting a complex genetic landscape. The recently proposed concept of "trunk vs branch" heterogeneity can be applied to HCC, whereby key mutations that drive tumorigenesis exist in both primary and secondary lesions (trunk) and need to be distinguished from those that are only present in a minority of tissue (branch) [42]. The question remains as to whether the vast number of genetic alterations in HCC reflect multiple "trunk" mutations that would each require inhibition, or if the majority are mere passenger alterations that do not need treating. Recent advances in high throughput sequencing have uncovered several mechanisms of genetic change, including somatic mutations, copy number alterations, HBV integration and somatic changes of retrotransposons [55,57]. Whole genome sequencing of 88 primary HCC tumours with matched adjacent liver tissue revealed that the predominant oncogenic mutation was beta catenin (15.9%), which is mutually exclusive with the most frequently mutated tumour suppressor gene TP53 (35.2%), echoing results from previous genomic studies [41,55,58,59]. Further mutations have been found in ARID1 and 2 (both of which regulate chromatin remodelling pathways) and rare mutations in RPS6KA3, which codes for RSK2 (a serine threonine kinase of the MAPK pathway) [60]. A larger study of 503 HCC liver genomes revealed 30 driver genes implicating 11 core pathways in tumorigenesis. Recurrent focal amplifications were seen in 25% of cases, including telomerase reverse transcriptase (TERT) and CCND1-FGF19. Key oncogenic pathways included TP53-RB, Wnt and mTOR-PIK3CA [61]. Frequently altered in HCC, somatic TERT mutations have also been found in pre-cancerous cirrhotic nodules and hepatic adenomas, suggesting they play a pivotal role in malignant transformation. Sequencing of the promoter region of tissue taken from 305 HCCs revealed recurrent TERT mutations in 179 samples (59%) at two common mutually exclusive hot spots [62]. Yet despite a greater understanding of the role of TERT in HCC, its potential as a druggable target remains unknown. A small early phase Ⅱ study of a telomerase derived peptide, GV1001, failed to elicit any responses, although the trial was not enriched for TERT mutated tumours [63]. HCC can be classified into two distinct sub-groups based on genetic aberrations [64][65][66][67]. The proliferative subclass is characterised by activation of RAS, mTOR and IGF signalling and has been associated with poor outcomes. This group can be further divided into those with Wnt/transforming growth factor (TGF)-β activation and the progenitor cell group, which has higher levels of progenitor cell, epithelial cell adhesion molecule and type 1 cytoskeletal 19 markers. By comparison, the non-proliferative group is more heterogeneous, with fewer shared mutations. The Wnt/beta catenin and JAK/STAT signalling pathways are the most frequently affected pathways, with alterations in as many as 50%-62.5% and 45% of cases respectively [66,68,69]. Several distinct protein-altering JAK1 mutations have been identified, the majority of which affect the kinase domain [55,70].
HCC development is often attributed to chronic inflammation triggered by both viral infection and cell necrosis, and the JAK/STAT pathway has been identified as a promoter of carcinogenesis in a sub-set of HCC via cytokine-induced JAK/STAT pathway activation [55,71]. Copy number analyses using array based comparative genomic hybridization (aCGH) have revealed recurrent amplifications in genes for p53, Wnt signalling and proliferation pathways, with recurrent deletions of genes involved in the immune response, chromatin remodelling and NF-κB pathways [72,73]. Furthermore, the DNA virus hepatitis B (HBV), a leading cause of HCC, integrates into the host genome, affecting gene expression. Deep sequencing of HCC samples on a background of HBV found direct genetic disruption, aberrations of viral promoter-driven transcription, viral-human transcription and copy number changes, confirming theories that alternate aetiologies lead to distinct genetic alterations [74,75]. Whole exome sequencing of 243 liver tumours revealed mutational signatures that appeared to correlate with specific risk factors for HCC development, including CTNNB1 (alcohol) and TP53 (HBV) [76]. In addition, different mutations were associated with varying clinical outcomes. Early stage disease harboured TERT promoter mutations, whereas FGF, CCDN1 and TP53 were associated with more aggressive pathology.
Conclusions from these extensive genetic studies have highlighted not only the heterogeneity of HCC tumours but also the significant differences in key oncogenic drivers of HCC compared with many other solid malignancies. In breast, colorectal and lung cancer, for example, MAPK and PI3K as well as EGFR activated pathways dominate progression in distinct cohorts [77][78][79]. However, for HCC the Wnt/β-catenin and JAK/STAT pathways have consistently been identified as responsible for key oncogenic signalling. These differences are likely to explain the failures in HCC of therapies that have provided benefit in other malignancies. Comprehensive genetic mapping will undoubtedly aid drug development for HCC, but a major challenge is that the majority of pathways found remain "undruggable", and interacting protein kinases must be targeted instead (Figure 1). A selection of key pathways and novel agents recently or currently under investigation are discussed below.
MEK inhibition
The RAF/MEK/ERK pathway plays a pivotal role in several cellular processes including proliferation, apoptosis and migration [80,81]. Although RAS and RAF mutations are uncommon in HCC, there is evidence that this pathway is activated in the majority of HCC tumours. Selumetinib, a potent selective MEK 1/2 inhibitor, was assessed in a single arm phase Ⅱ trial in 19 patients who had not received prior systemic therapy. There were no responses and time to progression was short (8 wk). The trial was subsequently terminated at the interim analysis [82]. Examination of pre and post treatment tissue revealed that four out of five patients achieved significant inhibition of phospho-ERK1/2 in tumours, suggesting the failure of selumetinib was not due to lack of target inhibition. A small study assessing MEK inhibition in combination with sorafenib resulted in three partial responses and six patients with stable disease. Whilst these numbers were small and therefore difficult to interpret, they suggest that this combination should perhaps be assessed further [83]. A phase Ⅱ study assessing the efficacy and safety of combined inhibition using sorafenib and the MEK inhibitor refametinib resulted in a median time to progression of 122 d and a median OS of 290 d [84].
Toxicities, however, were significant, with rash, diarrhoea, elevated liver enzymes and vomiting, and the majority of patients required dose reductions. Interestingly, the best responders harboured a RAS mutation, and a proof of concept phase Ⅱ trial using this combination in patients with RAS mutations is ongoing (NCT01915602). Crucially, this study is one of the first attempts to select a specific cohort of HCC patients based on molecular genotype, utilising cfDNA to detect mutations in RAS. The study raises a number of important issues regarding feasibility and cost, given that the incidence of RAS mutation is approximately 3%-5%, requiring a large cohort of patients to be prescreened to identify the small group with an aberrant genotype (Table 3).
Anti-angiogenic therapy
HCC is a hyper vascular tumour enriched with high levels of angiogenesis due to the presence of growth factors such as vascular endothelial growth factor (VEGF) and platelet derived growth factor (PDGF) [85] . A meta-analysis assessing the prognostic value of VEGF expression confirmed that tissue and serum VEGF levels seemed to predict poor disease free and overall survival [86] . Biomarker data from the SHARP trial also demonstrated that VEGF and angiopoietin-2 [(Ang2) a further critical molecule in angiogenesis] were independent prognostic markers but not predictive of response [16] . Sorafenib has anti-angiogenic properties and its success fuelled the search for more potent, selective anti-angiogenics. Yet several negative clinical trials have questioned the emphasis on VEGF inhibition in HCC, supporting theories that multiple mechanisms may be in play. As discussed the VEGF inhibitors, sunitinib, linifanib and brivanib failed to prove non-inferiority compared with sorafenib. Some commentators have therefore argued that an antiangiogenic monotherapy "ceiling" has been reached, and combination strategies will be required to extend survival beyond this [87] . Trials of sorafenib in combination with other antiangiogenic therapy (bevacizumab), chemotherapy (doxorubicin or FOLFOX) or other molecularly targeted therapy (e.g., everolimus and temsirolimus) are on-going. In order to ensure optimal results with these agents, the development of predictive biomarkers is needed to select patients who are most likely to benefit.
HGF/c-MET pathway
In vitro studies suggest that c-Met may play a role in proliferation, angiogenesis and metastatic spread in HCC and the hepatocyte growth factor (HGF)-cMET axis is therefore an attractive target. Whilst HGF expression in HCC tumours is low compared with surrounding liver tissue, over-expression of cMET has been observed in nearly a quarter of HCC cases and there is some evidence to suggest c-MET expression is a poor prognostic marker [88][89][90] . Biomarker data from the SHARP trial revealed that HGF levels correlated with tumour size [16] . There is also evidence of an interaction between c-MET and both EGFR and VEGF [91] . Preliminary data from c-MET inhibition with cabozantinib is promising and as previously discussed a phase Ⅲ trial with tivantinib in patients with high levels of MET expression is on going [92] .
FGFR inhibition
Fibroblast growth factor receptors are transmembrane receptor kinases that signal through downstream pathways including RAS-RAF-MAPK. FGF3/4 is expressed in normal tissue including benign hepatocytes [93]. Gene array studies and immunohistochemical expression assays have shown overexpression of FGF3 and FGF4 in HCC tumours, which mediate proliferation, cell death and alpha feto protein (AFP) levels [94]. Brivanib, in addition to its anti-angiogenic properties as discussed above, is an ATP competitive inhibitor of FGFR1-3. Although it failed to improve survival in the first and second line settings, further multi-kinase inhibitors that also target FGFR are currently under investigation. The lack of response to brivanib may be partly explained by its use in an unselected population, and biomarkers may aid selection of patients likely to respond to inhibition. Lenvatinib, an oral multitargeted tyrosine kinase inhibitor of VEGFR-1, FGFR1-4, PDGFRβ, RET and KIT, is currently under evaluation in a non-inferiority study with sorafenib (NCT01761266), following a phase Ⅱ trial which resulted in a median time to progression of 12.8 mo (95%CI: 7.23-14.7) and a median OS of 18.7 mo [95]. The REFLECT phase Ⅲ trial comparing sorafenib to lenvatinib has recently been completed. This trial has attempted to learn the lessons from the previous high profile failures described in this article by utilising stricter criteria for trial entry, excluding poor prognosis groups such as patients with greater than 50% liver involvement, bile duct invasion, or main branch portal venous infiltration.
Dovitinib, an FGFR, VEGFR and PDGFR TKI demonstrated efficacy in xenograft mouse models and is currently under investigation in a phase Ⅱ trial [96,97] . FGF19, located on chromosome 11q13, a region amplified in 10%-15% of HCC tumours, is a potential predictive biomarker for FGF inhibitors and FGF19 targeted antibodies are under investigation in in vitro models [97] . In vivo studies with murine models suggest that dual targeting with FGFR and mTOR inhibition impaired tumour growth unlike treatment with the FGFR inhibitor alone providing support for combination trials [98] .
TGF-β signalling
TGF-β signalling plays a role in the tumour microenvironment, promoting epithelial-mesenchymal transition (EMT), dysplastic nodule formation and subsequent HCC development [99][100][101]. Patients with higher levels of TGF-β signalling have larger, less differentiated tumours with higher levels of AFP [102]. It remains unclear whether TGF-β plays a role only in a sub-group of patients or in the carcinogenesis of all HCCs, due to its dual role in tumour suppression in normal tissue and tumour promotion in HCC. TGF-β inhibitors modulate EMT, leading to reduced tumour growth in pre-clinical models. Galunisertib, a selective TGF-β TKI, is currently under investigation in a phase Ⅱ trial (NCT02178358).
Immunotherapy
Recent years have seen a resurgence in the use of immunotherapy, led partly by the success of anti-CTLA-4 antibodies in solid tumours such as melanoma and, more recently, antibodies targeting the programmed death (PD) receptor and its ligands [103,104]. Immunotherapy works by enhancing the anti-tumour response, an important mechanism in HCC as the surrounding tumour microenvironment is rich in immune cells. Tremelimumab, a fully human IgG2 monoclonal anti-CTLA-4 antibody, was assessed in a phase Ⅱ study of 24 patients with HCC on a background of HCV. The drug had a good safety profile, with a partial response rate of 17.6% and a disease control rate of 76.4%. Time to progression was 6.48 mo (95%CI: 3.95-9.14). Changes were also seen in the predominant variants of HCV, as well as a reduction in viral loads. These early reports are promising and suggest that immunotherapy may have the dual benefit of treating both HCC and underlying viral hepatitis. Anti-PD-1 and anti-programmed death ligand 1 (PD-L1) agents are checkpoint inhibitors that block the suppression of T cell activation that occurs when the PD receptor binds PD ligands 1 and 2. Patients whose tumours over-express PD-L1 have a poorer prognosis. In a recently reported phase Ⅰ/Ⅱ dose escalation study, patients received 0.1 to 10.0 mg/kg of the anti-PD-1 agent nivolumab intravenously for up to 2 years. Two patients had a complete response (CR) and a further 7 patients had a partial response (PR) [105]. The overall survival rate at 6 mo was 72%. Although these results are from a very small early phase trial, they are highly encouraging, and a number of trials using checkpoint inhibitors are now planned in both first and second line settings.
CONCLUSION
The era of personalised medicine and treatment stratification has yet to impact clinical practice of HCC and the failure of several clinical trials has been disappointing. Nevertheless our understanding of this unique disease has improved significantly with the benefit of genomic sequencing and biomarker data from clinical trials. Proof of concept studies such as the ongoing phase Ⅱ trial with refametinib for RAS mutated cancers and tivantinib for c-MET positive tumours are a step forward in designing adequate trials to maximise potential benefit of novel agents in pre-determined sub groups. Molecular testing, improved clinical trial design and the development of predictive biomarkers should finally see an improvement in survival for this global disease.
Spherical coupled-cluster theory for open-shell nuclei
A microscopic description of nuclei is important to understand the nuclear shell-model from fundamental principles. This is difficult to achieve for more than the lightest nuclei without an effective approximation scheme. The purpose of this paper is to define and evaluate an approximation scheme that can be used to study nuclei that are described as two particles attached to a closed (sub-)shell nucleus. The equation-of-motion coupled-cluster formalism has been used to obtain ground and excited state energies. This method is based on the diagonalization of a non-Hermitian matrix obtained from a similarity transformation of the many-body nuclear Hamiltonian. A chiral interaction at the next-to-next-to-next-to leading order using a cutoff at 500 MeV was used. The ground state energies of ${}^6$Li and ${}^6$He were in good agreement with a no-core shell-model calculation using the same interaction. Several excited states were also produced with overall good agreement. Only the $J^\pi=3^+$ excited state in ${}^6$Li showed a sizable deviation. The ground state energies of ${}^{18}$O, ${}^{18}$F and ${}^{18}$Ne were converged, but underbound compared to experiment. Moreover, the calculated spectra were converged and comparable to both experiment and shell-model studies in this region. Some excited states in ${}^{18}$O were high or missing in the spectrum. It was also shown that the wave function for both ground and excited states separates into an intrinsic part and a Gaussian for the center-of-mass coordinate. Spurious center-of-mass excitations are clearly identified.
I. INTRODUCTION
In the past decade, the computing resources made available for scientific research have grown by several orders of magnitude. This trend will continue during this decade, culminating in exascale computing facilities. This will promote new insights in every discipline, as new problems can be solved and old problems can be solved faster and to a higher precision.
In nuclear physics, one important goal is a predictive theory, where nuclear observables can be calculated from first principles. But even with the next generation of supercomputers, a virtually exact solution to the nuclear many-body problem is possible only for light nuclei (see Leidemann and Orlandini [1] for a recent review on many-body methods). Using a finite basis expansion, a full diagonalization can currently be performed for nuclei in the p-shell region [2]. This might be extended to light sd-shell nuclei within the next couple of years with access to sufficient computing resources. For ab initio access to larger nuclei, the problem has to be approached differently [3][4][5][6][7]. In coupled-cluster theory, a series of controlled approximations are performed to generate a similarity transformation of the nuclear Hamiltonian. At a given level of approximation, efficient formulas exist to evaluate the ground state energy of a closed (sub-)shell reference nucleus. The similarity-transformed Hamiltonian is then diagonalized to calculate excited states and states of nuclei with one or more valence nucleons. This defines the equation-of-motion coupled-cluster (EOM-CC) framework (see Bartlett and Musiał [8] for a recent review and Shavitt and Bartlett [9] for a textbook presentation). Recently, this method was applied to the oxygen [10] and the calcium [11] isotopic chains, as well as $^{56}$Ni [12], extending the reach of ab initio methods in the medium mass region. Further calculations in the nickel and tin regions are also planned.
In this work, I will refine the EOM-CC method for two valence nucleons attached to a closed (sub-)shell reference (2PA-EOM-CC). The general theory was presented in Jansen et al. [13], where calculations were limited to small model spaces. The working equations are now completely reworked in a spherical formalism. Since the Hamiltonian is invariant under rotation, this formalism enables us to do calculations in significantly larger model spaces. All relevant equations of spherical formalism are explicitly included here for future reference. The method, in the form presented here, has already been successfully applied to several nuclei [10,11,14,15], but the formalism has not been presented.
A brief overview of general coupled-cluster theory and the equation-of-motion extensions is given in Sec. II. In Sec. III I derive the working equations for 2PA-EOM-CC and discuss numerical results for selected p-and sd-shell nuclei in Sec. IV. As a proper treatment of three body forces and continuum degrees of freedom is beyond the scope of this article, the focus will be on convergence, rather than comparison to experiment. Finally, in Sec. V I present conclusions and discuss the road ahead. All angular momentum transformations used in this work are defined in the appendix.
II. COUPLED-CLUSTER THEORY
In this section the Hamiltonian that enters the coupled-cluster calculations is defined. I have also included a brief review of single-reference coupled-cluster theory together with the equation-of-motion (EOM-CC) extensions. In this framework, a diagonalization in a truncated vector space yields excited states, and nuclei with different particle numbers can also be approached by choosing an appropriate basis. The presentation is kept short and is focused on the aspects important for deriving the spherical version of the 2PA-EOM-CCSD method presented in Sec. III.
All calculations are done using the intrinsic Hamiltonian
$$\hat{H} = \sum_{i<j} \left( \frac{(\mathbf{p}_i - \mathbf{p}_j)^2}{2 m A^*} + \hat{v}_{ij} \right). \qquad (1)$$
Here A is the number of nucleons in the reference state, $A^* = A + 2$ is the mass number of the target nucleus, and $\hat{v}_{ij}$ is the nucleon-nucleon interaction. Only two-body interactions are included at present. In second quantization, the Hamiltonian can be written as
$$\hat{H} = \sum_{pq} \varepsilon^p_q\, a^\dagger_p a_q + \frac{1}{4} \sum_{pqrs} \langle pq||rs\rangle\, a^\dagger_p a^\dagger_q a_s a_r. \qquad (2)$$
The term $\langle pq||rs\rangle$ is a shorthand for the matrix elements (integrals) of the two-body part of the Hamiltonian of Eq. (1); p, q, r and s represent various single-particle states, while $\varepsilon^p_q$ stands for the matrix elements of the one-body operator in Eq. (1). Finally, second-quantized operators like $a^\dagger_q$ and $a_p$ create and annihilate a nucleon in the states q and p, respectively. These operators fulfill the canonical anti-commutation relations.
A. Single-reference coupled-cluster theory

In single-reference coupled-cluster theory, the many-body ground state $|\Psi_0\rangle$ is given by the exponential ansatz
$$|\Psi_0\rangle = e^{\hat{T}} |\Phi_0\rangle. \qquad (3)$$
Here, $|\Phi_0\rangle$ is the reference Slater determinant, where all states below the Fermi level are occupied, and $\hat{T}$ is the cluster operator that generates correlations. The operator $\hat{T}$ is expanded as a linear combination of particle-hole excitation operators
$$\hat{T} = \hat{T}_1 + \hat{T}_2 + \ldots + \hat{T}_A, \qquad (4)$$
where $\hat{T}_n$ is the n-particle-n-hole (np-nh) excitation operator
$$\hat{T}_n = \frac{1}{(n!)^2} \sum_{\substack{i_1 \ldots i_n \\ a_1 \ldots a_n}} t^{a_1 \ldots a_n}_{i_1 \ldots i_n}\, a^\dagger_{a_1} \cdots a^\dagger_{a_n}\, a_{i_n} \cdots a_{i_1}. \qquad (5)$$
Throughout this work the indices $ijk\ldots$ denote states below the Fermi level (holes), while the indices $abc\ldots$ denote states above the Fermi level (particles). For an unspecified state, the indices $pqr\ldots$ are used. The amplitudes $t^{a_1 \ldots a_n}_{i_1 \ldots i_n}$ will be determined by solving the coupled-cluster equations. In the singles and doubles approximation the cluster operator is truncated as
$$\hat{T} \approx \hat{T}_1 + \hat{T}_2, \qquad (6)$$
which defines the coupled-cluster approach with singles and doubles excitations, the so-called CCSD approximation. The unknown amplitudes result from the solution of the non-linear CCSD equations given by
$$\langle \Phi^a_i | \bar{H} | \Phi_0 \rangle = 0, \qquad \langle \Phi^{ab}_{ij} | \bar{H} | \Phi_0 \rangle = 0. \qquad (7)$$
The term
$$\bar{H} = \exp(-\hat{T})\, \hat{H}_N \exp(\hat{T}) = \left( \hat{H}_N \exp(\hat{T}) \right)_C \qquad (8)$$
is called the similarity transform of the normal-ordered Hamiltonian. In this formulation, the state $|\Phi^{ab\ldots}_{ij\ldots}\rangle$ is a Slater determinant that differs from the reference $|\Phi_0\rangle$ by holes in the orbitals $ij\ldots$ and by particles in the orbitals $ab\ldots$. The subscript C indicates that only connected diagrams enter, while the normal-ordered Hamiltonian is defined as
$$\hat{H}_N = \hat{F} + \hat{V}_N = \hat{H} - E_0. \qquad (9)$$
The operator $\hat{F}$ is the one-body part of the normal-ordered Hamiltonian, defined as
$$\hat{F} = \sum_{pq} f^p_q\, \{ a^\dagger_p a_q \}, \qquad (10)$$
where
$$f^p_q = \varepsilon^p_q + \sum_i \langle pi||qi\rangle. \qquad (11)$$
Here $\varepsilon^p_q$ and $\langle pi||qi\rangle$ are the matrix elements of the Hamiltonian in Eq. (2). The sum is over all single-particle indices, i, below the Fermi energy. The operator $\hat{V}_N$ is the two-body part of the normal-ordered Hamiltonian, while $E_0$ denotes the vacuum expectation value with respect to the reference state.
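As a concrete illustration of Eq. (11), the short sketch below builds the normal-ordered one-body (Fock-like) matrix $f^p_q$ from a one-body matrix and antisymmetrized two-body matrix elements stored as numpy arrays. The array names and the random test data are placeholders for illustration only, not part of the method itself.

```python
import numpy as np

def normal_ordered_onebody(eps, v, n_occ):
    """f[p,q] = eps[p,q] + sum_{i below Fermi} <p i||q i>   (cf. Eq. 11).

    eps  : (n, n) one-body matrix elements
    v    : (n, n, n, n) antisymmetrized two-body elements <pq||rs>
    n_occ: number of single-particle states below the Fermi level
    """
    # contract the two-body interaction with the occupied (hole) states
    return eps + np.einsum('piqi->pq', v[:, :n_occ, :, :n_occ])

# toy example with random (unphysical) matrix elements
n, n_occ = 6, 2
eps = np.random.rand(n, n)
v = np.random.rand(n, n, n, n)
print(normal_ordered_onebody(eps, v, n_occ).shape)   # (6, 6)
```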
Once the $t^a_i$ and $t^{ab}_{ij}$ amplitudes have been determined from Eq. (7), the correlated ground-state energy is given by
$$E_{\mathrm{CCSD}} = E_0 + \langle \Phi_0 | \bar{H} | \Phi_0 \rangle. \qquad (12)$$
The CCSD approximation is a very inexpensive method to obtain the ground state energy of a nucleus. In most cases, however, the accuracy is not satisfactory [16]. The obvious solution would be to include triples excitations in Eq. (6) to define the CCSDT approximation. This leads to an additional set of non-linear equations that has to be solved consistently. Unfortunately, such a calculation is computationally prohibitive [4]. The computational cost of CCSDT scales as $n_o^3 n_u^5$, where $n_o$ is the number of single-particle states occupied in the reference determinant and $n_u$ is the number of unoccupied states. For comparison, the computational cost of the CCSD approximation scales as $n_o^2 n_u^4$. Instead of solving the coupled-cluster equations (7) including triples excitations, one calculates a correction to the correlated ground state energy (12), using the Λ-CCSD(T) approach [17,18]. Here, the left-eigenvalue problem using the CCSD similarity-transformed Hamiltonian is solved, yielding a correction to the ground state energy. The left-eigenvalue problem is given by
$$\langle \Phi_0 | (1 + \hat{\Lambda})\, \bar{H} = E_{\mathrm{CCSD}}\, \langle \Phi_0 | (1 + \hat{\Lambda}), \qquad (14)$$
where $\hat{\Lambda}$ is a de-excitation operator,
$$\hat{\Lambda} = \hat{\Lambda}_1 + \hat{\Lambda}_2 = \sum_{ia} \lambda^i_a\, \{ a^\dagger_i a_a \} + \frac{1}{4} \sum_{ijab} \lambda^{ij}_{ab}\, \{ a^\dagger_i a^\dagger_j a_b a_a \}. \qquad (15)$$
The unknown amplitudes $\lambda^i_a$ and $\lambda^{ij}_{ab}$ are the components of the left eigenvector with the lowest eigenvalue in Eq. (14). Once found, the energy correction is obtained from the matrix elements of $\bar{H}$ between the reference and 3p-3h excited determinants, weighted by the energy denominators; here $\hat{F}_{hp}$ denotes the part of the normal-ordered one-body Hamiltonian (10) that annihilates particles and creates holes. The energy denominator is defined as
$$\epsilon^{abc}_{ijk} = f^i_i + f^j_j + f^k_k - f^a_a - f^b_b - f^c_c,$$
where $f^p_p$ are the diagonal elements of the normal-ordered one-body Hamiltonian $\hat{F}$ defined in Eq. (11). Using this approach, the ground state wave function (3) and the similarity-transformed Hamiltonian (8) are calculated using the CCSD approximation, while the ground state energy is given by the CCSD energy (12) plus the triples correction. This approximation has proved to give very accurate results for closed (sub-)shell nuclei [19].
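The text does not spell out how the non-linear amplitude equations (7) are solved in practice; one common approach is a quasi-Newton (Jacobi-style) fixed-point iteration in which the residuals are divided by energy denominators built from the diagonal of $\hat{F}$. The sketch below shows that generic loop with a user-supplied residual function standing in for the full set of CCSD diagrams; it is an assumed, simplified scheme for orientation, not a transcription of the author's solver.

```python
import numpy as np

def solve_amplitudes(residual, t0, denom, tol=1e-8, max_iter=200):
    """Generic Jacobi iteration for amplitude equations <Phi_exc| Hbar |Phi_0> = 0.

    residual : callable returning an array with the same shape as t
    t0       : initial guess for the amplitudes (e.g. a perturbative estimate)
    denom    : energy denominators, e.g. f_ii + f_jj - f_aa - f_bb
    """
    t = t0.copy()
    for iteration in range(max_iter):
        r = residual(t)
        if np.max(np.abs(r)) < tol:
            return t, iteration
        t = t + r / denom          # quasi-Newton update with diagonal preconditioner
    raise RuntimeError("amplitude equations did not converge")

# toy usage: a linear 'residual' stands in for the CCSD diagrams
denom = np.array([2.0, 3.0, 4.0])
residual = lambda t: -denom * t + np.array([0.1, 0.2, 0.3])
t, n_it = solve_amplitudes(residual, np.zeros(3), denom)
print(t, n_it)   # converges to [0.05, 0.0666..., 0.075] in a couple of iterations
```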
B. Equation-of-motion coupled-cluster(EOM-CC) theory
In nuclear physics, the single reference coupled-cluster method defined by the coupled-cluster equations (7) is normally used to obtain the ground state energy of a closed (sub-)shell nucleus. While it is possible to apply the CC method to any reference determinant to obtain the energy of different states, the EOM-CC framework is usually employed for such endeavors.
Equation (8) defines a similarity transformation. This guarantees that the eigenvalues of $\bar{H}$ are equivalent to the eigenvalues of the intrinsic Hamiltonian (1) and that the eigenvectors are connected by the transformation defined by Eq. (3). However, approximations are introduced by limiting the vector space allowed in the diagonalization of $\bar{H}$. This is the foundation of the EOM-CC approach.
To simplify the equations and for effective calculations, the eigenvalue problem in the EOM-CC approach is modified. A new eigenvalue problem is defined for the difference between a target state and the coupled-cluster reference state (3). Formally, a general state of the A-body nucleus is written
$$|\Psi_\mu\rangle = \hat{\Omega}_\mu |\Psi_0\rangle. \qquad (21)$$
Here $\hat{\Omega}_\mu$ is an excitation operator that creates the state $|\Psi_\mu\rangle$ when applied to the coupled-cluster reference state $|\Psi_0\rangle$. The label $\mu$ identifies the quantum numbers (e.g., energy and angular momentum) of the target state. The Schrödinger equations for the target state and the coupled-cluster reference state are written
$$\hat{H} |\Psi_\mu\rangle = E_\mu |\Psi_\mu\rangle, \qquad (22)$$
$$\hat{H} |\Psi_0\rangle = E_{\mathrm{CC}} |\Psi_0\rangle. \qquad (23)$$
Here $E_\mu$ is the energy of the target state and $E_{\mathrm{CC}}$ is the coupled-cluster reference energy in Eq. (12).
By multiplying Eq. (22) with $e^{-\hat{T}}$ and Eq. (23) with $\hat{\Omega}_\mu e^{-\hat{T}}$ from the left and taking the difference between the two equations, the eigenvalue problem is written as
$$\left[ \bar{H}, \hat{\Omega}_\mu \right] |\Phi_0\rangle = \omega_\mu\, \hat{\Omega}_\mu |\Phi_0\rangle, \qquad (24)$$
where $\omega_\mu = E_\mu - E_{\mathrm{CC}}$ and we have used that $\left[ \hat{\Omega}_\mu, \hat{T} \right] = 0$. Finally, none of the unconnected terms in the evaluation of the commutator survive, resulting in
$$\left( \bar{H}\, \hat{\Omega}_\mu \right)_C |\Phi_0\rangle = \omega_\mu\, \hat{\Omega}_\mu |\Phi_0\rangle. \qquad (25)$$
This operator equation can be posed as a matrix eigenvalue problem where $\omega_\mu$ are the eigenvalues and the matrix elements of $\hat{\Omega}_\mu$ are the components of the eigenvectors. The subscript C implies that only terms where $\bar{H}$ and $\hat{\Omega}_\mu$ are connected by at least one contraction survive.
In diagrammatic terms, this means that only connected diagrams appear in the operator product $\left( \bar{H}\, \hat{\Omega}_\mu \right)_C$.
The similarity-transformed Hamiltonian (8) is a non-Hermitian operator and is diagonalized by an Arnoldi algorithm (for details, see, for example, Golub and Van Loan [20]). This algorithm relies on the repeated application of the connected matrix-vector product defined by Eq. (25). A left-eigenvalue problem is solved to obtain the conjugate eigenvectors [21], but this is beyond the scope of this article.
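For orientation, the sketch below implements a bare-bones Arnoldi iteration with numpy: it builds a Krylov basis by repeated application of a matrix-vector product and diagonalizes the small Hessenberg matrix to obtain approximate (Ritz) eigenvalues of a non-symmetric matrix. It is a generic textbook version under simple assumptions, not the production EOM-CC solver; in an actual calculation, matvec would apply the connected product of Eq. (25) rather than an explicit matrix.

```python
import numpy as np

def arnoldi(matvec, v0, n_krylov):
    """Return approximate (Ritz) eigenvalues of a non-Hermitian operator.

    matvec  : function applying the operator to a vector
    v0      : starting vector
    n_krylov: dimension of the Krylov space
    """
    n = v0.size
    Q = np.zeros((n, n_krylov + 1))
    H = np.zeros((n_krylov + 1, n_krylov))
    Q[:, 0] = v0 / np.linalg.norm(v0)
    for k in range(n_krylov):
        w = matvec(Q[:, k])
        for j in range(k + 1):                 # Gram-Schmidt against previous vectors
            H[j, k] = np.dot(Q[:, j], w)
            w -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] < 1e-12:                # invariant subspace found, stop early
            return np.linalg.eigvals(H[:k + 1, :k + 1])
        Q[:, k + 1] = w / H[k + 1, k]
    return np.linalg.eigvals(H[:n_krylov, :n_krylov])

# toy non-symmetric matrix standing in for the similarity-transformed Hamiltonian
A = np.random.rand(50, 50)
ritz = arnoldi(lambda x: A @ x, np.random.rand(50), 20)
print(sorted(ritz, key=abs)[-3:])   # largest-magnitude Ritz values
```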
To find the explicit expressions for the connected matrix-vector product, the excitation operator must be properly defined. When used for excited states of an A-body nucleus, the excitation operator in Eq. (21) is parametrized in terms of np-nh operators and written as
$$\hat{\Omega}_\mu = \hat{R} = \hat{R}_0 + \hat{R}_1 + \hat{R}_2 + \ldots, \qquad (26)$$
where
$$\hat{R}_n = \frac{1}{(n!)^2} \sum_{\substack{i_1 \ldots i_n \\ a_1 \ldots a_n}} r^{a_1 \ldots a_n}_{i_1 \ldots i_n}\, a^\dagger_{a_1} \cdots a^\dagger_{a_n}\, a_{i_n} \cdots a_{i_1}. \qquad (27)$$
The unknown amplitudes r (with the sub- and superscripts dropped) are the matrix elements of $\hat{R}$, and can be grouped into a vector that solves the eigenvalue problem in Eq. (25). The explicit equations for the matrix-vector product are established by looking at each individual element using a diagrammatic approach,
$$\langle \Phi^{a_1 \ldots}_{i_1 \ldots} | \left( \bar{H}\, \hat{R} \right)_C | \Phi_0 \rangle = \omega_\mu\, r^{a_1 \ldots}_{i_1 \ldots}.$$
Calculations using the full excitation operator (26) are not computationally tractable, so an additional level of approximation is introduced by a truncation. When the CCSD approximation is used to obtain the reference wave function, the excitation operator is truncated at the 2p-2h level [22], which defines EOM-CCSD.
In the EOM-CC approach, the states of $A \pm k$ nuclei are also treated as excited states of an A-body nucleus. The general wave function for an $A \pm k$ nucleus is written
$$|\Psi^{A \pm k}_\mu\rangle = \hat{\Omega}^{A \pm k}_\mu |\Psi^A_0\rangle.$$
The operator $\hat{\Omega}_\mu$ and the energies $E_\mu$ of the target state also solve the eigenvalue problem in Eq. (25). The energy difference $\omega_\mu = E_\mu - E^*_0$ is now the excitation energy of the target state in the nucleus $A \pm k$ with respect to the closed-shell reference nucleus, with the mass shift $A^* = A \pm k$ in the Hamiltonian (1). This mass shift ensures that the correct kinetic energy of the center of mass is used in computing the $A \pm k$ nuclei.
The operators that attach one particle to or remove one particle from the A-body reference define the particle-attached equation-of-motion coupled-cluster [23] (PA-EOM-CC) and the particle-removed equation-of-motion coupled-cluster [24] (PR-EOM-CC) approaches. These methods have been used successfully in quantum chemistry for some time (see Bartlett and Musiał [8] for a review), but have also recently been implemented for use in nuclear structure calculations [25]. In Jansen et al. [13], 2PA-EOM-CCSD and 2PR-EOM-CCSD were defined for systems with two particles attached to and removed from a closed (sub-)shell nucleus. For this problem, the excitation operators create (or annihilate) two particles relative to the reference, with the two-particle-attached operator given in Eq. (34). In this article, I will focus on the 2PA-EOM-CCSD method, where (34) is truncated at the 3p-1h level. This approximation is suitable for states with a dominant 2p structure. It is already computationally intensive, with up to 10^9 basis states (see Sec. IV) for the largest nuclei attempted. A full inclusion of 4p-2h amplitudes is therefore not feasible at this time.
C. Spherical coupled-cluster theory
For nuclei with closed (sub-)shell structure, the reference state has good spherical symmetry and zero total angular momentum. For these systems, the cluster operator (4) is a scalar under rotation and depends only on reduced amplitudes. The cluster operator can thus be written in angular-momentum-coupled form, where the amplitudes t(J) (sub- and superscripts dropped) are a short form of the reduced matrix elements of the cluster operator (4) (see Appendix A for details). Moreover, J is a label specifying the total angular momentum of a many-body state, and standard tensor notation has been used to specify the tensor couplings. The single-particle operator ã_i is the time reversal of the a†_i operator that creates a particle in the orbital labeled i.
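A schematic coupled form of the cluster operators, consistent with the description above, is sketched below; the overall prefactors and phase conventions are assumptions here and are fixed in practice by the definitions of Appendix A.

```latex
\hat T_1 = \sum_{ai} t^{a}_{i}\,\bigl[a^{\dagger}_{a}\otimes\tilde a_{i}\bigr]^{0}_{0},
\qquad
\hat T_2 = \frac{1}{4}\sum_{abij}\sum_{J} t^{ab}_{ij}(J)\,
\Bigl[\bigl[a^{\dagger}_{a}\otimes a^{\dagger}_{b}\bigr]^{J}\otimes
      \bigl[\tilde a_{j}\otimes\tilde a_{i}\bigr]^{J}\Bigr]^{0}_{0}.
```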
As the similarity-transformed Hamiltonian (8) is a product of three scalar operators (remember that the exponential of an operator is defined in terms of its Taylor expansion), it is also a scalar under rotation. This allows a formulation of the coupled-cluster equations that is completely devoid of magnetic quantum numbers, thus reducing the size of the single-particle space and the number of coupled non-linear equations to solve in Eq. (7). For further details, see Hagen et al. [19].
Within the same formalism, the connected operator product in Eq. (25) is established. This not only greatly reduces the computational cost of calculating the product, but also allows a major reduction in both the single-particle basis and the number of allowed configurations in the many-body basis.
Given a target state with total angular momentum J (in units of c), the excitation operator, Ω µ (21), is a spherical tensor operator by definition (see, for example, Bohr and Mottelson [26]). It has a rank of J, with 2J + 1 components labeled by the magnetic quantum number M ∈ [−J, . . . , J]. It is written as where A is the number of particles in the reference state, A±k is the number of particles in the target state, while µ identifies a specific set of quantum numbers. Identifying the excitation operator as a spherical tensor operator, invokes an extensive machinery of angular momentum algebra with important theorems. Of special importance is the Wigner-Eckart theorem(see for example Edmunds [27]), which states that the matrix elements of a spherical tensor operator can be factorized into two parts. The first is a geometric part identified by a Clebsch-Gordon coefficient, while the second is a reduced matrix element that does not depend on the magnetic quantum numbers.
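For reference, the theorem can be written as below. The convention with a bare Clebsch-Gordan coefficient is assumed here (other conventions divide by a factor of sqrt(2J_β+1)); the double bars denote the reduced matrix element.

```latex
\langle \beta\, J_\beta M_\beta \,|\, \hat T^{J}_{M} \,|\, \alpha\, J_\alpha M_\alpha \rangle
  \;=\; C^{J_\beta M_\beta}_{J_\alpha M_\alpha\, J M}\;
        \langle \beta\, J_\beta \,\|\, \hat T^{J} \,\|\, \alpha\, J_\alpha \rangle .
```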
To develop the spherical form of EOM-CC, I will use the following notation for the matrix elements of a general operator: the single-particle states labeled a and b are occupied in the outgoing state, while the single-particle states labeled i and j are occupied in the incoming state. All single-particle states shared between the incoming and outgoing many-body states are dropped from the notation.
In this form, a component of the spherical basis is written as |α J_α M_α⟩, where α denotes a particular many-body state, while J_α (M_α) is the total angular momentum (projection) of this state. Using the spherical notation, the matrix elements of the excitation operator are written ⟨β J_β M_β| Ω̂^J_M |α J_α M_α⟩, where we have dropped the cumbersome sub- and superscripts on the excitation operator in favor of standard tensor notation. The matrix elements of the matrix-vector product in Eq. (25) are written

⟨β J_β M_β| (H̄ Ω̂^J_M)_C |α J_α M_α⟩ = ω_µ ⟨β J_β M_β| Ω̂^J_M |α J_α M_α⟩ .

Now the Wigner-Eckart theorem allows a factorization of the matrix elements into two factors,

⟨β J_β M_β| (H̄ Ω̂^J_M)_C |α J_α M_α⟩ = C^{J J_β J_α}_{M M_β M_α} ⟨β J_β‖ (H̄ Ω̂^J)_C ‖α J_α⟩ ,

and similarly for the matrix elements of Ω̂^J_M. Here C^{J J_β J_α}_{M M_β M_α} is a Clebsch-Gordan coefficient, and the double bars denote reduced matrix elements, which do not depend on any of the projection quantum numbers. This equation is simplified by dividing by the Clebsch-Gordan coefficient. This means that for each set of α, β, J_α, and J_β, where J, J_α, and J_β satisfy the triangular condition, there are (2J + 1) × (2J_α + 1) × (2J_β + 1) identical equations for a given J. Only one is needed to solve the eigenvalue problem, which reduces the dimension of the problem significantly. In the final eigenvalue problem the unknown components of the eigenvectors are the reduced matrix elements of the excitation operator,

⟨β J_β‖ (H̄ Ω̂^J)_C ‖α J_α⟩ = ω_µ ⟨β J_β‖ Ω̂^J ‖α J_α⟩ .   (45)

The eigenvalue problem in Eq. (45) is the spherical formulation of the general EOM-CC diagonalization problem. For a given excitation operator, both the connected operator product and the reduced amplitudes must be defined explicitly.
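A quick numerical illustration of this projection independence, using a toy rank-1 tensor built from the angular momentum operator; the operator and conventions below are chosen only for the example and are not taken from this work.

```python
# Check that <j m'|T^1_q|j m> / C(j m; 1 q | j m') is independent of the
# projections, i.e. that a single reduced matrix element survives.
from sympy import sqrt, Rational, simplify
from sympy.physics.quantum.cg import CG

j = Rational(3, 2)

def jplus(jj, m):    # <jj, m+1 | J_+ | jj, m>
    return sqrt(jj*(jj + 1) - m*(m + 1))

def jminus(jj, m):   # <jj, m-1 | J_- | jj, m>
    return sqrt(jj*(jj + 1) - m*(m - 1))

def t1(jj, mp, q, m):
    """Spherical components T^1_{+1} = -J_+/sqrt(2), T^1_0 = J_z, T^1_{-1} = J_-/sqrt(2)."""
    if q == 1:
        return -jplus(jj, m)/sqrt(2) if mp == m + 1 else 0
    if q == 0:
        return m if mp == m else 0
    return jminus(jj, m)/sqrt(2) if mp == m - 1 else 0

values = set()
for q in (-1, 0, 1):
    m = -j
    while m <= j:
        mp = m + q
        if abs(mp) <= j:
            cg = CG(j, m, 1, q, j, mp).doit()
            if cg != 0:
                values.add(simplify(t1(j, mp, q, m) / cg))
        m += 1

print(values)   # a single value: the reduced matrix element <j||T^1||j>
```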
III. SPHERICAL 2PA-EOM-CCSD
In this work I derive the spherical formulation of the 2PA-EOM-CCSD [13] method, where the excitation operator in Eq. (34) has been truncated at the 3p-1h level. It is defined as

R̂ = (1/2) Σ_{ab} r^{ab} a†_a a†_b + (1/6) Σ_{abci} r^{abc}_i a†_a a†_b a†_c a_i ,

where the cumbersome sub- and superscripts in the operator have been dropped. Let us begin by introducing the notation used throughout this section. The unknown amplitudes r are the matrix elements of R̂, denoted r^{ab} and r^{abc}_i, while a shorthand form of the components of the matrix-vector product, ⟨H̄R̂⟩^{ab} (48) and ⟨H̄R̂⟩^{abc}_i, is introduced. In this notation, the eigenvalue problem in Eq. (25) is written

⟨H̄R̂⟩^{ab} = ω r^{ab} ,   ⟨H̄R̂⟩^{abc}_i = ω r^{abc}_i .   (50)

In the spherical formulation, the excitation operator is a spherical tensor operator of rank J and projection M (Eq. (51)). Here the a†_a (j_a) and ã_i (j_i) are spherical tensor operators of rank j_a and j_i, respectively, where the latter is the time-reversed operator of a†_i (j_i). Standard tensor notation has been used to define the spherical tensor couplings. The reduced amplitudes are now the reduced matrix elements of the spherical excitation operator (51). They are defined as r^{ab}(J), where j_a and j_b are coupled to J in left-to-right order, and as r^{abc}_i(J_ab, J_abc, J), where j_a and j_b have been coupled to J_ab, while J_ab and j_c have been coupled to J_abc, also in left-to-right order. The shorthand form of the reduced matrix elements of the connected operator product is defined analogously, as ⟨H̄R̂⟩^{ab}(J) (54) and ⟨H̄R̂⟩^{abc}_i(J_ab, J_abc, J). The transformations that connect the reduced matrix elements of R̂^J with the uncoupled matrix elements are given in Eqs. (A25)-(A28). The final form of the spherical eigenvalue problem (45) is written

⟨H̄R̂⟩^{ab}(J) = ω r^{ab}(J) ,   ⟨H̄R̂⟩^{abc}_i(J_ab, J_abc, J) = ω r^{abc}_i(J_ab, J_abc, J) ,   (56)

where the amplitudes are the reduced matrix elements defined above. Table I presents the main result of this section. The first column lists all possible diagrams that contribute to the matrix-vector product in Eqs. (50) and (56). The remaining two columns contain the closed-form expressions for these diagrams in the uncoupled and in the spherical representation, respectively. All matrix elements and amplitudes are defined in Appendix A, while the permutation operators P̂(a, b) and P̂(ab, c) are defined in Appendix B. Note that in the spherical representation the permutation operators also change the coupling order.
The last two diagrams contain the three-body parts of the similarity-transformed Hamiltonian (8) and have been combined in the spherical representation. The details of how the intermediate operator χ̂ is defined are contained in Appendix C.
Table I. All diagrams for the 2PA-EOM-CCSD method with both ordinary and reduced amplitudes and matrix elements. The reduced amplitudes and matrix elements are defined in Appendix A, while χ^a_i(J) is defined in Appendix C. Note that the two last diagrams are combined into one expression in the spherical formulation and that repeated indices are summed over.

Let us briefly go through the derivation of a single spherical diagram expression. The first diagram in Table I will serve as a good example. This diagram contributes to the 2p matrix elements defined in Eq. (48) in the uncoupled representation and to the reduced matrix elements defined in Eq. (54) in the spherical representation.
The first step is to use the transformation in Eq. (A25) to write the reduced matrix elements (54) in terms of the uncoupled matrix elements (48). This gives Eq. (57), a sum over projection quantum numbers weighted by Clebsch-Gordan coefficients, with Ĵ ≡ √(2J + 1). The diagram contributions to the uncoupled matrix elements are given by

⟨H̄R̂⟩^{ab} ← Σ_e H̄^b_e r^{ae} ,   (58)

where the arrow indicates that it is only one of several contributions to this matrix element. Here, H̄^b_e is a matrix element of the one-body part of the similarity-transformed Hamiltonian (8), and both H̄^b_e and r^{ae} are in the uncoupled representation.
Second, Eq. (58) is inserted into Eq. (57). Note that, for the moment, we are ignoring the permutation operator P̂(ab) that is a part of the diagram. Third, the reverse transformations in Eqs. (A17) and (A26) are used to transform the uncoupled matrix elements of H̄^b_e and r^{ae} to the corresponding reduced matrix elements. This introduces a Kronecker δ, which comes from the application of the Wigner-Eckart theorem to the matrix element of H̄. The Clebsch-Gordan coefficients are orthonormal, so the sums over projection quantum numbers collapse, and the remaining expression simplifies to the spherical form of the diagram listed in Table I. Note that Σ_M 1 = 2J + 1 and that repeated indices are summed over.
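The orthonormality relation invoked in this step, together with the trivial projection sum, is standard angular momentum algebra and reads

```latex
\sum_{m_1 m_2} C^{J M}_{j_1 m_1\, j_2 m_2}\, C^{J' M'}_{j_1 m_1\, j_2 m_2}
  = \delta_{J J'}\,\delta_{M M'},
\qquad
\sum_{M=-J}^{J} 1 = 2J + 1 .
```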
Initially, we left out the permutation operator P̂(a, b) that is needed to generate antisymmetric amplitudes. In the uncoupled representation this operator is defined as

P̂(a, b) = 1̂ − P̂_{a,b} ,

where 1̂ is the identity operator and P̂_{a,b} changes the order of the two indices a and b, but leaves the coupling order unchanged. Let us apply this operator to ⟨H̄R̂⟩^{ab}(J). The result is

P̂(a, b) ⟨H̄R̂⟩^{ab}(J) = ⟨H̄R̂⟩^{ab}(J) − ⟨H̄R̂⟩^{ba}(J) ,

where the last matrix element has the wrong coupling order compared to the reduced amplitudes defined in Eq. (54). To change the coupling order, one of the symmetry properties of the Clebsch-Gordan coefficients is exploited, which introduces the phase factor (−1)^{j_a+j_b−J}. To simplify the notation, the permutation operator in the spherical representation is defined to also change the coupling order; this results in the definition given in Eq. (67). The total contribution from the first diagram in Table I in the spherical representation is then obtained by applying P̂(ab), defined by Eq. (67), to the expression derived above. The three-body permutation operators P̂(ab, c) are defined in the same manner, but they must change the coupling order of three angular momenta. The details have been left to Appendix B.
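The symmetry property referred to above is the standard exchange relation for Clebsch-Gordan coefficients. With it, one natural reading of the spherical permutation operator of Eq. (67) is also given below; since the explicit equation is not reproduced in this text, the second relation should be read as an assumption.

```latex
C^{J M}_{j_a m_a\, j_b m_b} \;=\; (-1)^{\,j_a + j_b - J}\; C^{J M}_{j_b m_b\, j_a m_a},
\qquad
\hat P(a,b) \;\equiv\; \hat 1 \;-\; (-1)^{\,j_a + j_b - J}\, \hat P_{a,b}\, .
```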
A. Model space and interaction
All calculations in this section have been done in a spherical Hartree-Fock basis, based on harmonic oscillator single-particle wave functions. These are identified with the set of quantum numbers {nlj} for both protons and neutrons, where n represents the number of nodes, l represents the orbital momentum, and finally j is the total angular momentum of the single-particle wave function.
The size of the model space is identified by the variable

N_max = max(2n + l) ,   (69)

where N = 2n + l, so the number of harmonic oscillator shells is N_max + 1. All single-particle states with N = 2n + l ≤ N_max are included, and no additional restrictions are made on the allowed configurations. Thus, N_max completely determines the computational size and complexity of the calculations.

Table II. Size of the single-particle space for different values of N_max. Columns two and three list the number of matrix elements for the different model spaces and the memory footprint of the interaction in our implementation. All numbers are based on the coupled representation, also known as jj-scheme.

Table II lists the size of the single-particle space for different values of N_max in the spherical representation. In addition, it includes the total number of matrix elements of the interaction in Eq. (1), as well as the memory footprint in the implementation. Given the memory requirements, it is clear that a distributed storage scheme is needed.
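The bookkeeping behind Table II can be illustrated with a short script that counts the {n, l, j} orbitals allowed by Eq. (69); the matrix-element counts and memory figures in the table depend on implementation details not reproduced here, so only the single-particle counting is shown.

```python
# Count harmonic-oscillator orbitals {n, l, j} with N = 2n + l <= Nmax,
# for one isospin species (multiply by 2 for protons and neutrons).
def orbitals(nmax):
    orbs = []
    for N in range(nmax + 1):
        for l in range(N % 2, N + 1, 2):       # l = N, N-2, ..., 1 or 0
            n = (N - l) // 2
            js = [l + 0.5] if l == 0 else [l - 0.5, l + 0.5]
            for j in js:
                orbs.append((n, l, j))
    return orbs

for nmax in (8, 12, 16):
    orbs = orbitals(nmax)
    m_scheme = sum(int(2 * j + 1) for (_, _, j) in orbs)   # m-scheme dimension
    print(nmax, len(orbs), m_scheme)
```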
In addition to the interaction elements, the Arnoldi vectors in the diagonalization procedure also have to be stored. Typically 150 iterations are performed, and one vector has to be stored for each iteration. Table III lists the size of a single vector for selected target states in various model spaces. As an example, for a double-precision calculation, where each element requires 8 bytes of storage, the Arnoldi diagonalization would require ≈ 76 GB of memory for the J π = 3 + state of 6 Li with N max = 16. Thus the Arnoldi procedure quickly becomes the largest memory consumer in this method. In general, there is a large computational cost from increasing the total angular momentum of the target state, comparable to increasing the size of the model space.
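The memory estimate quoted above follows from a simple product; the sketch below makes the arithmetic explicit, with the vector dimension entered as a hypothetical placeholder rather than a value taken from Table III.

```python
# One double-precision Arnoldi vector is stored per iteration.
def arnoldi_memory_gb(vector_dim, iterations=150, bytes_per_element=8):
    return vector_dim * iterations * bytes_per_element / 1024**3

# A dimension of ~6.8e7 basis states reproduces the ~76 GB quoted in the
# text for the J^pi = 3+ state of 6Li at Nmax = 16 (placeholder value).
print(f"{arnoldi_memory_gb(6.8e7):.0f} GB")
```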
The interaction used in this work is derived from chiral perturbation theory at next-to-next-to-next-to-leading order (N3LO), using the interaction matrix elements of Entem and Machleidt [28]. The matrix elements of this interaction employ a cutoff Λ = 500 MeV, and all partial waves up to relative angular momentum J_rel = 6 are included. The relevant three- and four-body interactions defined by the chiral expansion at this order are not included.
For the treatment of center-of-mass contamination, a softer interaction, where the short-range parts are removed via the similarity renormalization group (SRG) transformation [29], is used. A cutoff λ = 2.0 fm−1 is sufficient for this purpose.
B. Treatment of center of mass
Recently, Hagen et al. [19,30] demonstrated a procedure to show that the coupled-cluster wave function separates into an intrinsic part and a Gaussian for the center-of-mass coordinate. This is important, because the model spaces employed in coupled-cluster calculations are not complete N ℏω spaces, where the basis sets consist of all A-body Slater determinants not exceeding N ℏω in excitation energy. In practical calculations, where the model spaces are not complete, the separation therefore is not a priori guaranteed. As a result, the intrinsic Hamiltonian, where all reference to the center of mass has been removed, is usually employed.
In the EOM-CC approach, one makes further approximations by truncating the many-body basis before a diagonalization is performed. It therefore is not clear that the final wave functions separate in the same way as the coupled-cluster reference state. In the following, I will investigate the center-of-mass properties of 2PA-EOM-CC wave functions. As an example, I will highlight selected solutions for A = 6 nuclei. First, we review the procedure from Hagen et al. [19,30] and introduce the notation.
First, it is assumed that the wave function is the n-th eigenstate of the center-of-mass Hamiltonian (71) with a frequency ω̃ that, in general, differs from that of the harmonic oscillator basis employed in the calculation. The expectation value of this operator should vanish given the correct value of n. For all physical solutions, the expectation value should vanish for n = 0, provided the solutions are converged. This assumption is rooted in the observation that for most coupled-cluster wave functions the expectation value E^(0)_cm(ω) = ⟨Ĥ^(0)_cm(ω)⟩ is, in general, not zero, but is close to zero for a specific frequency. Further, there seems to be very little correlation between this frequency and the frequency of the underlying basis: E_cm(ω̃) vanishes for a given value of ω̃, independent of ω. Under this requirement, the numerical value of ω̃ is given by Eq. (72), which depends only on the frequency of the harmonic oscillator basis employed in the calculation and on E_cm(ω). Finally, one calculates the expectation value E^(n)_cm(ω̃).

In the following, I will present results for the J π = 0 + ground state of 6 He and the first excited J π = 3 + state of 6 Li. In addition, I include a low-lying J π = 1 − state that shows up in the numerical spectrum of 6 He. This state has not been documented experimentally and is a prime candidate for a spurious center-of-mass excitation. All calculations were performed in a model space defined by N max = 16, which was sufficient for converged energies for all states, using an SRG-transformed interaction with a momentum cutoff λ = 2.0 fm −1. Figure 1 shows the expectation value of the center-of-mass Hamiltonian (71), evaluated at the frequency ω̃ = ω, for the three states in question. It is assumed that all states correspond to the center-of-mass ground state, n = 0. The J π = 0 + state of 6 He and the J π = 3 + state of 6 Li show the expected behavior as observed in Hagen et al. [19,30]: the expectation value vanishes for ω ≈ 12 MeV, but not in general. The expectation value with respect to the J π = 1 − state, however, does not vanish for any frequency. It is clearly wrong to assume that it is the ground state of the center-of-mass Hamiltonian (71).
Instead, let us assume that it is the first excited state of the center-of-mass Hamiltonian (71) with n = 1. This would make it a p state with negative parity, which gives a J π = 1 − state when coupled to a J π = 0 + intrinsic state. If this is the case, it will be a spurious center-of-mass excitation where the intrinsic wave function is degenerate with the intrinsic ground state.

Table III. Size of the many-body space in the diagonalization procedure in the Arnoldi algorithm for all states calculated in this work. All numbers are based on the angular-momentum coupled representation (jj-scheme).

Figure captions (Figs. 1 and 2): Expectation value of the center-of-mass Hamiltonian [19,30] as a function of the oscillator parameter ω. Three different states are shown: the J π = 0 + ground state of 6 He, the J π = 1 − excited state in 6 He, and the J π = 3 + excited state in 6 Li. The positive-parity states are assumed to be the lowest eigenstates of the center-of-mass Hamiltonian (71) with n = 0, while the negative-parity state is assumed to be the first excited state of the center-of-mass Hamiltonian with n = 1.

Figure 2 shows the same information as Fig. 1, only now the J π = 1 − state in 6 He is assumed to be the first excited state of the center-of-mass Hamiltonian (71) with n = 1. The expectation value now vanishes for all three states at ω ≈ 12 MeV.
Under these assumptions, the appropriate ω̃ is calculated using Eq. (72). As seen in Fig. 3, where ω̃ is plotted as a function of ω, the frequency of the center-of-mass Hamiltonian is approximately independent of the frequency of the underlying harmonic oscillator basis for all three states.
Finally, the expectation values are calculated using Eq. (71). The results are shown in Fig. 4.
From these results, we can draw a couple of conclusions. First, since the expectation values are approximately zero, this shows that our assumptions were valid. All states are approximate eigenstates of the center-of-mass Hamiltonian (71). This means that the total wave function separates into an intrinsic part and a center-of-mass part for all three states. Second, the wave functions for the ground state of 6 He and the first excited J π = 3 + state of 6 Li factorize into intrinsic states and the ground state of the center-of-mass Hamiltonian. Last, the J π = 1 − state in 6 He factorizes into the intrinsic ground state and the first excited state of the center-of-mass Hamiltonian. It is identified as a spurious center-of-mass excitation and should be removed from the spectrum.
FIG. 4 (caption). The expectation value of the center-of-mass Hamiltonian (71) is calculated at the center-of-mass frequency ω̃ for the J π = 0 + ground state of 6 He, the J π = 1 − excited state in 6 He, and the J π = 3 + excited state in 6 Li. The positive-parity states are assumed to be the lowest eigenstates of the center-of-mass Hamiltonian (71) with n = 0, while the negative-parity state is assumed to be the first excited state of the center-of-mass Hamiltonian with n = 1.

Ideally, one should go through the entire procedure outlined above to make sure that the calculated state is not a spurious center-of-mass excitation. In practice, it is only necessary to verify that the center-of-mass energy E^(0)_cm(ω̃) vanishes for some value of ω̃.

There are several reasons why the results in this section are only approximate. First, the method used to calculate expectation values is not exact. Second, the single-particle space employed in the calculations is cut off at some maximum energy. Although it is verified that the total energy is converged with respect to this cutoff, properties of the wave function might require higher cutoffs. Third, the results obtained by the coupled-cluster machinery are truncated both in the coupled-cluster expansion and in the operator used to define the diagonalization space. Finally, the interaction used in these calculations has been evolved using SRG transformations. The three- and many-body forces induced by this transformation have not been included in these calculations. If one assumes that the first two items yield small deviations from zero, then it might be possible to use this to evaluate how good the coupled-cluster truncations are and say something about the current level of approximation. However, one will need to incorporate 4p-2h corrections to analyze this further.
The purpose of this section has been to show that it is possible to identify and exclude spurious center-of-mass excitations for both ground and excited states calculated with EOM-CC theory. The wave function factorizes, to a very good approximation, into an intrinsic part and a harmonic oscillator eigenfunction for the center-of-mass coordinate. To determine why this is the case will require additional research and is beyond the scope of this article.
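The diagnostic described in this section can be summarized as a one-dimensional scan: evaluate the expectation value of the center-of-mass Hamiltonian on a grid of frequencies and look for a zero crossing. The sketch below assumes access to a routine returning that expectation value (here a stand-in toy function); it illustrates the procedure and is not the code used in this work.

```python
import numpy as np

def find_cm_frequency(ecm_expectation, omegas_mev):
    """Return the frequency (MeV) where <Hcm(omega)> crosses zero, or None."""
    values = np.array([ecm_expectation(w) for w in omegas_mev])
    sign_change = np.where(np.diff(np.sign(values)) != 0)[0]
    if sign_change.size == 0:
        return None          # e.g. a spurious state treated with the wrong n
    i = sign_change[0]       # linear interpolation between the two grid points
    w0, w1, v0, v1 = omegas_mev[i], omegas_mev[i + 1], values[i], values[i + 1]
    return w0 - v0 * (w1 - w0) / (v1 - v0)

# Toy expectation value that vanishes near 12 MeV, mimicking Figs. 1 and 2.
print(find_cm_frequency(lambda w: 0.8 * (w - 12.0), np.linspace(8.0, 20.0, 49)))
```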
C. Applications to 6 Li and 6 He
For any given reference nucleus, there are only three nuclei accessible to the 2PA-EOM-CC method. Using 4 He as the reference, one can add two protons to calculate properties of 6 Be, two neutrons for 6 He, and finally a proton and a neutron to calculate properties of 6 Li. Of these, only 6 Li and 6 He are stable with respect to nucleon emission and will be the focus of this section. The structures of 6 Li and 6 He differ markedly. This is important, because the quality of the current level of approximation will inevitably depend on the structure of the nucleus under investigation. 6 Li is well bound and has four bound states below the nucleon emission threshold at 4.433 MeV [31]. The ground state has spin-parity assignment J π = 1 +, while the first excited states have J π = 3 +, 2 + and 0 +. The J π = 0 + ground state in 6 He has a two-neutron halo structure, bound by only 800 keV [31] compared to 4 He. There are no bound excited states, only a narrow resonance at 1.710 MeV [31], and recently resonances at 2.6 and 5.3 MeV [32] have also been documented.
First, let us look at convergence with respect to the size of the model space. Coupled-cluster theory is based on a finite basis expansion, where N max effectively determines the numerical cutoff. The cutoff is increased until the corrections are so small that the uncertainties in the method dominate the error budget. Typically the corrections are down to a tenth of a percent of the total binding energy. Extrapolations to infinite model spaces [33,34] have not been performed here but will be included in future work. Figure 5 shows the calculated total binding energy of 6 Li as a function of the oscillator frequency ω. Different lines correspond to different model spaces. At N max = 16, there is a shallow minimum around ω = 24 MeV and, in a 10-MeV range including this minimum, the binding energy varies by approximately 100 keV. This is less than half a percent of the total energy. At low frequencies, the energy deviates substantially from the minimum, due to the lack of resolution in the single-particle space.
Note that the gain in binding energy when going from N max = 14 to N max = 16 is also very small, about 40 keV. The binding energy of 6 Li is converged with respect to the size of the model space (N max), and the energy at ω = 24 MeV will be tabulated. The picture is largely identical for the binding energy of 6 He, only the minimum in energy occurs at ω = 20 MeV. Here the difference in energy between the two largest model spaces is about 140 keV. Figure 6 shows that the excited states follow the same pattern of convergence as the ground states. Here the total energy of the J π = 3 + state in 6 Li is plotted as a typical example. As before, the energy is plotted as a function of ω and different lines correspond to different values of N max. In this section, the excitation energy will be defined as

E_x(J π) = E_{J π}(ω) − E_gs(ω) ,

where E_{J π}(ω) is the total energy of the excited state with spin-parity assignment J π, calculated at the oscillator frequency ω. Moreover, E_gs(ω) is the ground-state energy calculated at the same frequency. In Fig. 7, the convergence pattern of the excitation energies for selected states in the spectrum of 6 Li is shown. The horizontal axis denotes the size of the model space, where the values in the rightmost column are the experimental values [31]. All excitation energies have been calculated at ω = 24 MeV, which corresponds to the minimum of the ground-state energy. There is very little model-space dependence at N max = 16, and none of the states shown are classified as spurious center-of-mass excitations according to the prescription in Sec. IV B. A second J π = 1 + state was found higher in the spectrum, but this state was found to be spurious and was therefore excluded.
In Fig. 8, an equivalent plot for the first J π = 2 + excited state in 6 He is shown. This result is also converged with respect to the size of the model space. No significant center-of-mass contamination was found in either this state or the ground state. As already discussed, a low-lying J π = 1 − state was also found, but was identified as a spurious center-of-mass excitation. Note that all excitation energies for 6 He were calculated at ω = 20 MeV.
Let us also look at some properties of the wave function. Although it is not an observable, other expectation values might be more sensitive to changes in the wave function than the energy. First, the partial norms n(2p0h) and n(3p1h) are defined as the summed squares of the 2p-0h and 3p-1h amplitudes, with prefactors of 1/2 and 1/6 compensating the overcounting of antisymmetric amplitudes, and normalized such that n(2p0h) + n(3p1h) = 1. The amplitudes r^{ab}(J) and r^{abc}_i(J_ab, J_abc, J) are the spherical amplitudes defined in Eqs. (A25) and (A27), respectively, while the J_x are angular momentum labels. Note that the angular momentum factors are included so the partial norms are consistent between the coupled and uncoupled schemes. These norms quantify the part of the wave function in 2p-0h and 3p-1h configurations, respectively. Note also that they differ in how they are defined from those used in Hagen et al. [11], where the 1/2 and 1/6 prefactors were not used. This gave a larger 3p-1h norm than those in this work, due to a significant overcounting of the 3p-1h amplitudes.
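In the uncoupled (m-scheme) picture the partial norms reduce to simple sums of squared amplitudes with the 1/2 and 1/6 prefactors mentioned above. The sketch below illustrates this, with random arrays standing in for the actual EOM-CC amplitudes; the angular-momentum-coupled version additionally carries the factors defined in Appendix A.

```python
import numpy as np

def partial_norms(r_ab, r_abci):
    """2p-0h and 3p-1h partial norms, normalized so that they sum to one."""
    n2p = 0.5 * np.sum(np.abs(r_ab) ** 2)        # 1/2 for antisymmetric 2p pairs
    n3p1h = np.sum(np.abs(r_abci) ** 2) / 6.0    # 1/6 for antisymmetric 3p triples
    total = n2p + n3p1h
    return n2p / total, n3p1h / total

rng = np.random.default_rng(0)
print(partial_norms(rng.normal(size=(10, 10)), rng.normal(size=(10, 10, 10, 4))))
```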
Second, the total weights are defined by summing the squared amplitudes of all configurations with a given partial-wave content, where the label pw identifies the partial-wave content of the weight. The sum is over all configurations with this partial-wave content, because the weights of individual configurations are not stable. In addition, spin-orbit partners are not distinguished. In Table IV, partial norms and dominant weights of selected states in 6 Li and 6 He are listed. A few comments are in order. First, all physical states are consistent with the shell-model picture, where the dominant contributions to the wave function come from two valence nucleons in the p shell. Only the J π = 1 − state in 6 He contains contributions from the sd shell, but this is natural as no pure p-shell configuration will give a negative-parity state. It is also a spurious center-of-mass excitation and is excluded from the spectrum. Second, the 2p-0h norm for all physical states is around 0.9. Only the spurious state has a significantly lower norm, at 0.84. The remaining ≈ 0.1 in the 3p-1h norm is needed to relax the reference wave function as it changes due to the presence of the extra nucleons. Finally, the wave functions of the ground state of 6 He and the first excited J π = 0 + state in 6 Li are very similar. This is not surprising, since they can be viewed as two parts of a degenerate isospin triplet. Table V shows results with estimated numerical uncertainties for the ground and selected excited states of both 6 He and 6 Li. For comparison, both experimental values and results from a no-core shell-model (NCSM) calculation [35] are tabulated where data are available. Note that the results from the NCSM calculation are based on the same interaction as the results from this work, but the interaction is renormalized using the procedure defined in Suzuki and Lee [36] before the diagonalization was performed. In addition, the final results were extrapolated to an infinite model space.
Let us discuss the uncertainties indicated by the parentheses in the table. For the results from this work, listed in the second column, the numbers in parentheses give the difference in energy between the two largest model spaces. The results from Navrátil and Caurier [35] give the extrapolation errors in the last column, while the experimental energies [31] in the first column are listed without uncertainties. Figure 9 shows a graphical representation of the data in Table V.

FIG. 9 (caption). Excitation levels of selected states in 6 Li and 6 He, calculated using 2PA-EOM-CC (this work) and NCSM [35], compared to experimental [31] values.

Compared to the results from the NCSM calculation, our results are quite promising. First, the ground-state energy of 6 Li is well within the uncertainties of the "exact" result, while the ground-state energy of 6 He is just outside. The difference between the two nuclei can be explained by the extended spatial distribution of 6 He. Additional correlations are necessary to account for this structure. Although the α core in 6 He is expected to stay largely unchanged when adding two neutrons, the distribution of these extra neutrons is biased
in one direction. This results in a skewed center of mass compared to the center of mass of the α core alone. Additional correlations are necessary to absorb the resulting oscillations of the α core with respect to the combined center of mass. The spatial distribution of 6 Li is tighter, so this effect is not that prominent. Second, the ordering of excited states in 6 Li is reproduced. Finally, the excitation energies are consistently overestimated. For the first J π = 0 + and 2 + states the differences between the two calculations are small enough to be ascribed to differences in the interaction used. But the difference for the J π = 3 + state, however, is too large for such a simple explanation. Neither the partial norms nor the total weights listed in Table IV provide any hint of explanation for this discrepancy. About 90 % of the wave function is in 2p-0h configurations, which is comparable to the ground state. The wave function is dominated by configurations where both nucleons are in p orbitals, which is consistent with the shell-model picture. Furthermore, the level of convergence for this state is no different from the other excited states. As noted in Sec. IV B, with the SRG evolved interaction, the J π = 3 + state had a slight center-of-mass contribution, which was not present in the other states. This was illustrated in Fig. 4, but a similar calculation using the bare interaction was too computationally intensive to extract any meaningful information. I include it because it might indicate that additional correlations are needed in the calculation, either in the reference or the EOM operator. This matter needs to be investigated further, but currently the implementation will not allow model spaces large enough for a converged description of the center-of-mass admixture in the final wave function.
Using the in-medium similarity renormalization group (IM-SRG), Tsukiyama et al. [37] performed a similar study with a softer interaction. There, the J π = 3 + state in 6 Li is reproduced at the same level of accuracy as the other bound states.
Let us also look at some of the differences between the results in this work and the experimental data. First, all excitation energies are overestimated compared to data. Again, the J π = 3 + state is exceptional, but this has been discussed in detail by Navrátil and Caurier [35]. The matter was resolved by the inclusion of three-nucleon forces [38], which also brought the binding energy very close to data.
There is also an ≈ 500-keV difference for the J π = 2 + resonance in 6 He, but here the effects of three-body forces might be less important. This state was also investigated using a chiral interaction with a different cutoff of 600 MeV. With this interaction, the excitation energy of this state was unchanged. That was not the case for the excited states in 6 Li, where especially the J π = 3 + state turned out to be very cutoff dependent. Since the J π = 2 + state in 6 He is a resonance, the continuum is expected to have a larger impact. The current single-particle basis cannot handle the description of both bound, resonance and continuum states that are necessary in this case. These effects have not been included in this calculation, as the focus has been on properties of the method rather than the interaction. The method has been extended to include a Gamow basis as in Michel et al. [39] and Hagen et al. [40]. It has already been applied to 26 F [15], but a comprehensive discussion is beyond the scope of this article.
Summing up this section, I would like to point out that for well-bound states, with simple structure, the current approximation will yield total energies comparable to exact diagonalization. The calculations can be done in sufficiently large model spaces for the results to be converged for six nucleons, but for certain states, the effects of 4p-2h configurations need to be investigated. To compare to experimental data, however, both three-nucleon forces and continuum degrees of freedom are necessary.
D. Applications to 18 O and 18 F
When 16 O is used as a reference state, the three isobars 18 O, 18 F, and 18 Ne are reachable by the 2PA-EOM-CCSD method. All have well-bound ground states and a rich spectra of bound excited states below their respective nucleon emission thresholds. The spectrum of 18 F is especially rich, as the exclusion principle does not affect the placement of nucleons in the sd shell. The proton-neutron interaction is responsible for the compressed spectrum in the fluorine isotope, while strong pairing effects in 18 O result in a lower ground-state energy. In both 18 O and 18 Ne the spectra are opened up and the first excited states are higher in energy. Our current focus is on convergence and the viability of this method. Thus, 18 Ne is not explicitly discussed, as results are similar to those of 18 O. Let us first look at the convergence of the binding energy of 18 O. Figure 10 shows the ground-state energy of 18 O as a function of the oscillator parameter ω. The different lines correspond to different model spaces, parametrized by the variable N max (69). A shallow minimum develops around ω = 32 MeV, where the energy is converged with respect to the size of the model space. The difference in energy is about 20 keV when the size of the model space is increased from N max = 14 to N max = 16. For a wide range of values around the minimum, the ground state energy shows very little dependence on the ω parameter. Thus, the result is converged with respect to the size of the model space.
A similar result is obtained for the ground-state energy of 18 F, where the difference in energy is around 160 keV between the two largest model spaces. This is almost an order of magnitude larger than for the ground state of 18 O but is still well within 1% of the total energy. Figure 11 shows the total energy of the first excited J π = 3 + state in 18 O for different model spaces. Here, a shallow minimum develops at ω = 28 MeV. Moreover, this state is very well converged, with a difference in energy of only about 25 keV between calculations in the two largest model spaces. It is clear that the rate of convergence differs for different values of ω. When excitation energies are wanted, different choices of ω lead to different results. Let us discuss two options to evaluate the excitation energy. First, the total energies can be treated as variational results, where the lowest energy for the ground state and the lowest energy for the J π = 3 + excited state are chosen. Thus, at N max = 16 the excitation energy can be calculated as

E_x(3 +) = E_{3 +}(28 MeV) − E_{0 + 1}(32 MeV) ,   (77)

where E_{3 +}(28 MeV) is the total energy of the J π = 3 + excited state, calculated at ω = 28 MeV, while E_{0 + 1}(32 MeV) is the ground-state energy calculated at ω = 32 MeV. Second, the same value of ω can be used for both energies, typically the one where the ground state has a minimum. Thus, for the current case the excitation energy is calculated as

E_x(3 +) = E_{3 +}(32 MeV) − E_{0 + 1}(32 MeV) .   (78)

The difference in energy between these two options is minimal if sufficiently large model spaces are used, but it will have a significant impact on the rate of convergence. The effect is best viewed in Figs. 12 and 13, where the excitation energies of the first J π = 2 +, 3 +, and 4 + excited states in 18 O are plotted as functions of the size of the model space. In Fig. 12, the excitation energies are calculated according to Eq. (77), while they are calculated according to Eq. (78) in Fig. 13. For 18 O, this choice will affect only the J π = 3 + state, as the other states all have minimum values at ω = 32 MeV. The effect is significant, but the first approach of Fig. 12 correctly depicts the level of convergence of the excited states and will be used in the following.
As in the previous section, let us look at some properties of the wave functions. First, the partial norms defined in Eq. (74) and the total weights (76) of the different configurations are calculated. The results are tabulated in Table VI. All positive-parity states have two nucleons in the sd shell and are consistent with the standard shell model picture. As in the previous section, all 2p-0h norms are close to 0.90, except for the negative-parity states which are closer to 0.80. The negative-parity states are dominated by cross-shell configurations as these are the only 2p-0h configurations that can give a negative parity. 3p-1h excitations from the p shell give a substantial contribution to the 3p-1h norm.
Second, the center-of-mass contamination of each wave function was analyzed according to the prescription in Sec. IV B. Of the states tabulated in Table VI, four states had a significant center-of-mass contamination. Let us first focus on the three negative-parity states in 18 O. Figure 14 shows the excitation energies of the negative-parity states in 18 O, and they are not yet converged at N max = 16. Although they had a large center-of-mass component, it was not possible to establish what kind of center-of-mass excitations these states corresponded to. Calculations in larger model spaces need to be performed to correctly describe these states. However, it is also necessary to include 4p-2h correlations to get these states right. This can be understood by examining how negative-parity states can occur in 18 O. First, they can be produced by placing one neutron in the sd shell, while the other is placed in the pf shell. If this were the dominant configuration, the current truncation would have been enough. Second, they can also be produced by placing two neutrons in the sd shell and exciting a nucleon from the p shell up to the sd shell. If these kinds of excited configurations are comparable in energy to the first kind, 3p-1h configurations start to dominate and 4p-2h configurations are necessary for the proper relaxation of the wave function. One can ask whether the center-of-mass contamination would change if these configurations were included and whether it is a result of a poorly converged wave function, but this will be a topic for future work.
In the spectrum of 18 O, there are three bound J π = 0 + and 2 + states. The second J π = 0 + state is especially interesting for this method, as it is a 4p-2h state [42]. In the shell-model language, it is an intruder state, because configurations outside the sd shell are important to get this state right. As the current implementation includes only the 3p-1h configurations, this state can provide clues as to what type of behavior can be expected from states that are not converged with respect to the level of approximation. Table VI lists three J π = 0 + states in 18 O and none of them stands out. They all have similar partial norms of around 88% and are dominated by two neutrons in d 5/2, s 1/2, and d 3/2, respectively. In Fig. 15 the convergence patterns for these states are plotted, along with those of the three J π = 2 + states. All states show a similar level of convergence and it is not possible to single out one of the states. However, if we look at the center-of-mass contamination of these states, the third J π = 0 + shows a large contamination, while the other states show almost none. Assuming that missing many-body correlations will manifest as larger center-of-mass contaminations in the final wave functions, this state is associated with the experimental second J π = 0 + state. The calculated second J π = 0 + is closer in energy, but including effects of three-nucleon forces pushes this state higher in energy and very close to the experimentally observed J π = 0 + state at 5.34 MeV [10]. A similar effect occurs among the J π = 2 + excited states, but it is less prominent. Here, the center-of-mass contamination was negligible for all but the third J π = 2 + state, but even there, the contamination was small compared to the third J π = 0 + state. Let us summarize the discussion of missing many-body correlations. Three different markers have been identified to indicate missing physics. Unfortunately, none of them can be used quantitatively and all must be evaluated simultaneously to form a general picture. First, the partial norms can be used to differentiate among different states. From these calculations it seems that a 2p-0h norm of around 90% is the standard. A lower partial norm might indicate the need for 4p-2h or higher correlations.
Second, we look at the convergence patterns and if energies converge slowly, this probably means that something is missing from the calculation. In weakly bound states, for example, continuum effects result in the need for additional resolution in the single-particle basis. Finally, we look at the level of center-of-mass contamination present in the wave function. Either the state can be identified as a spurious center-of-mass excitation or a small non-zero center-of-mass component might indicate missing correlations. None of these arguments can be analyzed in detail before 4p-2h configurations are included. This is a work in progress, but, computationally, it will only be possible to include these configurations in a small single-particle space. If the 3p-1h and 4p-2h configurations are defined only in a so-called active space around the Fermi level, the computational cost might be manageable. This has been done successfully in Gour et al. [43] and should prove to be a valuable approximation also in this method. The formation of a correlated α cluster around the Fermi level is important in this mass region and can hopefully be accounted for using a minimal set of 4p-2h configurations. Let us also look at the convergence of selected states in 18 F. Figure 16 shows the excitation energy of the first few states in 18 F for different model spaces. Here all states are relatively well converged, with only the J π = 4 + state showing some model space dependence at N max = 16. None of these states have significant center-of-mass contamination and all partial norms are on the same level as can be seen in Table VI. This table also shows there is a slight contribution to the wave function from outside the sd shell. Figure 17 shows the excitation spectra of 18 O, 18 F, and 18 Ne. Only states that are considered good are plotted and compared to data.
For future comparison, Table VII lists the numerical values used in Fig. 17, together with the ground-state energies. The uncertainty is calculated as the difference in energy between calculations in the two largest model spaces. The experimental values are from Tilley et al. [41].
Table VII (caption fragment). [...] [41]. The number in parentheses indicates the level of convergence and is the difference in energy between calculations in the two largest model spaces.

The total binding energy of 18 O is comparable to what was found in Hergert et al. [44], where the in-medium similarity renormalization group (IM-SRG) [6] method was used to compute the ground-state energies of even oxygen isotopes. Although they used an SRG-evolved interaction based on the chiral interaction at fourth order by Entem and Machleidt [28], induced three-nucleon forces were also included in the final calculation to make the results comparable to those in Table VII. Compared to data, the level ordering in 18 F is reproduced, but the excitation energies are systematically overestimated. Disregarding the missing states in 18 O, the level ordering is also reproduced, but here the excitation energies are systematically underestimated. This is consistent with shell-model calculations [45,46] of these nuclei using different model-space interactions. For results based on the chiral interactions used in this work, the inclusion of three-nucleon forces [10,47] gives results that better match experimental data. Recently, Ekström et al. [14] showed that the effects of three-nucleon forces depend on the low-energy constants used in the parametrization of the chiral potential. To accurately evaluate the quality of these forces will require not only three-nucleon forces and continuum degrees of freedom but also additional correlations in the many-body wave function [10,44,48-51].
V. CONCLUSIONS AND OUTLOOK
The spherical version of the 2PA-EOM-CCSD method has been presented. This is appropriate for the calculation of energy eigenstates in nuclei that can be described as two particles attached to a closed (sub-)shell reference. The method has been evaluated in both A = 6 and A = 18 nuclei, where the results were converged with respect to the single-particle basis.
It was also shown that the wave function from a 2PA-EOM-CCSD calculation separates into an intrinsic part and a harmonic oscillator eigenfunction for the center-of-mass coordinate, not necessarily the ground state of the harmonic oscillator Hamiltonian. Wave functions with significant center-of-mass contamination were either identified as spurious center-of-mass excitations or were not converged with respect to the current approximation level.
In comparison with a full diagonalization, both ground-state and excited-state energies were in general very accurate. However, one excited state in 6 Li deviated significantly from the "exact" result, showing the need to include additional correlations like 4p-2h configurations for the accurate treatment of complex states. For simple states, where a 2p structure is dominant, the current level of truncation is adequate.
Both three-nucleon forces and a correct treatment of the scattering continuum are needed to refine the results.
This work was partly supported by the Office of Nuclear Physics, U.S. Department of Energy (Oak Ridge National Laboratory), under Contracts No. DE-FG02-96ER40963 (University of Tennessee) and No. DE-SC0008499 (NUCLEI SciDAC-3 Collaboration). An award of computer time was provided by the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

The last two diagrams in Table I contain the three-body parts of the similarity-transformed Hamiltonian (8), and these deserve special attention. These elements can be factorized in just the same way as the coupled-cluster amplitude equations, to reduce the computational cost of these diagrams. For the two three-body contributions to the 3p-1h amplitudes, we have swapped the indices to facilitate the angular-momentum coupling. Now the angular-momentum coupling in the coupled-cluster amplitudes matches the coupling in the 3p-1h amplitudes, so we do not need to break these couplings when rewriting the diagram in a spherical basis. In the spherical formulation, it is clear that the χ^c_m are the reduced matrix elements of the tensor operator χ̂^J, which has the same rank as R̂^J. This is a consequence of the scalar character of H̄. By coupling the matrix elements in Eq. (C3) to form reduced matrix elements, we obtain the expression for the reduced matrix elements of χ̂^J used in Table I.
Microbiota based personalized nutrition improves hyperglycaemia and hypertension parameters and reduces inflammation: a prospective, open label, controlled, randomized, comparative, proof of concept study
Background Recent studies suggest that gut microbiota composition, abundance and diversity can influence many chronic diseases such as type 2 diabetes. Modulating gut microbiota through targeted nutrition can provide beneficial effects, leading to the concept of personalized nutrition for health improvement. In this prospective clinical trial, we evaluated the impact of a microbiome-based targeted personalized diet on hyperglycaemic and hyperlipidaemic individuals. Specifically, BugSpeaks®, a microbiome profile test that profiles microbiota using next generation sequencing and provides personalized nutritional recommendations based on the individual microbiota profile, was evaluated. Methods A total of 30 participants with type 2 diabetes and hyperlipidaemia were recruited for this study. The microbiome profile of the 15 participants (test arm) was evaluated using whole genome shotgun metagenomics, and personalized nutritional recommendations based on their microbiota profile were provided. The remaining 15 participants (control arm) were provided with diabetic nutritional guidance for 3 months. Clinical and anthropometric parameters such as HbA1c, systolic/diastolic pressure, C-reactive protein levels and microbiota composition were measured and compared during the study. Results The test arm (microbiome-based nutrition) showed a statistically significant decrease in HbA1c level from 8.30 (95% confidence interval (CI) [7.74–8.85]) to 6.67 (95% CI [6.2–7.05]), p < 0.001, after 90 days. The test arm also showed a 5% decline in the systolic pressure whereas the control arm showed a 7% increase. Incidentally, a sub-cohort of the test arm of patients with >130 mm Hg systolic pressure showed a statistically significant decrease of systolic pressure by 14%. Interestingly, CRP level was also found to drop by 19.5%. Alpha diversity measures showed a significant increase in Shannon diversity measure (p < 0.05) after the microbiome-based personalized dietary intervention. The intervention led to a minimum two-fold (log2 fold change) increase in species like Phascolarctobacterium succinatutens, Bifidobacterium angulatum, and Levilactobacillus brevis, which might have a beneficial role in the current context, and a similar decrease in species like Alistipes finegoldii and Sutterella faecalis, which have earlier been shown to have some negative effects in the host. Overall, the study indicated a net positive impact of the microbiota based personalized dietary regime on the gut microbiome and correlated clinical parameters.
INTRODUCTION
The human gut microbiome, a complex ecosystem of trillions of microorganisms, plays a crucial role in our health and disease. It does so by influencing various physiological processes, including metabolism, nutrition, immunity, and even cognitive and behavioural functions (Kim et al., 2019; He & You, 2020; Cunningham, Stephens & Harris, 2021; Vandeputte, 2020; Gomaa, 2020). Further, the gut microbiome's composition and diversity vary among individuals, influenced by factors such as age, diet, and environment (Odamaki et al., 2016). A balanced gut microbiome, or eubiosis, is crucial for health, while an imbalance, or dysbiosis, can contribute to various diseases like systemic inflammation, insulin resistance, and autoimmune and metabolic disorders (Srivastava et al., 2022). Furthermore, a strong and expanding evidence base supports the influence of gut microbiota in human metabolism, particularly in relation to conditions like hyperglycaemia and hyperlipidaemia (Kim et al., 2019; He & You, 2020; Cunningham, Stephens & Harris, 2021; Srivastava et al., 2022).
Emerging research has consistently demonstrated that the gut microbiota can impact the nutritional status and health of the host. The gut microbiota has been shown to influence control over essential processes such as nutrient absorption, storage, and metabolism. These findings suggest the potential role played by the gut microbiota, including its metabolites, which can possibly be harnessed to promote regulation of metabolic syndrome (Vandeputte, 2020; Gomaa, 2020; Brunkwall & Orho-Melander, 2017; Martínez-López et al., 2022). Specifically, this can be achieved by exploiting the responsiveness of the gut microbiome to changes in diet. In other words, the gut microbiome has been shown to adapt remarkably fast to alterations in our diet, leading to differential abundance of various bacteria and changes in their diversity based on the type of food consumed. This adaptability can be advantageous for positive regulation of metabolism (Leeming et al., 2019). Consequently, modulation of the gut microbiota, through various means, has emerged as an important tool to fulfil nutritional requirements, combating malnutrition and even diseases such as hyperglycaemia and hyperlipidaemia (Kim et al., 2019; He & You, 2020; Liu et al., 2019).
Several recent studies have reported the application of personalized nutrition to improve intestinal microflora, and in turn the health status of the individual (Vandeputte, 2020; Lee, Davies & Barnett, 2023; Valdes et al., 2018; Song & Shin, 2022). Specifically, the difference in individual responsiveness based on the gut microbiota has the potential to become an important research approach for personalized nutrition and health management (Valdes et al., 2018). In the context of hyperglycaemia, the gut microbiota's influence on postprandial glycaemic responses to identical meals has been demonstrated (Wilson et al., 2021). This suggests that a personalized diet based on one's gut microbiome could significantly help in lowering hyperglycaemia and alleviating its negative effects (Singh et al., 2017; Ben-Yacov et al., 2023). A study conducted by the Mayo Clinic Centre for Individualized Medicine found that a personalized diet based on one's microbiome (along with genetics, age, and activity level) is a far better way to control one's blood glucose than cutting carbohydrates and calories (Mendes-Soares et al., 2019; Stanimirovic et al., 2022; Kallapura et al., 2023). This approach fits into the precision medicine paradigm by considering different diet patterns and adopting the best one based on individual microbiota composition to achieve significant adiposity reduction and improve metabolic status (De Coster et al., 2018; Nurk et al., 2022; Langmead & Salzberg, 2012).
Therefore, microbiota modulation through diet provides an impactful tool to improve disease conditions such as type 2 diabetes. In this context, we have evaluated BugSpeaks®, a microbiome profiling test that provides nutritional recommendations based on the individual's microbiota profile ascertained through whole genome shotgun metagenomic sequencing. The BugSpeaks®-derived nutritional recommendation is individualized and aims to improve balance in the gut microbiota ecosystem. It is theorized that improved balance in the gut microbiota ecosystem can decrease chronic inflammation and help improve production of SCFA and other helpful metabolites, thus alleviating chronic disease symptoms and improving overall health. In this prospective interventional trial, therefore, we have investigated the impact of BugSpeaks® in hyperglycaemic individuals. Specifically, the impact on clinical parameters such as HbA1c, total cholesterol, LDL, HDL, triglycerides, non-HDL cholesterol, CRP and IL-10 (inflammatory markers) has been evaluated. HbA1c, dyslipidaemia parameters and inflammation markers such as CRP have been evaluated in type 2 diabetes patients earlier (Schofield et al., 2016; Stanimirovic et al., 2022). Most importantly, the impact of the personalized diet on the gut microbiome, and possible correlations with the clinical parameters, have also been evaluated.
MATERIALS AND METHODS
Portions of this text were previously published as part of a preprint (Kallapura et al., 2023).
Study design and subjects' selection
This prospective interventional trial was an open-label, controlled, randomized, comparative, parallel-group study with two arms, conducted in conformity with the ICH-GCP (E6 R2) guidelines, the Helsinki Declaration, and local regulatory requirements (Indian GCP, Indian Council of Medical Research, and New Drugs and Clinical Trials Rules, 2019). There were no further changes or amendments made after protocol approval. The study was initiated only after the receipt of ethics committee (EC) approval (Institutional Ethics Committee, Charak Hospital, Reg: ECR/152/Inst/MP/2021, Bhopal, India). After obtaining informed consent, subjects were screened by undergoing various assessments as per the schedule of assessments mentioned in the protocol. This trial is registered with the Clinical Trials Registry-India under the number CTRI/2022/05/042791, dated 24/05/2022.
The trial included 30 Indian adults with hyperglycaemia and hyperlipidaemia, with HbA1c ≥ 8% or LDL cholesterol ≥ 120 mg/dL, or both. Both male and female participants were included in the study, with an age range of 42-65 years (mean 53.82 ± SD 7.97 years), varying body weights (Body Mass Index (BMI) of 19.6-33.3 kg/m², mean 24.32 ± SD 2.89), and willingness to provide written informed consent and comply with study instructions for its duration, specifically to follow a personalised diet for 3 months. Subjects were excluded if they had a history of alcohol, smoking or tobacco consumption; a history of clinically significant physiological, neurological or psychiatric disease; organ transplantation or surgery in the past 6 months; known hypersensitivity, idiosyncratic reaction or intolerance to any dietary changes or related products, or severe hypersensitivity reactions (such as angioedema) to any drugs or food products; or difficulty with donating blood. With the evaluation of the gut microbiome in mind, individuals treated with oral antibiotics during the 2 weeks prior to the study, those undergoing any dietary restrictions, and those consuming antioxidant supplements, fermented foods (>3 servings per week) and/or laxatives were also excluded. Women taking oral contraceptives or who were pregnant or breastfeeding were also excluded. Participants who met the inclusion criteria were further encouraged not to change their current physical activities and to refrain from any changes in their dietary habits before starting the clinical trial.
After obtaining informed consent, subjects were screened by undergoing various assessments. Subsequently, eligible subjects were randomized to receive either the BugSpeaks®-based personalised diet (15 subjects) or a regular diet (15 subjects) for 90 days at the trial site. Subjects in both arms continued with their stable dose of diabetic medication (sulphonylureas, DPP-4 inhibitors, thiazolidinediones, nateglinide). The dietary regimens in the study were monitored for 90 days under the supervision of the investigator, through four onsite visits (Day 1, Day 30, Day 60 and Day 90) and daily patient diary recording. Briefly, the subjects were provided nutritional recommendations and meal plans. The subjects were asked to follow the meal plan and to self-report the food items consumed in a patient diary. This was remotely monitored by the investigator and nutritionist by phone call and during onsite visits. Safety assessment was done through subject reporting and laboratory parameters.
Investigation product: BugSpeaks®
The investigation product (IP) used in this clinical trial was a personalized gut microbiome-based diet, generated from the individual's gut microbiome. Briefly, the personalization of the diet was based on an in-silico compilation of associations between the gut microbiome, microbial metabolism, disease and nutrition (and, by extension, foods). We overlaid and integrated these resources onto an individual's gut microbiome profile in order to establish nutritional associations with one's gut microbiome, with the overall objective of formulating a personalized set of dietary recommendations.
To elaborate, we have created a proprietary database integrating current knowledge of the gut microbiome, microbial metabolism and nutrition (and, by extension, foods). The database is an effort to interlink and integrate these resources with an individual's microbiome, with the objective of formulating a set of dietary recommendations personalized for the individual. The microbe abundance information is fed into a proprietary algorithm that considers which food items are associated with either increasing or decreasing the abundance of each microbe (derived from a curated proprietary database). Depending on the overall abundance profile, the final frequency of each food item is arrived at. This leads to the generation of a personalized nutritional recommendation based on the microbiota profile of an individual, as sketched below. In this study, the characterized microbiome of every participant in the intervention group was overlaid with this curated information to generate personalized dietary recommendations, using proprietary algorithms. The overall objective is to customize the diet, with different frequencies of foods, in order to increase largely beneficial microorganisms and reduce any dysbiosis in the gut.
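The actual BugSpeaks® algorithm and database are proprietary and not described in detail here; the following minimal Python sketch is therefore only a hypothetical illustration of the general idea of weighting food-microbe associations by a subject's abundance profile. All taxa, foods, weights and directions below are invented for explanation.

```python
# Hypothetical sketch: score foods by how well their curated effects on microbes
# align with the desired abundance shifts, weighted by the subject's abundances.
import pandas as pd

# Hypothetical per-subject relative abundances (%) from the metagenomic profile.
abundances = pd.Series({"Prevotella_copri": 30.0, "Roseburia_intestinalis": 2.0})

# Hypothetical curated associations: +1 if a food increases the taxon, -1 if it decreases it.
food_effects = pd.DataFrame(
    {"whole_grains": {"Prevotella_copri": -1, "Roseburia_intestinalis": +1},
     "red_meat":     {"Prevotella_copri": +1, "Roseburia_intestinalis": -1}})

# Hypothetical target direction: reduce over-abundant taxa, increase depleted ones.
target = pd.Series({"Prevotella_copri": -1, "Roseburia_intestinalis": +1})

# Score each food by agreement between its effects and the targets, weighted by abundance.
scores = food_effects.mul(target, axis=0).mul(abundances, axis=0).sum()
recommended_frequency = scores.rank(ascending=False)  # higher score -> recommended more often
print(scores.sort_values(ascending=False))
```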
Study protocol and intervention
The current randomized prospective study was conducted as per the schedule provided in Fig. 1 (study design). The trial began with screening and baseline testing of all the subjects, followed by stool sample collection for gut microbiome testing. Participants were block-randomized, 15 to the test arm (receiving the BugSpeaks® gut microbiome-based personalized diet) and 15 to the control arm (receiving a regular diet); the random sequence for treatment allocation was generated using an online randomization tool (https://ctrandomization.cancer.gov/). The gut microbiome was profiled for the 15 participants of the test arm using whole genome shotgun metagenomics, and personalized diet regimes were generated based on the individual microbiota profiles. Personalized diet plans based on the individual's microbiota profile were generated using algorithms and matrices that took into consideration the abundance of various microbes and the effect of various food items in modulating their levels.
During the intervention period (day 1 to 90), all the participants were instructed to follow either the BugSpeaks® gut microbiota-based personalized nutritional meal plan (test arm) or a regular diet regime (diabetic meal plan, control arm) under the supervision of a dietician and the principal investigator. Under the diabetic meal plan, the participants were provided tailored meal plans focused on food items recommended for patients with type 2 diabetes. The daily diet and the occurrence of adverse events were recorded throughout the trial period. Site visits were planned on days 1, 30, 60 and 90 of the study period. HbA1c, CRP, IL-10, triglycerides, LDL and HDL, and anthropometric parameters such as systolic and diastolic blood pressure, BMI, etc., were evaluated for all the participants on Day 1 and at the end of the study on Day 90. One participant in the control arm was lost to follow-up. Faecal samples of participants in the test arm were collected for microbiome sequencing and analysis, and for providing nutritional recommendations based on the microbiota profile of each participant.
Plasma parameters
Plasma concentrations of total cholesterol, LDL, HDL, triglycerides and non-HDL cholesterol were estimated using photometric methods. HbA1c was measured by HPLC, CRP was measured using immunoturbidimetry, and IL-10 was determined by ELISA. All plasma parameters were measured at Samadhan Pathology and Diagnostics, Bhopal.
Statistical analysis
A preliminary Shapiro-Wilk test for normality was conducted at a significance level of 0.05, using the NumPy/Matplotlib (v1.5.0) stack within Python (v3.11.8). The "shapiro" function in "scipy.stats" was used to calculate the p-value of the Shapiro-Wilk test.
The Shapiro-Wilk test statistic was estimated along with its p-value. Since the p-values were >0.05, the data points were deemed approximately normally distributed. The difference between the test and control groups was then statistically analysed using Student's t test. All data are represented as mean ± standard deviation (SD). A p-value < 0.05 denotes statistical significance unless specified otherwise. All endpoints were analysed separately; the gut metagenomic analysis, however, was performed with a different suite of tools, as described below.
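For illustration, a minimal Python sketch of the two statistical steps just described (Shapiro-Wilk normality check followed by a two-sample Student's t test with scipy.stats) is given below; the numerical values are made up and are not the study data.

```python
# Minimal sketch of the normality check and between-arm comparison described above.
import numpy as np
from scipy import stats

test_day90 = np.array([6.7, 6.5, 7.0, 6.2, 6.9])     # hypothetical HbA1c values, test arm
control_day90 = np.array([7.4, 7.1, 7.8, 6.9, 7.5])  # hypothetical HbA1c values, control arm

# Shapiro-Wilk test: p > 0.05 is taken as approximately normally distributed.
w_stat, p_norm = stats.shapiro(test_day90)

# Two-sample Student's t test between the arms; p < 0.05 denotes significance.
t_stat, p_value = stats.ttest_ind(test_day90, control_day90)
print(f"Shapiro p = {p_norm:.3f}; t = {t_stat:.2f}, p = {p_value:.4f}")
```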
Gut microbiota analysis
Faecal samples of subjects in the test arm were collected 7 days before the intervention (Day 1) and on the last day of the study period (Day 90). The gut microbiota of all subjects belonging to the test arm was processed, sequenced and analysed at Leucine Rich Bio Pvt Ltd., India, using a shotgun metagenome sequencing method, as detailed below.
Sample collection
Stool samples were collected from participants in the test arm using the Invitek Molecular Stool Collection Module (Cat. No. 1038111300; Invitek Molecular GmbH, Berlin). Each participant was given the stool collection kit with clear instructions about sample collection. The stool collection tube contained 8 ml of DNA stabilizing solution and an integrated spoon in the cap. All participants were instructed to collect ~2-3 spoons of stool into the 8 ml of stabilizing solution. Once collected, they were instructed to gently mix the sample with the stabilizing solution for 15 s, seal the tube, and ship it at room temperature to the processing unit for DNA extraction.
DNA extraction
DNA was extracted from stool samples using the QIAamp® Fast DNA Stool Mini Kit (Cat. No./ID 51604; Qiagen, Hilden, Germany) following the manufacturer's "Fast DNA Stool Mini Handbook" for fast purification of genomic DNA. Briefly, the extraction protocol consisted of two major steps: lysis of the stool samples with separation of impurities, followed by purification of the DNA. Lysis and separation of impurities were carried out using the InhibitEX Buffer (Cat. No./ID 19593; Qiagen, Hilden, Germany), during which cellular structures release their DNA content into the solution. The sample matrix was pelleted by centrifugation and the DNA in the supernatant was purified on QIAamp Mini spin columns, which involved removal of proteins, binding of DNA to the QIAamp silica membrane, washing away impurities, and eluting pure DNA from the spin column. Eluted DNA was collected in 1.5 ml DNA Lo-Bind microcentrifuge tubes, and the quantity and quality were assessed by the Qubit 2.0 DNA HS Assay (ThermoFisher, Waltham, Massachusetts, USA) and NanoDrop® (Roche, Basel, Switzerland) to meet the sequencing requirements.
Sequencing
Whole metagenome sequencing was performed on all samples using long-read sequencing technology. Briefly, the DNA library was prepared with the Ligation Sequencing Kit (SQK-LSK114; Oxford Nanopore Technologies (ONT), Oxford, UK), loaded onto an R10.4.1 MinION flow cell (FLO-MIN114) and sequenced on the ONT MinION Mk1C device (MIN-101C). Basecalling and demultiplexing of sequence reads were performed with Guppy v4.2.2, assisted by the MinKNOW GUI v20.10. Raw sequencing reads were stored in FastQ format for further computational analysis.
Upstream analysis
The upstream analysis involved quality check and quality improvement measures, including but not limited to host (human) sequence removal.This was followed by alignment of quality processed reads to a reference database of microbial genomes.The % normalized abundances, of all the microorganisms identified within these samples, were quantified, and later used for downstream analysis involving various statistical measures.
To elaborate, a thorough quality check of the raw sequencing data and some quality improvement measures were adopted to retain only quality reads for further processing. Primarily, the pre-processing operations included a quality check through NanoStat (De Coster et al., 2018) (v1.4.0) (https://github.com/wdecoster/nanostat) and removal of short and sub-par quality reads. The reads deemed suitable for further analyses were then mapped to the latest stable version of the human reference genome, GRCh38 (Nurk et al., 2022), using Bowtie2 (Langmead & Salzberg, 2012) (v2.5.2), in order to align and filter out host (human) sequences from the data.
Kraken 2, a taxonomic classification system that uses exact k-mer matches to achieve high accuracy and fast classification of sequences, was utilized for rapid, accurate and sensitive microbial classification and quantification of species within the samples (Wood, Lu & Langmead, 2019) (https://github.com/DerrickWood/kraken2/wiki/About-Kraken-2). A custom database, built on the comprehensive, integrated, non-redundant, well-annotated set of sequences from the Reference Sequence (RefSeq) collection (https://ftp.ncbi.nlm.nih.gov/genomes/refseq/), was used as the reference database. The results were the raw abundance profiles of prokaryotes (bacteria, archaea), eukaryotes (protozoa, metazoa, etc.) and viruses, stratified across all taxonomic levels. A sketch of these upstream steps is given below.
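The following Python sketch illustrates how the upstream steps named above (read statistics, host-read removal against GRCh38, and Kraken 2 classification) could be driven via subprocess calls. The file paths, index names and the exact combination of options are assumptions for illustration and would need to match the local installation and databases; this is not the authors' pipeline script.

```python
# Hedged sketch of the upstream workflow: QC stats, host removal, Kraken 2 classification.
import subprocess

reads = "sample.fastq.gz"        # quality-filtered nanopore reads (illustrative path)
human_index = "GRCh38_index"     # prebuilt Bowtie2 index of the human reference (assumed)
kraken_db = "refseq_custom_db"   # custom Kraken 2 database built from RefSeq (assumed)

# Basic read statistics with NanoStat.
subprocess.run(["NanoStat", "--fastq", reads], check=True)

# Map reads to GRCh38 with Bowtie2 and keep only the unaligned (non-human) reads.
subprocess.run(["bowtie2", "-x", human_index, "-U", reads,
                "--un-gz", "non_human.fastq.gz", "-S", "/dev/null"], check=True)

# Taxonomic classification and per-sample abundance report with Kraken 2.
subprocess.run(["kraken2", "--db", kraken_db, "--gzip-compressed",
                "--report", "kraken_report.txt", "--output", "kraken_output.txt",
                "non_human.fastq.gz"], check=True)
```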
Downstream metagenomic analysis
Data filtering and data normalization steps were performed to remove low-quality or uninformative features from the raw abundance data and to improve downstream statistical analysis. Briefly, features with exceedingly small counts (<5 reads) present in very few samples (<10% prevalence) were filtered out, followed by a low-variance filter using variances measured by the inter-quartile range (IQR). Normalization is an essential step in the analysis of microbial abundances in shotgun metagenomics. Data normalization addressed the variability in sampling depth and the sparsity of the data to enable more biologically meaningful comparisons. Trimmed mean of M-values (TMM) is one of the best performing normalization methods, showing a high True Positive Rate (TPR) and a low False Positive Rate (FPR) (Pereira et al., 2018). It is also known to be among the best at controlling the FDR. Hence, we performed TMM normalization on the data to ensure accurate biological interpretation of the metagenomic data; a sketch of these filtering and normalization steps is given below.
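A minimal Python sketch of the filtering and normalization logic is shown here, assuming a taxa-by-samples count table in pandas. The toy counts are invented, and the interface of conorm.tmm (the TMM implementation named later for the network analysis) is assumed to accept a features-by-samples table.

```python
# Sketch of low-count/low-prevalence filtering, IQR-based variance filtering, and TMM.
import pandas as pd
import conorm  # TMM normalization (pip install conorm); interface assumed

# Toy taxa-by-samples count table standing in for the Kraken 2 abundance profiles.
counts = pd.DataFrame(
    {"s1": [950, 2, 40, 0], "s2": [800, 1, 55, 3],
     "s3": [700, 0, 35, 1], "s4": [900, 4, 60, 0]},
    index=["Prevotella_copri", "rare_taxon", "Roseburia", "sparse_taxon"])

# Keep taxa with >=5 reads in >=10% of samples.
prevalent = (counts >= 5).mean(axis=1) >= 0.10
kept = counts.loc[prevalent]

# Drop the lowest-variance taxa, using the inter-quartile range as a robust spread measure.
iqr = kept.quantile(0.75, axis=1) - kept.quantile(0.25, axis=1)
filtered = kept.loc[iqr >= iqr.quantile(0.10)]

# TMM normalization for downstream statistics.
normalized = conorm.tmm(filtered)
print(normalized)
```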
The taxonomic composition of communities across samples and comparison groups was visualized for direct quantitative comparison of abundances. Percentage bar plots were created for the comparison groups of the test arm, Day 1 (before intervention) and Day 90 (after intervention), to view the composition at various taxonomic levels.
Alpha diversity was characterized using different measures. The Chao1 index was used as a richness-based measure, while the Shannon index was used to estimate the diversity of the community based on richness as well as evenness (the abundance of organisms). Further, the statistical significance of grouping based on the experimental factor was estimated. Furthermore, the 'similarity' or 'dissimilarity' between the two experimental factors was measured using beta diversity methods. Non-phylogenetic beta diversity analysis was performed employing the Bray-Curtis distance. Principal coordinate analysis (PCoA) was used to visualize the distance matrix created by the beta diversity analysis, and the statistical significance of the clustering pattern in PCoA plots was evaluated using permutational ANOVA (PERMANOVA). Both the alpha and beta diversity analyses were performed using the phyloseq package (McMurdie & Holmes, 2013, 2015) (https://github.com/joey711/phyloseq), and the results were plotted as box-and-whisker plots for alpha diversity and a PCoA plot for beta diversity, respectively.
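The study itself used the phyloseq R package for these calculations; purely for illustration, the standard formulas behind the named indices can be written out in a few lines of Python, as in the sketch below with made-up counts.

```python
# Illustrative implementations of the diversity measures named above.
import numpy as np

def shannon(counts):
    """Shannon index H = -sum(p * ln p), sensitive to both richness and evenness."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1(counts):
    """Bias-corrected Chao1 richness estimate from singletons (f1) and doubletons (f2)."""
    s_obs = int((counts > 0).sum())
    f1, f2 = int((counts == 1).sum()), int((counts == 2).sum())
    return s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return float(np.abs(x - y).sum() / (x + y).sum())

day1 = np.array([900, 50, 30, 1, 2, 0])    # hypothetical per-taxon counts, Day 1
day90 = np.array([500, 200, 150, 60, 40, 10])  # hypothetical per-taxon counts, Day 90
print(shannon(day1), shannon(day90))   # evenness increases after intervention in this toy
print(chao1(day1), chao1(day90))       # richness-oriented estimate
print(bray_curtis(day1, day90))        # between-sample dissimilarity
```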
Differential abundance (DA) analysis was also performed to identify and characterize significantly altered microbial abundances across the experimental factors. It has recently been highlighted that there is high variation in the output of DA tools across sequencing datasets, presenting issues with reproducibility among microbiome researchers. Hence, it is recommended that researchers use a consensus approach based on several DA tools to help ensure results are robust (Nearing et al., 2022). Considering this, we performed the differential abundance analysis with five different DA tools, viz., univariate analysis (t-test/ANOVA) (Luz Calle, 2019), metagenomeSeq (Paulson, 2016; Paulson, Talukder & Bravo, 2017; Paulson et al., 2013), edgeR (v3.12) (Robinson, McCarthy & Smyth, 2009), DESeq2 (Love, Huber & Anders, 2014), and LEfSe (Linear discriminant analysis Effect Size) (Segata et al., 2011). While each of these DA tools differs in its approach to data normalization and in the algorithms used to evaluate variance or dispersion, features were deemed significant based on their adjusted p-value (default adj. p-value cutoff = 0.05). Once the DA analysis was performed using the individual tools, we identified those microbial species that were called significantly (p < 0.01) differentially abundant in "consensus" by three or more DA tools, ensuring the robustness of the DA characterization; a sketch of this consensus rule is given below.
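The consensus rule itself is simple to express: collect each tool's adjusted p-values per taxon and keep taxa flagged by at least three of the five tools. The sketch below uses hypothetical p-values purely to show the bookkeeping; in practice the values come from the individual tools' result tables.

```python
# Consensus across DA tools: keep taxa called significant by >=3 of 5 tools.
import pandas as pd

adj_p = pd.DataFrame(
    {"t_test": [0.004, 0.20], "metagenomeSeq": [0.001, 0.03], "edgeR": [0.008, 0.04],
     "DESeq2": [0.002, 0.30], "LEfSe": [0.02, 0.001]},
    index=["Phascolarctobacterium_succinatutens", "some_other_taxon"])  # hypothetical values

n_tools_significant = (adj_p < 0.01).sum(axis=1)
consensus_hits = n_tools_significant[n_tools_significant >= 3].index.tolist()
print(consensus_hits)  # ['Phascolarctobacterium_succinatutens']
```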
In order to gain insights into the probable roles of taxa in terms of correlation, and to deduce the importance of their participation in biological interactions, we also performed a network analysis. To illustrate the differential correlations of the gut metagenome profiles before and after the BugSpeaks® personalized diet, the analysis was conducted for selected taxa obtained after data pre-processing, and only significantly correlated taxa were reported. Briefly, abundance profiles across all samples were imported and loaded using Pandas (v2.1.2, https://pandas.pydata.org/), and pre-processing of the data was performed, which included removal of genera and species with low variance and low raw abundance counts. TMM transformation of the data was performed using Conorm (v1.2.0), and other pre-processing steps, including the creation of train/test splits of the data, were performed using Sklearn (v1.3.0). Spearman correlation between all species/genera and the intervention arm was computed using Scipy (v1.11.1), followed by a filtration step removing all correlations with a Spearman coefficient <0.5 or a p-value >0.05. Network diagrams of species interactions of the pre- and post-intervention groups were generated using Networkx (v3.1), as sketched below.
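A minimal Python sketch of the correlation-network step is given below, using the same libraries the study names (Scipy for Spearman correlation, Networkx for the graph). The abundance table and the taxa shown are illustrative only.

```python
# Build a Spearman correlation network between taxa, keeping |rho| > 0.5 and p < 0.05.
import itertools
import pandas as pd
import networkx as nx
from scipy.stats import spearmanr

abund = pd.DataFrame(  # hypothetical per-sample abundances of three taxa
    {"Phocaeicola_massiliensis": [5, 8, 12, 20, 25, 30],
     "Parabacteroides_distasonis": [2, 3, 6, 9, 11, 14],
     "Sutterella_sp": [30, 25, 18, 10, 6, 3]})

graph = nx.Graph()
for a, b in itertools.combinations(abund.columns, 2):
    rho, p = spearmanr(abund[a], abund[b])
    if abs(rho) > 0.5 and p < 0.05:
        graph.add_edge(a, b, weight=round(float(rho), 2))  # signed correlation as edge weight

print(graph.edges(data=True))
```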
Efficacy and safety variables
All endpoints were set to assess the impact of the gut microbiome-based dietary intervention. They included the estimation of changes in serum HbA1c, CRP, total cholesterol, LDL, HDL, triglycerides, non-HDL cholesterol and IL-10, and changes in the faecal gut microbiome. Additionally, assessment of adverse events, vital signs (pulse rate, systolic and diastolic blood pressure (seated), BMI, body temperature and respiratory rate) and physical examination was done to evaluate the safety of the intervention.
Data availability
The datasets generated from the next-generation sequencing in this study are available in the NCBI Sequence Read Archive (SRA) repository, BioProject ID: PRJNA1046298.
RESULTS AND DISCUSSION
The present clinical trial was conducted to evaluate the safety and impact of the BugSpeaks® microbiome-based personalized dietary regime in hyperglycaemic and hyperlipidaemic individuals, specifically to evaluate the impact of such a diet on the gut microbiota and other disease-related clinical parameters. Please note that portions of this text were previously published as part of a preprint (Kallapura et al., 2023). The current clinical trial was a randomized, open-label, prospective study, initiated with 30 Indian subjects with hyperglycaemia and/or hyperlipidaemia per the study design (Fig. 1, study flow chart). Demographic details of the test subjects at baseline are provided in Table 1 (subject characteristics at baseline). The effect of the intervention on HbA1c, total cholesterol, LDL, HDL, triglycerides, non-HDL cholesterol, CRP and IL-10 levels was determined on the 90th day of the study, after 3 months of dietary intervention. The changes in gut microbiome profiles were characterized for the test arm only, before and after the microbiome-based intervention. Further, the safety of the gut microbiome-based personalized diet was studied in terms of adverse events.
Significant reduction in HbA1c levels
A statistically significant decrease in HbA1c level was observed in the test arm with the personalized microbiome-based diet, from 8.30 (95% CI [7.74-8.85]) to 6.67 (95% CI [6.2-7.05]), p < 0.001, while only a small, non-significant numerical decrease in HbA1c level was observed in the control arm with the regular diet, from 8.24 (95% CI [7.7-8.6]) to 7.32 (95% CI [6.22-8.4]), p = 0.15, after 90 days of dietary intervention (Fig. 2A). All (100%) of the participants in the test arm, who followed the microbiota-based personalized diet, showed a decrease in HbA1c levels, with a mean reduction of 1.62% in absolute HbA1c (Fig. 2B), while only 78.5% of participants in the control arm showed a decrease in HbA1c levels, with a mean reduction of only 0.91% in absolute HbA1c (Fig. 2C). This corresponds to a 19.6% drop in mean HbA1c levels in the test arm with the microbiome-based dietary intervention, compared with only an 11.1% drop in the control arm (Fig. 2D). This strongly indicates that a significant reduction in HbA1c levels is achievable with personalization of diet based on one's gut microbiome. The reduction in HbA1c was much more profound than that obtained with the diabetes-specific diet in this trial (test arm vs control arm). This reduction in HbA1c levels was also correlated with changes in the gut microbiota, with shifts in the composition, abundance and diversity of several species within the gut (detailed below).
Significant decrease in systolic pressure
Elevated blood pressure is a major cardiovascular and metabolic disease risk factor (Vallescolomer et al., 2023). Gut microbiota dysbiosis has been reported in patients with high blood pressure (Dan et al., 2019), and gut microbiota modulation has been shown to impact blood pressure (Yan et al., 2022). Hence, we wanted to investigate whether the microbiota-based nutritional intervention would impact the blood pressure parameters of the participants in the test arm. Mean systolic blood pressure was slightly reduced in the participants of the test arm, from 139 mm Hg (95% CI [127.8-150.1]) to 132 mm Hg (95% CI [126.4-137.5]) post intervention, whereas it was slightly higher in the participants of the control arm who followed the regular nutrition, from 126 mm Hg (95% CI [122.5-129.4]) to 135 mm Hg (95% CI [127.9-142.1]). This change, however, was not statistically significant (Fig. 3A). Interestingly, a statistically significant decrease in systolic pressure, from 153 mm Hg (95% CI [138.7-167.2]) to 131 mm Hg (95% CI [125.8-137.7]), p < 0.01, was found in a subset of the participants (eight out of 15) in the test arm at the end of the study period (Day 90), whose basal systolic pressure was >130 mm Hg prior to the microbiome-based personalized dietary intervention. Similarly, a 4.5% decline in diastolic pressure was also found in this subset of participants from the test arm, although this decrease was not statistically significant (Fig. 3B). It has been reported that increases in Lactobacillus and Bifidobacteria are associated with lower blood pressure (Yan et al., 2022). Interestingly, gut microbiota analysis of these participants showed increased abundance of the phyla Firmicutes (of which Lactobacillus is a member) and Actinobacteria (of which Bifidobacteria are members) (Fig. 4A). More specifically, Levilactobacillus brevis and Bifidobacterium angulatum were found to be higher post intervention in the test arm participants. At the genus level, we observed an increased abundance of Roseburia and Bacteroides, and a decreased abundance of Prevotella and Phocaeicola, post intervention with the personalized diet (Fig. 4B). Our analysis also showed a decreased abundance of Alistipes finegoldii in the participants post intervention. Strikingly, a high abundance of Alistipes finegoldii has been reported in the intestine of patients with high blood pressure (Kim et al., 2018), so a reduction in Alistipes finegoldii abundance might also be a contributing factor in the improvement of the blood pressure parameters in this group.

Lower serum CRP levels

Chronic inflammation has been found to be associated with type 2 diabetes, hyperlipidaemia, etc. (Han & Lin, 2014; Guo et al., 2023). A high CRP level is an indicator of inflammation and underlying disease conditions such as cardiovascular diseases and type 2 diabetes (Bafei et al., 2023; Kuppa et al., 2023; Guo et al., 2023). We evaluated the change in CRP level in a subset of participants whose basal serum CRP level was ≥2 mg/L prior to intervention (nine out of 15 participants). We found a 20% decrease in the CRP level post intervention, although the decrease was not statistically significant (Fig. 5). Interestingly, we found decreased Prevotella and increased Roseburia, along with an increased Levilactobacillus brevis, in these participants post intervention (Fig. 4B).
This might be one of the reasons for the reduced inflammation, as increased Prevotella has been found to have a pro-inflammatory effect (Larsen, 2017). Similarly, increased Levilactobacillus brevis and Roseburia have been associated with reduced inflammation (Nie et al., 2021; Riccia et al., 2007; Fernández-Veledo & Vendrell, 2019; Gupta et al., 2020; Wei et al., 2023). The lowered CRP levels after dietary intervention further show the positive impact of microbiome-based personalized nutrition in chronic disease conditions such as type 2 diabetes. An increase in the subject population, along with further customization of the microbiome-based diet, might help achieve statistically significant changes in CRP levels.
Change in other endpoints
No statistically significant decrease in the levels of total cholesterol, LDL, HDL, triglycerides, non-HDL cholesterol and IL-10 was observed in the test arm with personalized microbiome-based diet, as compared to the control arm with regular diet, after 90 days of intervention.
Significant changes in gut microbiome diversity and species abundances
Change in gut microbiome profiles after 90 days of intervention with microbiome-based personalized dietary regime was characterized only for the test arm and was visualized for direct quantitative comparison of abundances, followed by alpha and beta diversity measures, and lastly differentially abundant species and network and correlation analysis across the comparing groups.
Since we performed whole metagenomic sequencing, we were able to profile all the microbial taxa within the samples, including fungi and viruses. We did not observe any significant difference in overall composition; however, we did observe some shifts in the abundance and diversity of a few groups. Largely, the abundances of Bacteria, Archaea and Viruses were slightly decreased by Day 90 of the microbiome-based intervention, by 0.07% (from 99.44% to 99.37%), 0.04% (from 0.12% to 0.08%) and 0.03% (from 0.13% to 0.10%), respectively. Conversely, the abundances of Fungi and other Eukaryota increased slightly by Day 90 of the microbiome-based intervention, by 0.08% (from 0.20% to 0.28%) and 0.06% (from 0.11% to 0.17%), respectively. In the context of diversity, the Shannon diversity index indicated an increase in diversity in Bacteria, and small decreases in diversity in the kingdoms Archaea, Fungi, Eukaryota and Viruses (Fig. 6). Other patterns emerged at the phylum level, with a significant decrease in the abundance of Bacteroidetes (from 76.10% to 66.19%), a 9.91% shift, post microbiome-based intervention. This reduction in the abundance of Bacteroidetes was largely attributed to the net shift in the abundance of the genus Prevotella, with decreases in the abundance of Prevotella copri (↓ by 9.79%), Phocaeicola plebeius (↓ by 6.54%) and Prevotella hominis (↓ by 1.56%), and increases in the abundance of Pseudonocardia cytotoxica (↑ by 1.44%), Prevotella stercorea (↑ by 1.51%) and Bacteroides sp. CBA7301 (↑ by 4.45%). There was a substantial increase in the abundances of Firmicutes and Actinobacteria in the test arm, with a 5.57% increase (from 16.09% to 21.66%) and a 2.84% increase (from 0.98% to 3.82%), respectively. Many butyrate-producing bacteria and probiotics belong to these phyla, and hence it is possible that a positive shift in these phyla may have contributed to the improvement in hyperglycaemic and inflammation parameters in this study.
Alpha diversity measures further confirmed these abundance shifts, with a significant increase in the Shannon diversity measure, from 2.43 to 3.11 (p = 0.029), post microbiome-based personalized dietary intervention (Fig. 7A). On the other hand, Chao1 indicated a minor decrease in diversity, from 1,154 to 1,126 species (Fig. 7B). Together, these estimates indicated that there was an overall decrease in species richness but a significant increase in species evenness. In other words, the microbiome-based dietary intervention impacted the gut microbiota by reducing the number of species by a small degree, while modulating the abundances of the other species, overall displaying greater evenness in the context of diversity. A similar observation was made by Gupta et al. (2020), whose study showed that subjects in the healthy cohort had higher Shannon diversity whereas species richness was higher in the subjects of the "unhealthy cohort". It can therefore be speculated that the lower species richness and higher Shannon diversity in the intervention arm are possibly better in the context of the health of the subjects. The beta diversity measure by Bray-Curtis distance did not establish any significant difference between the two comparison groups. However, it displayed clustering between the groups, represented by the ellipses in Fig. 7C.
An increase in the subject population, along with fine-tuning of the personalization of the microbiome-based diet, might aid in achieving statistically significant changes in the diversity measures.
We observed changes in the microbiome profile at higher taxonomic levels (phyla); specifically, the Firmicutes and Actinobacteria phyla showed an increase whereas the Bacteroidetes phylum showed a decrease in the participants post BugSpeaks® intervention (Fig. 4). We also observed several specific changes at the species level that were statistically and potentially functionally significant, as highlighted below. Based on the consensus approach employed with five different differential abundance (DA) tools, we could establish as many as 15 species as significantly differentially abundant (p < 0.05) between the two comparison groups of the study, i.e., before and after the microbiome-based personalized dietary intervention. The comparative abundance plots of some of these species are displayed in Figs. 8A-8H. Some of the more interesting correlations with the above-highlighted reductions in HbA1c and CRP levels were observed at the species level. To begin with, maintenance of optimal levels of succinate is key during glucolipid metabolism, where succinate regulates glucose homeostasis to ameliorate hyperglycaemia (Wei et al., 2023).
Phascolarctobacterium succinatutens, belonging to the Negativicutes class of Firmicutes, was found to be significantly more abundant within the test arm post microbiota-based nutritional intervention. P. succinatutens is an asaccharolytic species (it does not ferment sugars) that has previously been isolated and identified from the healthy human gut and is known to play a key role in governing intestinal homeostasis and energy metabolism (Watanabe, Nagai & Morotomi, 2012; Ikeyama et al., 2020). The most important characteristic of P. succinatutens is that it is a succinate-utilizing bacterium that exclusively uses succinate produced by other bacteria (such as Bacteroides species) as the substrate for propionate production (Watanabe, Nagai & Morotomi, 2012; Sawaswong et al., 2023; Fernández-Veledo & Vendrell, 2019; Muhammad et al., 2023). Further, a Mediterranean (plant-rich) diet has been found to increase the ratio of succinate-consuming bacteria (like P. succinatutens, Odoribacteraceae and Clostridiaceae) to succinate-producing bacteria (like Prevotella copri, and other species of Prevotellaceae and Veillonellaceae) (Wei et al., 2023; Sawaswong et al., 2023). This pattern was also observed in the current study, where the succinate-producing Prevotella copri was reduced in abundance by 9.79%, while the succinate-consuming P. succinatutens was significantly increased in abundance (log2 fold change of 4.5) post implementation of the microbiome-based personalized diet. Further, as highlighted above, maintenance of optimal levels of succinate is key during glucolipid metabolism to regulate glucose homeostasis and ameliorate hyperglycaemia (Wei et al., 2023). It would be worthwhile to conduct a larger study with simultaneous measurements of HbA1c, succinate and other serum parameters to confirm this observation. This may open up prospects for using specific succinate-consuming bacteria that are beneficial to host health, or for administering succinate-consuming probiotics and promoting their growth through high-fibre dietary intervention, which is expected to lead to the uptake of excess succinate and provide new avenues for treating related diseases (Muhammad et al., 2023; Sharma, Bhardwaj & Singh, 2016; Bernier et al., 2021; Chen et al., 2023).
The decreasing abundance of Prevotella and the increasing abundance of Phascolarctobacterium, along with Levilactobacillus brevis and Roseburia, could be the correlating factors for the observed reduction in inflammation. Furthermore, succinate has potential as a target for immune monitoring (Wei et al., 2023; Macias-Ceja et al., 2019), and recently, reducing the succinate concentration has shown promise in treating chronic inflammatory gut diseases and obesity-related inflammation, suggesting a new way to alleviate these diseases (Serena et al., 2018; Fremder et al., 2021). Hence, personalization of diet based on one's gut microbiome might have true potential in addressing various inflammatory diseases. A few more species of bacteria with potential probiotic properties were also found to be significantly increased in the test arm, which adopted the microbiome-based personalized diet. Bifidobacterium angulatum and Levilactobacillus brevis were estimated to be increased in the test arm by 3.485- and 2.213-fold (log2 fold change), respectively, after 90 days of the personalized diet regime. Bifidobacterium angulatum is part of the human gut microbiota and is a relatively less common species within the Bifidobacterium group of probiotics (Zakharevich et al., 2019). Administration of other Bifidobacterium probiotics, such as Bifidobacterium bifidum and Bifidobacterium breve, has been associated with amelioration of hyperglycaemia, dyslipidaemia and oxidative stress in various studies (Fu et al., 2022; Sharma, Bhardwaj & Singh, 2016; Bernier et al., 2021). The observations of this study also indicate the potential of Bifidobacterium angulatum in the amelioration of hyperglycaemia and reduction of HbA1c levels. On the other hand, Levilactobacillus brevis has been previously reported to alleviate the progression of type 2 diabetes in animal models, via an interplay of gut microflora, bile acids and NOTCH1 signalling (Chen et al., 2023). Levilactobacillus brevis also possesses inhibitory effects on α-amylase and α-glucosidase activities and has been reported to have anti-diabetic properties (Martiz et al., 2023). Hence, the significantly reduced levels of HbA1c in this study might be directly correlated with the increased abundance of Levilactobacillus brevis.
Network diagrams of species interactions pre- and post-intervention, with all statistically significant associations (Spearman coefficient >0.5 and p-value < 0.05), are shown in Figs. 9A and 9B. Within these network analysis comparisons, a few key negative correlations were observed between species, especially between Sutterella sp. KLE1602 and Phocaeicola massiliensis (Spearman correlation coefficient −0.66) (Fig. 10C) in the pre-intervention group of the test arm. Interestingly, Phocaeicola massiliensis was positively correlated with Parabacteroides distasonis in both the pre- and post-intervention participants (Figs. 9A and 9B). A recent report suggests that a higher abundance of Parabacteroides distasonis may alleviate metabolic syndrome through the production of succinate (Wang et al., 2019). As we find that Sutterella sp. KLE1602 is reduced in the post-intervention group, it is tempting to speculate that, in the pre-intervention group, the higher abundance of Sutterella sp. KLE1602 may have indirectly reduced Parabacteroides distasonis levels, thereby contributing to some of the metabolic syndrome effects. Clustering based on Spearman correlation shows an interesting pattern wherein some clusters of microbes are positively correlated among each other in the post-intervention group as compared with the pre-intervention group (Figs. 11A and 11B). This is an interesting pattern, and we do not know the reason or its possible implication. We can speculate that it is due to the microbiota modulation through the individualized diet; it would be interesting to find out whether this shift in the correlation pattern influences the clinical outcomes of the participants of the test arm in subsequent trials.
No statistically significant changes were observed in parameters such as total cholesterol, HDL and LDL levels in this study. This might be because the duration of the study was too short to see the impact of microbiota-based personalized nutrition on such parameters. Overall, no adverse effect was reported by the participants following the gut microbiota-based nutritional meal plan, thereby showing the safety of such an intervention. As we provided gut microbiota-based nutritional recommendations to the participants of the test arm, we expected a modulation of their gut microbiota, and hence their microbiota were profiled post intervention. We did not profile the gut microbiota of the control arm after the routine nutritional intervention; however, we do not rule out subtle microbiota changes in the participants of that cohort as well.
Our study also found specific changes in the gut microbiota post intervention that may have contributed to the positive effects. Specifically, we found increased Shannon diversity post intervention and the possibility of better utilization of succinate in that group. Increased abundance of bacteria such as Bifidobacterium angulatum and Levilactobacillus brevis may also have contributed to improving the hyperglycaemic, hypertensive and inflammation parameters (Fig. 12). Many bacteria belonging to the genera Bifidobacterium and Levilactobacillus are known probiotics. To our knowledge, this trial is the first study of its kind in Indian patients, emphasising the positive impact of gut microbiota modulation in disease irrespective of ethnicity.
In totality, personalized nutrition based on one's gut microbiome aims to preserve or increase the overall gut health using relevant information about the individual's gut microbiome, by delivering personalized nutritional recommendations (Vandeputte, 2020).Such personalization of nutritional advice will be far more effective than more generic approaches and future of personalized nutrition strategies would rely significantly on the gut microbiome to manage disease conditions and overall health (Vandeputte, 2020;Lee, Davies & Barnett, 2023;Bianchetti et al., 2023;Aarnoutse et al., 2017).While the potential of such an approach is promising, it's important to note that our understanding of the gut microbiome and its complex interactions with our bodies and our diets is still evolving.More research is needed to fully understand the potential benefits and challenges of a microbiome-based personalized diet.Personalized nutrition based on one's gut microbiome offers a promising approach to rectify dysbiosis and improve health outcomes (Song & Shin, 2022;Hernández-Calderón, Wiedemann & Benítez-Páez, 2022;Vandeputte, 2020).
Figure 1
Figure 1 Study design.A flow chart depicting the study design, with two arms of the study, list of clinical parameters evaluated as primary end points and the microbiome profiling for the intervention arm (left).Full-size DOI: 10.7717/peerj.17583/fig-1
Figure 2
Figure 2 Change in HbA1c levels.(A) Overall change in HbA1c levels across the arms, where *** p < 0.001.(B and C) Change in HbA1c levels in each individual, within the BugSpeaks personalized nutrition arm and the regular nutrition arm, respectively.(D) Overall % drop between the two arms.Full-size DOI: 10.7717/peerj.17583/fig-2
Figure 3
Figure 4
Figure 3 Change in blood pressure parameters.(A) Overall change in systolic and diastolic pressures across the comparing arms of regular nutrition and BugSpeaks nutrition.(B) Change in systolic and diastolic pressures within a sub-cohort of patients in BugSpeaks nutrition showing a significant reduction in systole within the arm, with ** p < 0.01.Full-size DOI: 10.7717/peerj.17583/fig-3
Figure 9
Figure 9 Network analysis.(A) Network of associations pre-intervention, along with (B) network of associations post-intervention with BugSpeaks personalized nutrition.The features of the network
Figure 12
Figure 12 Trial summary. The trial showed that BugSpeaks® personalized nutrition led to improvements in HbA1c, CRP and blood pressure parameters in type 2 diabetic patients. This improvement may be attributed to an increase in beneficial microbes such as Bifidobacterium angulatum and Levilactobacillus brevis. Possible mechanisms may also include a better balance between succinate producers and consumers in the host, leading to an appropriate concentration of succinate in the system. All icons and graphics were created using a Canva Pro account (www.canva.com). Full-size DOI: 10.7717/peerj.17583/fig-12
Table 1 Subject characteristics at baseline.
Note: Values represented as mean ± standard deviation.
Serum carnitine as an independent biomarker of malnutrition in patients with impaired oral intake
Carnitine is a vitamin-like compound that plays important roles in fatty acid β-oxidation and the control of the mitochondrial coenzyme A/acetyl-CoA ratio. However, carnitine is not added to ordinary enteral nutrition or total parenteral nutrition. In this study, we determined the serum carnitine concentrations in subjects receiving ordinary enteral nutrition (EN) or total parenteral nutrition (TPN) and in patients with inflammatory bowel diseases to compare its levels with those of other nutritional markers. Serum samples obtained from 11 EN and 11 TPN patients and 82 healthy controls were examined. In addition, 10 Crohn’s disease and 10 ulcerative colitis patients with malnutrition who were barely able to ingest an ordinary diet were also evaluated. Carnitine and its derivatives were quantified using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The carnitine concentrations in EN and TPN subjects were significantly lower compared with those of the control subjects. Neither the serum albumin nor the total cholesterol level was correlated with the carnitine concentration, although a significant positive correlation was found between the serum albumin and total cholesterol levels. Indeed, patients with CD and UC showed significantly reduced serum albumin and/or total cholesterol levels, but their carnitine concentrations remained normal. In conclusion, only a complete blockade of an ordinary diet, such as EN or TPN, caused a reduction in the serum carnitine concentration. Serum carnitine may be an independent biomarker of malnutrition, and its supplementation is needed in EN and TPN subjects even if their serum albumin and total cholesterol levels are normal.
Introduction
Carnitine was first discovered in muscle tissue in 1905, (1) but its physiological roles remained unknown for many years. In the 1950s, carnitine was recognized as a vitamin-like compound that plays important roles in fatty acid β-oxidation, via the transportation of long-chain fatty acids into the mitochondrial matrix, and in the control of the mitochondrial coenzyme A (CoA)/acetyl-CoA ratio. (2)(3)(4) Subsequently, it was recognized that carnitine is not a proper vitamin, owing to the identification of its biosynthetic pathway from two amino acids, lysine and methionine. (3) Although cardiac and skeletal muscles contain the highest concentrations of carnitine, the kidney, liver and brain are the primary organs for its biosynthesis in humans. (5) Carnitine homeostasis in the body is preserved by its biosynthesis, intestinal absorption from the diet and urinary excretion. Carnitine is found in a variety of foods. In particular, red meat, fish and dairy products are rich sources, whereas plants contain much lower amounts of carnitine. (6,7) Endogenous biosynthesis was examined in strict vegetarians and was estimated to be approximately 1.2 μmol of carnitine/kg of body weight/day, (8) whereas omnivorous humans generally ingest 2 to 12 μmol of carnitine/kg of body weight/day. (7) Thus, most of the body's carnitine is derived from the diet in omnivorous humans.
Total parenteral nutrition (TPN) and enteral nutrition (EN) are essential nutrition treatments in patients who cannot orally ingest. In Japan, carnitine is not added to TPN, and ordinary EN appears to contain insufficient amounts of carnitine because most of the nitrogen source of EN is derived from soy and milk proteins. The effects of long-term carnitine-free TPN have been investigated in children (9)(10)(11) and adults, (12) and TPN-dependent patients exhibit significantly decreased concentrations of blood carnitine compared with controls. However, the precise concentrations of carnitine in EN are still unknown, and the effects of long-term EN on the blood carnitine levels have not been studied.
The serum carnitine concentrations in various diseases have been previously reported, and patients with advanced liver cirrhosis, (6) myotonia congenita, Crohn's disease (CD), anorexia nervosa, or hemodialysis (13) show significantly reduced levels. However, these data were reported approximately 40 years ago, and the treatment of these diseases, particularly CD, has been markedly improved recently.
The aim of the present study was to determine the age-related, gender-related and diurnal variations of the serum concentrations of carnitine and its derivatives in healthy humans. Furthermore, the concentrations in subjects receiving EN or TPN and in patients with malnutrition due to inflammatory bowel diseases (IBDs), including CD and ulcerative colitis (UC), were measured and compared with the albumin and total cholesterol concentrations. These results showed that long-term EN treatments might cause carnitine deficiency, whereas malnutrition due to IBD did not. Thus, the serum carnitine concentration may represent a new independent biomarker of malnutrition for the nutrition support team (NST).
Subjects, Materials and Methods
Subjects and sample collection. In this study, we evaluated 22 patients who could not ingest orally due to Parkinson's disease or sequelae of cerebral infarction, cerebral hemorrhage or meningoencephalitis and who had been receiving TPN (n = 11) or EN (n = 11) for a long period (2-73 months). The baseline characteristics of the patients and sex- and age-matched controls are shown in Table 1. The patients received an average of 30 kcal/kg/day and 1.0 g of amino acids/kg/day (range 25-35 kcal/kg/day and 0.8-1.2 g of amino acids/kg/day) from TPN or an average of 30 kcal/kg/day and 1.2 g of protein/kg/day (range 25-40 kcal/kg/day and 0.9-1.5 g of protein/kg/day) from EN. Carnitine was not added to TPN or EN.
Twenty patients with IBDs (10 CD and 10 UC) who were diagnosed by clinical, endoscopic, histopathological and radiological examinations were also evaluated. Among the 10 patients with CD, three were ileum type, and seven were ileum + colon type. One of the patients with CD had a history of ileal resection, whereas the other nine patients had no previous surgeries. Among the 10 patients with UC, one was proctitis type, four were left-side colitis type, and five were pancolitis type. The CD Activity Index score of the CD patients was 171 ± 74, and the Mayo score of the UC patients was 4.9 ± 1.5. All of the CD and UC patients were treated with 5-aminosalicylates, two CD and one UC patient were treated with corticosteroids, five CD and five UC patients were treated with azathioprine, and five CD patients were treated with infliximab. These patients were barely able to eat an ordinary diet and exhibited malnutrition. The baseline characteristics of the IBD patients and sex-and age-matched controls are shown in Table 2.
Blood samples were obtained from the TPN patients in the morning and from the EN and IBD patients in the morning before breakfast after an overnight fast, and the sera were stored at −20°C until further analyses. Control sera obtained after fasting from healthy volunteers without obesity, hyperlipidemia, diabetes or liver dysfunction were collected from another study group (courtesy of Professor T. Teramoto, Teikyo University). Informed consent was obtained from all of the subjects, and the experimental protocol was approved by the Ethics Committee of Tokyo Medical University Ibaraki Medical Center.
Materials. Acetyl-L-carnitine HCl and palmitoyl-L-carnitine HCl were purchased from Sigma-Aldrich Chemical Co.

Determination of serum carnitine and acetylcarnitine concentrations. The carnitine and acetylcarnitine concentrations in serum and EN were quantified using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The method was adapted from a report by Ghoshal et al. (14) and was modified as follows. Five microliters of serum or EN was placed in a microcentrifuge tube (1.5 ml), and 25 ng of [2H3]carnitine and 12.5 ng of acetyl-[2H3]carnitine in 50 μl of acetonitrile-water (19:1, v/v) containing 0.1% formic acid were added as internal standards. The sample tube was vortexed for 1 min and centrifuged at 2,000 × g for 1 min, and the liquid phase was collected and evaporated to dryness at 80°C under a nitrogen stream. The residue was redissolved in 65 μl of 0.1% aqueous formic acid solution, and an aliquot (1 μl) was analyzed using LC-MS/MS. The LC-electrospray ionization (ESI)-MS/MS system consisted of a TSQ Vantage triple stage quadrupole mass spectrometer (Thermo Fisher Scientific, Waltham, MA) equipped with an HESI-II probe and a Prominence ultra fast liquid chromatography (UFLC) system (Shimadzu, Kyoto, Japan). Chromatographic separation was performed using a Hypersil GOLD aQ column (150 × 2.1 mm, 3 μm, Thermo Fisher Scientific) at 40°C. The mobile phase consisted of methanol-water (1:9, v/v) containing 0.1% formic acid and was used at a flow rate of 200 μl/min. The MS/MS conditions were as follows: spray voltage, 3,000 V; vaporizer temperature, 450°C; sheath gas (nitrogen) pressure, 50 psi; auxiliary gas (nitrogen) flow, 15 arbitrary units; ion transfer capillary temperature, 220°C; collision gas (argon) pressure, 1.

Determination of serum palmitoylcarnitine concentration. The serum palmitoylcarnitine concentration was also measured using the LC-MS/MS method described above, with the exceptions that 2 ng of palmitoyl-[2H3]carnitine was used as the internal standard and the liquid phase obtained following centrifugation was directly injected into the LC-MS/MS system. Different LC mobile phases and flow rates were used, and the SRM transitions for palmitoylcarnitine and its deuterated variant were m/z 400 → m/z 85 and m/z 403 → m/z 85, respectively. The mobile phase initially consisted of methanol-water (1:9, v/v) containing 0.1% formic acid and was used at a flow rate of 300 μl/min for 1.5 min, and the system was then programmed in a linear manner to reach 0.1% formic acid in methanol over a period of 4.5 min. The final mobile phase was maintained constant for an additional 4 min.
Statistics. The data are expressed as the means ± SD. The statistical significance of the differences between the results in the different groups was evaluated using Student's two-tailed t test. The correlation was tested by calculating Pearson's correlation coefficient, r. For all of the analyses, significance was determined at the level of p<0.05.
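The paper does not name its statistics software; purely as an illustration of the tests described above (two-tailed Student's t test between groups and Pearson's correlation between markers), a minimal Python sketch using scipy is given below, with made-up values in place of the study data.

```python
# Illustrative sketch of the group comparison and correlation analysis described above.
import numpy as np
from scipy import stats

carnitine_en = np.array([25.0, 28.1, 22.4, 30.2, 26.7])    # hypothetical patient values (umol/L)
carnitine_ctrl = np.array([48.3, 52.1, 45.9, 50.4, 47.2])  # hypothetical control values (umol/L)

# Two-tailed Student's t test between groups (ttest_ind is two-tailed by default).
t_stat, p_value = stats.ttest_ind(carnitine_en, carnitine_ctrl)

# Pearson correlation between serum carnitine and another nutritional marker.
albumin = np.array([3.1, 3.4, 2.9, 3.8, 3.3])              # hypothetical serum albumin (g/dL)
r, p_corr = stats.pearsonr(carnitine_en, albumin)

print(f"t = {t_stat:.2f} (p = {p_value:.4f}); Pearson r = {r:.2f} (p = {p_corr:.3f})")
```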
Results
Carnitine and acetylcarnitine concentrations in EN. The concentrations of carnitine and acetylcarnitine in ordinary ENs are shown in Table 3. The first three brands were administered to our patients. All of the ENs contained very low concentrations of carnitine, which was estimated to be less than 0.3 μmol/kg of body weight/day, whereas the endogenous biosynthesis of this compound is estimated to be 1.2 μmol/kg of body weight/day. (8) The acetylcarnitine concentration was markedly lower than the carnitine concentration in all ENs and is not thought to contribute to the amount of carnitine intake even if it is hydrolyzed into carnitine in the intestine.
Circadian rhythm of serum carnitine and its derivatives in a healthy control subject. The circadian rhythm of the serum concentrations of carnitine, acetylcarnitine and palmitoylcarnitine in a healthy male is shown in Fig. 1. Pre-prandial increases and postprandial decreases were observed in the acetylcarnitine and palmitoylcarnitine concentrations, which suggests that the diurnal variation of serum acetylcarnitine and palmitoylcarnitine concentrations is controlled mainly by food intake. In contrast, the serum carnitine concentrations were relatively stable and not affected by food intake.
Serum carnitine and acetylcarnitine concentrations in healthy control subjects. The relationships between age and the serum concentration of carnitine or acetylcarnitine in healthy subjects are shown in Fig. 2. In both males and females, no age-related changes in the serum carnitine concentrations were observed, whereas a significant age-related increase in the serum acetylcarnitine concentrations was observed in both genders.

Serum carnitine and acetylcarnitine concentrations in long-term EN or TPN patients. Serum carnitine and acetylcarnitine concentrations were compared among EN and TPN patients and age-matched healthy controls (Fig. 3a). Both the carnitine and acetylcarnitine levels in the EN and TPN patients were significantly lower compared with those of the controls. There were no significant differences in the carnitine or acetylcarnitine levels between the EN and TPN patients. The relationships between the duration of no oral intake (EN and TPN) and the serum concentrations of carnitine, albumin and total cholesterol were also studied. In EN and TPN patients, both the carnitine and albumin levels were significantly reduced compared with those of the controls (Table 1). The total cholesterol levels in TPN patients were also reduced, but the levels in EN patients were not significantly different from those of the controls. As shown in Fig. 4a, there was no significant change in the serum carnitine, albumin or total cholesterol levels based on the duration of no oral intake, but the carnitine concentrations tended to decrease over time (p = 0.148).
Serum carnitine and other markers of nutrition. The relationships between the serum carnitine levels and other markers of nutrition, i.e., serum albumin and total cholesterol concentrations, are shown in Fig. 4b. Neither the serum albumin nor the total cholesterol level was correlated with the carnitine concentration, although a significant positive correlation was found between the serum albumin and total cholesterol levels.
Serum carnitine and acetylcarnitine concentrations in IBD patients. The concentrations of carnitine and acetylcarnitine in CD and UC patients and age-matched controls are depicted in Fig. 3b. As shown in Table 2, the serum albumin concentrations were significantly lower in both CD and UC patients, and the serum total cholesterol concentration was significantly lower in CD patients. Thus, these IBD patients' nutritional states were not good because these individuals were barely able to eat a normal diet. Nevertheless, the serum concentrations of carnitine and acetylcarnitine were not significantly different between CD or UC and the corresponding control subjects.
Discussion
In previous investigations, carnitine derivatives were usually measured enzymatically as acylcarnitine. In the present study, we used the LC-MS/MS method and quantified the levels of acetylcarnitine and palmitoylcarnitine, which are the two major acylcarnitines after and before fatty acid β-oxidation in the mitochondria, respectively. Both the acetyl-and palmitoylcarnitine concentrations changed in parallel, and the levels increased at pre-prandial times, which suggests that the serum acylcarnitine concentration may reflect β-oxidation activity rather than the nutritional state. In contrast, carnitine (free carnitine) is much more abundant than acylcarnitine, and its levels were not affected by food intake. Thus, carnitine is proposed to be a better marker than acylcarnitine for the estimation of carnitine deficiency.
Little is known regarding the effects of aging and gender on the blood and tissue carnitine or acetylcarnitine concentrations. An age-dependent decrease in carnitine and acetylcarnitine concentrations in mice and human muscles was shown by Costell et al. (15) These researchers also showed that the blood carnitine concentrations in humans remained unchanged with age in males, whereas an age-dependent increase was observed in females. These results suggest that the blood carnitine levels are maintained despite the slight decrease in the tissue carnitine concentrations. Conversely, a decreased concentration of serum carnitine may represent a marked reduction of carnitine in the tissues. In our study, although the serum carnitine concentrations in females tended to increase with age (Fig. 2b), no significant age-related changes were observed in either gender, which is similar to the results of the previous report. Furthermore, significant age-dependent increases in the serum acetylcarnitine concentrations were observed in both genders. Although the mechanism underlying the latter observation remains unclear, this finding also supports the contention that carnitine is a better nutritional marker than acetylcarnitine.
Table 2. The data are expressed as the means ± SD. ns, not significant.

Although reduced blood carnitine concentrations due to long-term TPN have been observed in children, (9)(10)(11) the blood
concentrations were followed up for only two months in adult TPN patients. (12) In addition, the blood concentrations in long-term EN patients have not been studied. Our data demonstrated that the serum carnitine levels are significantly decreased to approximately half in patients who could not intake food orally and received TPN or EN for a long term (Fig. 3a). In addition, the carnitine levels tended to decrease in proportion to the duration of no oral intake, although this change was not statistically significant (Fig. 4a). These results suggest that sufficient amounts of carnitine were not supplied to our patients by EN and TPN. This finding was supported by additional data that only trace amounts of carnitine are contained in the ENs that were administered to our patients (Table 3). Recently, injectable carnitine and EN containing sufficient amounts of carnitine became available in Japan. Although carnitine is endogenously biosynthesized to some extent, its supplementation is needed for patients unable to orally intake food.
The serum carnitine concentrations in UC patients have been reported as normal, (16) and our results supported this previous observation. In addition, decreased serum carnitine concentrations in CD patients were demonstrated approximately 40 years ago. (13) However, the treatment of CD is now markedly improved, and our CD patients who were barely able to eat an ordinary diet did not show significantly reduced serum carnitine concentrations. Thus, carnitine supplementation was not needed for a majority of the tested CD patients with the exception of those with severe malabsorption. A patient with short bowel syndrome showed severe malabsorption, and oral carnitine supplementation was not sufficient to restore the low serum carnitine levels. (17) These results suggest that the intravenous administration of carnitine appeared to be necessary to supply a sufficient amount of carnitine to patients with severe malabsorption.
It has been reported that encephalopathy, myopathy and cardiomyopathy are included in the complication of carnitine deficiency, which is known as carnitine deficiency syndrome. (18) It was difficult to evaluate carnitine deficiency syndrome in our carnitine-reduced cases because most of our patients had disuse syndrome caused by underlying diseases, such as Parkinson's disease, sequela of cerebral infarction, cerebral hemorrhage and meningoencephalitis. A previous report suggested that the degree of carnitine deficiency in muscle tissue is greater than that in plasma. (15) Thus, the disuse syndrome in our cases might have been affected by profoundly reduced carnitine levels in the muscle tissue. In addition, the serum carnitine concentrations in various diseases were previously reported, and patients with advanced liver cirrhosis showed a significant reduction presumably due to reduced biosynthesis. (6) Thus, we should pay specific attention to carnitine deficiency in patients with liver cirrhosis who are receiving EN or TPN.
The relationships of serum carnitine concentrations with other nutritional markers, such as the serum albumin and total cholesterol concentrations, have not been previously investigated. Our IBD patients showed normal serum carnitine concentrations but had significant malnutrition with low serum albumin and/or low total cholesterol levels. Our examination of the relationships between the serum carnitine levels and the albumin or total cholesterol concentrations in EN and TPN subjects did not reveal any significant correlation. These results suggest that the serum carnitine concentrations should be evaluated even if other nutritional markers, such as serum albumin or total cholesterol concentrations, are within the normal limits.
In summary, the serum carnitine concentrations in long-term EN and TPN subjects were significantly low, regardless of the serum albumin or total cholesterol levels. In contrast, the IBD patients showed malnutrition but had normal levels of carnitine because they were able to ingest some ordinary food. Thus, we must be cautious regarding carnitine deficiency in patients with complete artificial feeding even if their other nutritional markers are normal. The serum carnitine concentration could be an independent biomarker of malnutrition for NST.
Are Authorities Denying or Supporting? Detecting Stance of Authorities Towards Rumors in Twitter
Several studies examined the leverage of the stance in conversational threads or news articles as a signal for rumor verification. However, none of these studies leveraged the stance of trusted authorities. In this work, we define the task of detecting the stance of authorities towards rumors in Twitter, i.e., whether a tweet from an authority supports the rumor, denies it, or neither. We believe the task is useful to augment the sources of evidence exploited by existing rumor verification models. We construct and release the first Authority STance towards Rumors (AuSTR) dataset, where evidence is retrieved from authority timelines in Arabic Twitter. The collection comprises 811 (rumor tweet, authority tweet) pairs relevant to 292 unique rumors. Due to the relatively limited size of our dataset, we explore the adequacy of existing Arabic datasets of stance towards claims in training BERT-based models for our task, and the effect of augmenting AuSTR with those datasets. Our experiments show that, despite its limited size, a model trained solely on AuSTR with a class-balanced focal loss exhibits a comparable performance to the best studied combination of existing datasets augmented with AuSTR, achieving a performance of 0.84 macro-F1 and 0.78 F1 on debunking tweets. The results indicate that AuSTR can be sufficient for our task without the need for augmenting it with existing stance datasets. Finally, we conduct a thorough failure analysis to gain insights for future directions on the task.

∗∗ This article presents a major extension of a previous work published at ECIR 2023 [1]. Extensions include (1) expanding the dataset by doubling the number of examples, (2) proposing a new semi-automated approach for collecting the data, (3) studying the usefulness of two more Arabic stance datasets, (4) using in-domain data for training the models, (5) fine-tuning our BERT models over different hyper-parameters, and (6) investigating various loss functions to alleviate the class-imbalance issue.
Introduction
Social media platforms (e.g., Twitter) have become a medium for rapidly spreading rumors along with emerging events [2].Those rumors may have a lasting effect on users' opinion even after it is debunked, and may continue influence them if not replaced with convincing evidence [3].Existing studies for rumor verification in social media exploited the propagation networks as a source of evidence, where they focused on the stance of replies [4][5][6][7][8][9], structure of replies [10][11][12][13][14][15], and profile features of retweeters [16].Recently, Dougrez-Lewis et al. [17] proposed augmenting the propagation networks with evidence from the Web, and Hu et al. [18] proposed exploiting both text and images retrieved from the web as sources of evidence.A large body of existing studies in the broader literature have examined exploiting the stance of conversational threads [19,20] or news articles [21,22] towards claims as a signal for verification.
However, to our knowledge, no previous research has investigated exploiting evidence from the timelines of trusted authorities for rumor verification in social media.An authority is an entity with the real knowledge or power to verify or deny a specific rumor [1,23].Therefore, we believe that detecting stance of relevant authorities towards rumors can be a great asset to augment the sources of evidence utilized by existing rumor verification systems.It can also serve as a valuable tool for fact-checkers to automate their process of verifying rumors from authorities.
In this work, we address the problem of detecting the stance of authorities towards rumors in Twitter, defined as follows: Given a rumor expressed in a tweet and a tweet posted by an authority of that rumor, detect whether the tweet supports (agrees with) the rumor, denies (disagrees with) it, or not (other). Figure 1 presents our perception of the role of detecting the stance of authorities in a typical pipeline of rumor verification over Twitter. Given a rumor expressed in a tweet, both the reply thread and the corresponding authority Twitter accounts are retrieved. The reply structure, the reply stance, and the authority stance, in addition to other potential signals, will then be exploited by the rumor verification model to decide the veracity of the rumor. In our work, we assume that the authorities for a given rumor are already retrieved [23], and we only target the detection of the stance of those authorities towards the rumors. In particular, our model is supposed to do so over the tweet timelines of the corresponding retrieved authorities. While the stance of authorities is a very important source of evidence for rumor verification, it is worth mentioning that it can complement other sources, especially when authorities are retrieved automatically and thus not with full accuracy.
A closer look at the literature on Arabic rumor verification in Twitter in particular reveals that utilizing signals for verification is under-explored; most existing studies relied on the tweet textual content to detect its veracity [24][25][26][27][28][29].Some notable exceptions are the work done by Albalawi et al. [30] (who exploited the images and videos embedded in the tweet), the study done by Haouari et al. [14] (who used the reply thread structure and reply network signals), and the work done by Althabiti et al. [31] (who proposed detecting sarcasm and hate speech in the replies for Arabic rumor verification in Twitter).
To fill this literature gap, we first introduce the problem of detecting the stance of authorities towards rumors in Twitter. We then construct the first dataset for the task, and release it along with its construction guidelines to facilitate future research. Moreover, we investigate the usefulness of existing Arabic datasets of stance towards claims for our task. Finally, we explore the mitigation of the traditional class-imbalance issue in stance datasets by experimenting with various loss functions. Our experiments show that training a model with our dataset solely, despite it being relatively very small, exhibits a performance that is (at least) on par with training with other (combinations of) existing stance datasets, indicating that existing stance datasets are not really needed for the task. The contributions of this paper are as follows:
1. We introduce and define the task of detecting the stance of authorities towards rumors that are propagating in Twitter.
2. We release the first Authority STance towards Rumors (AuSTR) dataset for that specific task, targeting the Arabic language.
3. We explore the adequacy of existing Arabic datasets of stance towards claims for our task, and the effect of augmenting our in-domain data with those datasets on the performance of the model.
4. We investigate the performance of the models when adopting variant loss functions to alleviate the class-imbalance issue, and we perform a thorough failure analysis to gain insights for future work on the task.
The rest of this paper is organized as follows.We present our literature review in Section 2 and define the problem we are targeting in this work in Section 3. In Section 4, we present our dataset construction approach.Our experimental approach is presented in Section 5. We discuss the experimental setup in Section 6 and thoroughly analyze the results and answer the research questions in Section 7. We conduct a failure analysis to gain insights for future directions and discuss the limitations of our study in Section 8. Finally, we conclude and suggest some future directions in Section 9.
Related Work
In this section, we briefly review the studies related to our work. Specifically, we review studies on rumor debunking in social media in Section 2.1, give an overview of studies on stance detection for claim verification in Section 2.2, and review studies on authorities for rumor verification in Section 2.3.
Rumor Debunking in Social Media
Several studies on rumors debunking in Twitter suggested exploiting online debunkers, i.e., users who share fact-checking URLs to stop the propagation of a circulating rumor [32][33][34][35][36][37].To encourage online debunkers in Twitter remain engaged in correcting rumors, some studies proposed fact-checking URLs recommender systems [32,36].Vo and Lee [33,35] proposed a fact-checking response generator framework to stop the propagation of fake news, and exploited the replies of users who usually debunk rumors in Twitter to implement their model.Vo and Lee [34] on the other hand introduced a multimodal framework to retrieve fact-checking articles to be incorporated into rumor spreaders conversations threads to discourage propagating rumors in social media.Differently, in our work we consider authorities as credible debunkers who may post tweets supporting or debunking a specific rumor circulating in Twitter.
Stance Detection for Claim Verification
A myriad of studies have investigated detecting the stance towards claims to identify their veracity [38], some focusing on the stance of conversation threads in social media [19,20,39] and others on the stance of news articles [21,22,40,41]. Existing studies either considered stance detection as an isolated module in the verification system [19][20][21][39] or treated the stance of the evidence towards the claim as the veracity label [42][43][44][45]. Multiple approaches considering verification as stance detection were proposed recently, mainly targeting the stance of articles towards claims, by exploiting either transformer-based models [22,45,46] or graph neural networks [47][48][49]. On the other hand, studies considering stance detection as a standalone component in the verification pipeline mainly target the stance of conversation threads towards rumors in social media. A plethora of models were proposed to detect the stance of conversation threads, such as the tree and hierarchical transformers proposed by Ma and Gao [50] and Yu et al. [7], respectively.
A few studies recently addressed stance detection for Arabic claim verification, where the evidence is either news articles [22,41] or manually crafted sentences derived from article headlines [46]. In contrast, in our work we define the task of detecting the stance of authorities towards Arabic rumors, consider it as a standalone component in the rumor verification pipeline, and release the first dataset for the task. We study the usefulness of existing Arabic datasets of stance towards claims for the task, and we evaluate the performance of the stance models when in-domain data is incorporated for training. Finally, we investigate two loss functions that showed promising results in alleviating the class-imbalance issue, identified as a major challenge for stance detection for rumor verification [51].
Authorities for Rumor Verification
A closer look to the literature on rumor verification in social media reveals that no study to date has examined exploiting evidence from authorities.Existing studies for rumor verification in social media exploited evidence from the propagation networks [8,9,13,14,16], Web [17], and stance of conversational threads [19,20,39].
Recently, Haouari et al. [23] introduced the task of authority finding in Twitter, which they define as follows: given a tweet stating a rumor, retrieve a ranked list of authority accounts from Twitter that can help verify the rumor, i.e., they may tweet evidence that supports or denies the rumor. The authors released the first Arabic test collection for the task, and proposed a hybrid model that exploits lexical, semantic, and user network signals to find authorities. The authority finding task was then introduced as part of the CheckThat! 2023 lab shared tasks [52,53], and it was deployed as a system component as part of a live system for Arabic claim verification [54]. Differently, in our work we assume that the authority is already retrieved, and the task is to detect the stance of its tweets towards a given rumor.
Overview of Our Work
Figure 2 shows an example of a rumor about the establishment of a new railway connecting the Sultanate of Oman and the United Arab Emirates (UAE). We assume that the authorities for this rumor are retrieved by an "authority finding" model (here, some of the highly relevant authorities are the Ministry of Transport in Oman, the Omani government communication center, and both Oman's and the UAE's rail projects). The figure shows an example tweet from each of the timelines of the authorities that actually supports the rumor.

Fig. 2 An example of a rumor along with its corresponding authorities and a set of supporting tweets detected from the authorities' timelines (the example is from our constructed AuSTR dataset).

In this work, we introduce the task of detecting the stance of authorities towards rumors in Twitter. Due to the lack of datasets for the task, we construct and release the first Authority STance towards Rumors (AuSTR) dataset (Section 4). We exploit both fact-checking articles and authority Twitter accounts to manually collect debunking, supporting, and other (rumor tweet, authority tweet) pairs. Additionally, we propose a semi-automated approach utilizing the Twitter search API to further expand our debunking pairs. Due to the limited size of our dataset, we investigate the usefulness of existing datasets of stance towards Arabic claims (Section 7.1 and Section 7.2). Adopting a BERT-based stance model, we perform extensive experiments using 5 variant Arabic stance datasets, where the target is a claim but the context is either an article, an article headline, or a tweet, to investigate whether the stance model trained with each of them is able to generalize to our task. We then explore the effect of augmenting our in-domain data with each of the Arabic stance datasets on the performance of the model (Section 7.3). To mitigate the class-imbalance issue, we explore variant loss functions replacing the cross-entropy loss (Section 7.4). Finally, we conduct a thorough error analysis to gain insights for future improvements (Section 8.1).
Constructing AuSTR Dataset
To address the lack of datasets of authority stance towards rumors, in this work, we introduce the first Authority STance towards Rumors (denoted as AuSTR) dataset.Our focus is on Arabic, as it is one of the most popular languages in Twitter [55], yet it is under-explored for rumor verification.Our dataset consists of 811 pairs of rumors (expressed in tweets) and authority tweets related to 292 unique rumors.Tweets of authorities are labeled as either disagree, agree, or other, as defined earlier.To construct AuSTR, we collected the debunking pairs manually (details in Section 4.1) by exploiting fact-checking articles and adopting a semi-automated approach.Supporting pairs were collected by manually exploring authority accounts and the Twitter search interface, in addition to utilizing the fact-checking articles (details in Section 4.2).Finally, to collect our other pairs we manually examined the timelines of the authorities of our debunking and supporting pairs to select tweets that are neither agreeing nor disagreeing with the rumor, in addition to exploiting fact-checking articles (details in Section 4.3).
Collecting Debunking Pairs
Figure 3 depicts an overview of our approach to construct the debunking pairs of AuSTR.We leveraged both the fact-checking articles and a semi-automated approach which we propose in this work.
Exploiting Fact-Checking Articles
Fact-checkers who attempt to verify rumors usually provide, in their fact-checking articles, some examples of social media posts (e.g., tweets) propagating the specific rumors, along with other posts from trusted authorities that constitute evidence to support their verification decisions. For AuSTR, we exploit both types of example tweets, i.e., those stating rumors and those showing evidence from authorities, as provided by those fact-checkers. Specifically, we used AraFacts [56], a large dataset of Arabic rumors collected from 5 fact-checking websites. From those rumors, we selected only the ones that are expressed in tweets and for which the fact-checkers provided evidence in tweets as well. For false rumors, we selected a single tweet example of the rumor and all provided evidence tweets for it, which are then labeled as having disagree stances.
Adopting this approach, we ended up with 118 debunking pairs.
Fig. 3 Our approach for collecting AuSTR debunking pairs.
Exploiting Twitter Search
Additionally, we adopted a semi-automated approach to collect more debunking pairs using Twitter search. First, we used the Twitter Academic API to collect potentially-debunking tweets, i.e., tweets with denying keywords and phrases such as "fake news," "fabricated," "rumors," and "denied the news." Specifically, we used 21 keywords/phrases to search Twitter and retrieve Arabic tweets from the period of July 1, 2022 to December 31, 2022. To narrow down our search and reduce noisy tweets, we excluded retweets and the tweets of non-verified accounts. Given that fact-checkers usually use most of these keywords to debunk rumors, we also excluded tweets from verified Arabic fact-checking Twitter accounts. By adopting this approach, we were able to collect either debunking tweets from authorities themselves, or just pointer tweets from journalists or news agencies. For both types, we retrieved the rumor tweets by searching the Twitter user interface using the main keywords in the rumor debunked by the authorities. For the latter type, we manually examined the timelines of authorities to get the debunking tweets. Table 1 presents examples of debunking tweets from authorities along with the search keywords used to retrieve them. An example of an automatically-retrieved pointer tweet and the manually-collected disagree pair is presented in Table 2.
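A minimal sketch of how such a filtered keyword search could be issued programmatically is shown below; the bearer token, the English example keywords (the actual keywords were Arabic), and the result handling are placeholders, and access to the full-archive search endpoint depends on the API tier available.

# Hypothetical sketch: retrieving candidate debunking tweets with the Twitter (X) API v2
# full-archive search via tweepy. Token, query terms, and limits are placeholders.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN", wait_on_rate_limit=True)

# Arabic tweets from verified accounts, excluding retweets; the denial keywords here are examples.
query = '("fake news" OR "fabricated") lang:ar is:verified -is:retweet'

response = client.search_all_tweets(
    query=query,
    start_time="2022-07-01T00:00:00Z",
    end_time="2022-12-31T23:59:59Z",
    max_results=100,
    tweet_fields=["author_id", "created_at"],
)
for tweet in response.data or []:
    print(tweet.id, tweet.created_at, tweet.text[:80])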
Table 1 Examples of debunking authority tweets (and their English translations) collected using the semi-automated approach along with the search keywords.
Search keywords | Example of a collected tweet
Incorrect | @AymanNour: Statement from #Ghad El Thawra: One of the sites published incorrect news about the party's decision to call for the 11/11 movement ...
Fake news | @LebISF: Denying a fake news published by a Lebanese newspaper about the arrest of Major General Othman's brother
Untrue | @IraqiSpoxMOD: ... news about (the disappearance of an American citizen in central or southern Iraq, under mysterious circumstances, who works as a journalist). We confirm that this news is untrue ...
Fabricated | @AlAhlyTV: ... Al-Ahly's objection speech about Zamalek club uniforms in the super is fabricated ...
Rumors | @DGSGLB: #Statement: rumors are circulating that the General Directorate of General Security arrested Sally Hafez, who broke into a bank in Beirut ...
Table 2 An example of an automatically collected pointer debunking tweet along with its manually collected debunking pair (with their English translation).
Tweet type | Tweet text
Pointer | @naharkw: The Qatari Embassy in Tunisia: Incorrect.. A Qatari was killed in the ancient city of Bizerte. [11-
Debunking | The Embassy of the State of Qatar in the Republic of Tunisia denies what was reported by the media that the victim in the Bizerte incident holds Qatari nationality, and expresses its condolences to the victim's family and relatives.
Collecting Supporting Pairs
To collect supporting pairs, we adopted two approaches, as presented in Figure 4. Given that fact-checkers focus more on false rumors than true ones, exploiting fact-checking articles was not sufficient to collect supporting tweets: adopting this approach, we were able to collect only 4 agree pairs as opposed to 118 disagree pairs. Thus, we manually collected a set of governmental Arabic Twitter accounts representing authorities related to health and politics, such as ministries and ministers, embassy accounts, and Arabic sports organization accounts (e.g., football associations and clubs). Starting from 172 authority accounts from multiple Arabic countries, we manually checked the timelines of those authorities from the period of July 1, 2022 to December 31, 2022. We selected check-worthy tweets, i.e., tweets containing verifiable claims that we think will be of general interest [57], and considered them as authority supporting tweets. We then used the main keywords in each claim to search Twitter through the user interface and selected a tweet propagating the same claim while avoiding near-duplicates. We ended up with 148 agree pairs in total. Table 3 shows an example of a supporting authority tweet along with a relevant rumor.
Collecting Other Pairs
For some rumors, fact-checkers provide the authority account in their fact-checking article, but they state that no evidence was found to support or deny the rumor.For this case, we selected one or two tweets from the authority timeline posted soon before the rumor time, and assigned the other label to those pairs.In reality, most of the tweets in authority timelines are neither supporting nor denying a given rumor.To get closer to that real scenario, for each agree and disagree pair, we manually examined the timeline of the authority within the same time period of the rumor, and selected at most two tweets, where we give higher priority to tweets related to the rumor's topic or at least have an overlap in some keywords with the rumor.A tweet of those is then labeled as other if it is either relevant to the rumor but is neither disagreeing nor agreeing with it, or it is completely irrelevant to it.We ended up with 466 other pairs.
It is worth noting that the evidence from authorities is not always expressed in the textual body of the tweet.We considered the case when some authorities may post evidence as an announcement embedded in an image or video.
Data Quality
We present our dataset statistics in Table 4.Our data was annotated by one of the authors, a PhD candidate and native Arabic speaker working on rumor verification in Twitter.To measure the quality of our data, we randomly picked 10% of the pairs and asked a second annotator, a PhD holder and native Arabic speaker, to label them.The computed Cohen's Kappa for inter-annotator agreement [58] was found to be 0.86, which indicates "almost perfect" agreement [59].
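For reference, the agreement score can be computed with scikit-learn as in the following sketch; the annotation labels below are dummy values, not the actual annotations.

# Sketch: inter-annotator agreement on a doubly-annotated sample (dummy labels).
from sklearn.metrics import cohen_kappa_score

annotator_1 = ["agree", "other", "disagree", "other", "other"]
annotator_2 = ["agree", "other", "disagree", "agree", "other"]
print(cohen_kappa_score(annotator_1, annotator_2))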
Experimental Design
Due to the limited size of AuSTR, one of the main objectives of this work is to study the adequacy of using existing datasets of stance towards claims in training models for our task. Specifically, the goal is to first study whether models trained with existing stance datasets perform well on detecting the stance of authorities in particular, then investigate whether augmenting them with AuSTR improves the performance of those models. Moreover, since a major challenge of stance classification is the class-imbalance problem in the data [51], we also aim to explore whether incorporating different loss functions can mitigate that issue to further improve the performance of the models. Accordingly, we aim to answer the following research questions:
• RQ1: To what extent will stance models trained with existing stance datasets be able to generalize to the task of detecting the stance of authorities?
• RQ2: What is the effect of combining all existing stance datasets for training?
• RQ3: Will training a stance model with AuSTR solely be sufficient? Will augmenting AuSTR with existing stance datasets for training improve the performance?
• RQ4: Will adopting different loss functions mitigate the class-imbalance problem and thus improve the performance?
To address those research questions, we design our experiments as follows:
• Cross-domain experiments denote the case where existing datasets of stance towards claims are exploited for training. Each of the stance datasets is first used solely for training our models, then all datasets are aggregated and used for training. We refer to the datasets of stance towards claims as cross-domain datasets in the rest of the paper.
• In-domain experiments denote the case where AuSTR is used solely for training. We refer to AuSTR as the in-domain dataset.
• In-domain augmented experiments denote the case where AuSTR is augmented with existing datasets of stance towards claims. In those experiments, we study the effect of augmenting AuSTR with each of the cross-domain datasets separately, in addition to augmenting it with all of them.
• Class-imbalance experiments denote the case where we adopt different loss functions, which showed promising results earlier in the literature, to alleviate the class-imbalance problem.
Experimental Setup
In this section, we present the setup we adopted to conduct our experiments.
Datasets
To study the adequacy of existing Arabic datasets of stance detection towards claims for the task of detecting the stance of authorities, we adopted the following five existing datasets in training:
• ArCOV19-Rumors [14] consists of 9,413 tweets relevant to 138 COVID-19 Arabic rumors collected from 2 Arabic fact-checking websites. We considered the tweets expressing the rumor as supporting (agree), the ones negating the rumor as denying (disagree), and the ones discussing the rumor but neither expressing nor negating it as other.
• STANCEOSAURUS [60] consists of 4,009 (rumor, tweet) pairs. The data covers 22 Arabic rumors collected from 3 Arabic fact-checking websites along with tweets, collected by the authors, that are relevant to the rumors. The relevant tweets were annotated by their stance towards the rumor as either supporting (agree), refuting (disagree), discussing, querying, or irrelevant. In our work, we considered the last three labels as other.
• ANS [46] consists of 3,786 (claim, manipulated claim) pairs, where claims were extracted from news article headlines from trusted sources, then annotators were asked to generate true and false sentences towards them by adopting paraphrasing and contradiction, respectively. The sentences are annotated as either agree, disagree, or other.
• ArabicFC [41] consists of 3,042 (claim, article) pairs, where claims are extracted from a single fact-checking website verifying political claims about the war in Syria, and articles were collected by searching Google using the claim. The articles are annotated as either agree, disagree, discuss, or unrelated to the claim. In our work, we considered the last two labels as other.
• AraStance [22] consists of 4,063 (claim, article) pairs, where claims are extracted from 3 Arabic fact-checking websites covering multiple domains and Arab countries.The articles were collected and annotated similar to ArabicFC.
Figure 5 presents the per-class statistics for each dataset (including AuSTR), and Table 5 shows an example of a debunking text from each of them.
Data Splits
Given that AuSTR constitutes only 811 pairs, we adopt cross-validation for evaluating our models.We randomly split it into 5 folds while assigning all pairs that are relevant to the same rumor to the same fold to avoid label leakage across folds.
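For illustration, rumor-level grouping of this kind can be obtained with scikit-learn's GroupKFold; the snippet below is a sketch with dummy pairs (not AuSTR data) and two folds only because of the tiny dummy set, whereas five folds are used in practice, and it is not the exact splitting script used in this work.

# Sketch: grouping (rumor, authority tweet) pairs by rumor so that no rumor spans folds.
# The example pairs below are dummy placeholders, not AuSTR data.
from sklearn.model_selection import GroupKFold

pairs = [
    {"rumor_id": "r1", "text": ("rumor 1", "authority tweet a"), "stance": "disagree"},
    {"rumor_id": "r1", "text": ("rumor 1", "authority tweet b"), "stance": "other"},
    {"rumor_id": "r2", "text": ("rumor 2", "authority tweet c"), "stance": "agree"},
]
texts  = [p["text"] for p in pairs]
labels = [p["stance"] for p in pairs]
groups = [p["rumor_id"] for p in pairs]   # pairs of the same rumor share a group id

gkf = GroupKFold(n_splits=2)
for fold, (train_idx, test_idx) in enumerate(gkf.split(texts, labels, groups)):
    print(f"fold {fold}: train={list(train_idx)}, test={list(test_idx)}")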
For all of our models, whether AuSTR is exploited for training or not, we both tune and test only on folds from AuSTR; a single AuSTR fold (dev fold) is used for tuning the models and another (test fold) is used for testing. If AuSTR is used for training, the remaining 3 folds (training folds) are used for that purpose. When the cross-domain datasets are used for training, they are fully used for that purpose (and none of them is used for tuning or testing). For each experiment, we train 5 models to test on the 5 different folds of AuSTR, and finally report the average performance of the five models.
Stance Models
To train our stance models, we fine-tuned BERT [61], following recent studies that adopted transformer-based models for stance detection [22,60,62,63], to classify whether the evidence agrees with the claim, disagrees with it, or neither (other). We feed BERT the claim text as sentence A and the evidence as sentence B (truncated if needed), separated by the [SEP] token. Finally, we use the representation of the [CLS] token as input to a single classification layer with three output nodes, added on top of the BERT architecture, to compute the probability of each stance class. Various Arabic BERT-based models were released recently [64][65][66][67][68]; we opted for ARBERT [68] as it was shown to achieve better performance on most of the stance datasets adopted in our work [22]. All models were trained for a maximum of 25 epochs, with 5 epochs as the early-stopping threshold. We tuned our models over three learning rates (1e-5, 2e-5, 3e-5). The sequence length and batch size were set to 512 and 16, respectively.
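A minimal sketch of this pair-classification setup with the Hugging Face transformers library is shown below; it is illustrative rather than our exact training script, and "UBC-NLP/ARBERT" is assumed to be the public ARBERT checkpoint identifier.

# Sketch: BERT-style pair classification (claim = sentence A, evidence = sentence B).
# "UBC-NLP/ARBERT" is assumed to be the public ARBERT checkpoint; adjust if needed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "UBC-NLP/ARBERT"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

claim = "..."      # rumor tweet text (placeholder)
evidence = "..."   # authority tweet text (placeholder)

# The tokenizer builds [CLS] claim [SEP] evidence [SEP] and truncates to 512 tokens.
inputs = tokenizer(claim, evidence, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # shape (1, 3): agree / disagree / other
probs = torch.softmax(logits, dim=-1)
print(probs)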
Preprocessing
We processed all the textual content by removing non-Arabic text, special characters, URLs, diacritics, and emojis from the tweets. For STANCEOSAURUS, we extended the tweets with their context as suggested by the authors [60], who showed that extending the tweets with the parent tweet text and/or embedded article titles can improve the performance of the stance models.

Loss Functions

We adopted the Cross Entropy (CE) loss in all our experiments. However, due to the imbalanced class distribution, we also experimented with the Weighted Cross Entropy (WCE) loss and the Class-Balanced Focal (CBF) loss [69], adopted by Baheti et al. [70] and Zheng et al. [60] to mitigate the issue for stance detection. For CBF, we set the

Fig. 6 The performance of models trained using cross-domain vs. in-domain datasets.
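As a reference point, a possible PyTorch sketch of the class-balanced focal loss mentioned above is given below; the beta and gamma values are common defaults and placeholders, not necessarily the exact settings of our experiments.

# Sketch of a class-balanced focal loss (Cui et al. 2019) for 3 stance classes.
# beta and gamma are common defaults here, not necessarily the values used in this work.
import torch
import torch.nn.functional as F

def class_balanced_focal_loss(logits, targets, samples_per_class, beta=0.999, gamma=2.0):
    # Effective number of samples per class: (1 - beta^n_c) / (1 - beta)
    counts = torch.tensor(samples_per_class, dtype=torch.float)
    effective_num = 1.0 - torch.pow(torch.tensor(beta), counts)
    weights = (1.0 - beta) / effective_num
    weights = weights / weights.sum() * len(samples_per_class)   # normalise to sum to #classes

    ce = F.cross_entropy(logits, targets, reduction="none")      # per-example cross entropy
    pt = torch.exp(-ce)                                          # probability of the true class
    focal = (1.0 - pt) ** gamma * ce                             # focal modulation
    return (weights[targets] * focal).mean()

# Example with random logits over (agree, disagree, other) and dummy class counts.
logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 2])
loss = class_balanced_focal_loss(logits, targets, samples_per_class=[150, 200, 460])
print(loss.item())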
Evaluation Measures
To evaluate our models, we report the average of macro-F1 scores across the 5 folds of AuSTR, in addition to the average per-class F1. Macro-F1 is recommended for evaluating stance models [71] due to the class-imbalanced nature of stance datasets.
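For completeness, these scores can be computed per fold with scikit-learn as in the following sketch; the labels below are dummy values.

# Sketch: macro-F1 and per-class F1 for one test fold (dummy predictions).
from sklearn.metrics import f1_score

labels = ["agree", "disagree", "other"]
y_true = ["disagree", "other", "agree", "other", "disagree"]
y_pred = ["disagree", "other", "other", "other", "agree"]

macro = f1_score(y_true, y_pred, average="macro")
per_class = f1_score(y_true, y_pred, labels=labels, average=None)
print(f"macro-F1={macro:.3f}", dict(zip(labels, per_class.round(3))))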
Experimental Evaluation
In this section, we present and discuss the results of our experiments that address the research questions introduced in Section 5.
Leveraging Cross-domain Datasets for Training (RQ1)
To address RQ1, we used the five cross-domain datasets listed earlier for training.
For each of them, we train on the full cross-domain dataset, then fine-tune 5 stance models; each is tuned on one fold from AuSTR and tested on another fold.We report the average performance on testing on the 5 folds of AuSTR in Figure 6.
The figure reveals several observations. First, the performance on the Disagree class is notably worse than on the other two classes for four out of the five training datasets. This indicates that detecting disagreement is generally more challenging than detecting agreement or irrelevance.
Second, comparing the performance across the individual cross-domain datasets, it is clear that we have two categories of performance.The first, including AraStance and ArCOV19-Rumors, is performing much better than the other one, including the remaining three datasets.Among the superior category, the model trained on AraStance exhibits the best performance.
As for the inferior category, we speculate the rationale behind their performance.We note that ArabicFC is severely imbalanced, where the disagree class represents only 2.86% of the data, yielding a very poor performance on that class.Moreover, it covers claims related to only one topic , which is the Syrian war, making it hard to generalize.A similar conclusion was found by previous studies that used ArabicFC [22,41].As for ANS, evidence was manually/artificially crafted, which is not as realistic as tweets from authorities.As for STANCEOSAURUS, it covers tweets relevant to only 22 claims.
As for the superior category, we observe that AraStance and ArCOV19-Rumors achieved the highest F 1 on the disagree class compared to the other cross-domain datasets.ArCOV19-Rumors covers 138 COVID-19 claims in several topical categories.AraStance covers 910 claims, which are extracted from three fact-checking websites, covering multiple domains and Arab countries, similar to AuSTR, and the evidence is represented in articles written by journalists, not manually crafted.To further investigate their performance, we manually examined 20% of AraStance and ArCOV19-Rumors disagreeing training pairs.We found that about 68% and 59% of the examined examples of AraStance and ArCOV19-Rumors respectively share common debunking keywords, such as "rumors," "not true," "denied," and "fake;" similar keywords appear in some disagreeing tweets of AuSTR.
To further investigate the relation between the datasets and the performance of the corresponding models, we analyzed the lexical similarity between the datasets. We first constructed a 2-gram vector representation for each dataset (including AuSTR) using the preprocessed context (excluding the claims), then we computed the pairwise cosine similarity between the vectors to get insights about the similarity between the corresponding datasets. Figures 7(a) and 7(b) present heatmaps of similarity between the debunking contexts and overall contexts of the datasets, respectively. It is clear that the performance of the cross-domain models is strongly related to the dataset similarities. In particular, AraStance has the highest similarity with AuSTR on the debunking context (0.20) and overall context (0.25), respectively. That resulted in the best performing cross-domain model, achieving a macro-F1 of 0.771 and F1 (disagree) of 0.687. Moreover, ArCOV19-Rumors has the second highest similarity with AuSTR on the debunking context (0.10) and yields the second best performing cross-domain model, achieving F1 (disagree) of 0.621. It is worth noting that although ArabicFC has the second highest similarity on the overall context, the model trained on it did not perform well, especially on the disagree class, with F1 of 0.332, due to the severe imbalance mentioned earlier.
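A sketch of this similarity computation is shown below; the dataset strings are placeholders standing in for the concatenated, preprocessed context texts.

# Sketch: pairwise cosine similarity between datasets represented as 2-gram count vectors.
# Each "dataset" string below stands in for the concatenated, preprocessed context texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

datasets = {
    "AuSTR":     "concatenated preprocessed authority tweets ...",
    "AraStance": "concatenated preprocessed article texts ...",
    "ArabicFC":  "concatenated preprocessed article texts ...",
}
vectorizer = CountVectorizer(ngram_range=(2, 2))
matrix = vectorizer.fit_transform(datasets.values())   # one row per dataset

sims = cosine_similarity(matrix)
names = list(datasets.keys())
for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i < j:
            print(f"{a} vs {b}: {sims[i, j]:.2f}")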
In summary, we found that AraStance is the best existing stance dataset for training a model for the task, as it covers a large number of fact-checked claims spanning multiple Arabic countries and topics compared to the other datasets.To answer RQ1, we conclude that some cross-domain stance datasets are somewhat useful for detecting the stance of authorities.However, motivated by the findings of Ng and Carley [63] who highlighted the potential benefit of aggregating datasets to enhance the stance detection, we were encouraged to conduct our subsequent experiments, in which we combine all cross-domain datasets for training.
Combining Cross-domain Datasets for Training (RQ2)
To address RQ2, we combined all cross-domain datasets and adopted the same setup mentioned previously, where we tune and test on AuSTR folds. As presented in Figure 6, we note that, overall, the combined model achieved a very slightly better performance in terms of macro-F1 over the best individual model, i.e., the model trained with AraStance only. However, considering the individual classes, it exhibited the best performance for the agree class by a big margin compared to the AraStance model, but it fell short for the disagree class. We speculate the reason is that the models trained on some of the datasets, namely ANS and ArabicFC, achieved low performance on the disagree class, so combining those datasets with others negatively affected the overall performance on that class.
Finally, we observe that there is a clear discrepancy in the performance across different classes; for the combined model, F1 (agree) is 0.793, while F1 (disagree) is 0.653. Moreover, it is clear that detecting the disagree stance is still challenging, for which we expect to benefit from introducing our in-domain data. We believe that one of the major reasons behind such results is the imbalanced nature of the combined data, where only 14.24% are disagree examples vs. 27.66% agree examples.
To answer RQ2, we found that combining all cross-domain datasets can slightly improve the overall performance compared to the best performing individual model (AraStance), but could not beat it on detecting debunking tweets.
Introducing In-domain Data for Training (RQ3)
To address RQ3, we first trained a stance model with in-domain data only, i.e., AuSTR.We then trained a model with in-domain data augmented with each of the cross-domain datasets separately and also with all cross-domain datasets combined.
As expected, the model trained with AuSTR only outperforms all models trained with cross-domain datasets across all evaluation measures, as shown in Figure 6.More specifically, it outperforms their best (i.e., the model trained with AraStance) by 15.3%, 7.1%, and 7.9% in F 1 (disagree), F 1 (agree), and macro-F 1 respectively, showing a clear need to in-domain data.
What if we augment AuSTR with the cross-domain datasets in training? Figure 8 illustrates that effect. For every single cross-domain dataset, when augmented with AuSTR, the resulting model outperforms the model trained only on the cross-domain data by a big margin, ranging from 6.8% to 35.6% in macro-F1. This re-emphasizes the effect of in-domain data. However, only the model trained on AuSTR+AraStance was able to outperform the AuSTR-only model in macro-F1 and F1 (agree), but not F1 (disagree). It turned out that augmenting AuSTR with AraStance made the disagree class the minority, constituting only 13.3% of the training examples compared to 24.3% of AuSTR training examples, which negatively affects the performance on that class.
Contrary to the results presented in Figure 6, augmenting AuSTR with all crossdomain datasets achieved the lowest macro-F 1 compared to augmenting AuSTR with individual cross-domain datasets.In fact, the combined training data becomes clearly dominated with the cross-domain data (24,313 vs. 811 examples), which leads to negligible effect of the in-domain data.
To answer RQ3, we conclude that in-domain data is needed for better detecting the stance of authorities.Moreover, augmenting AuSTR with AraStance improved the overall performance but at the expense of degrading the performance on detecting debunking tweets, which, we argue, is more crucial for the task.
Addressing the Class-Imbalance Problem (RQ4)
To address RQ4, we selected the best two models presented in Figure 8, namely the one trained with AuSTR only and the one trained with AuSTR augmented with AraStance. We then fine-tuned the stance models with the same previous setup but with two other loss functions, WCE and CBF, as described in Section 6.
As presented in Table 6, we observe that adopting the WCE loss function could not improve the performance of the models compared to adopting CE. However, for the model trained with AuSTR, adopting CBF notably improved the performance over CE by about 4.2% on the agree class, which is the minority class in AuSTR data. However, it slightly degraded the performance on the disagree class. Overall, it improved macro-F1 performance, bringing it closer to the performance of the model trained on AuSTR augmented with AraStance (0.843 vs. 0.845).
Surprisingly, that positive effect of CBF did not extend to the model trained on AuSTR augmented with AraStance; in fact, the performance degraded on all measures. We leave the investigation of such a result to future work. To answer RQ4, we conclude that adopting CBF in addition to training on AuSTR solely is on par with the model trained on both AuSTR and AraStance, nullifying the need for augmenting AuSTR with any cross-domain data for training.
Discussion
In this section, we discuss our evaluation results in terms of failure cases (Section 8.1) and limitations (Section 8.2).
Failure Analysis
We conducted a detailed error analysis on the 113 examples (constituting 14% of the data) that failed to be predicted correctly by the model trained with AuSTR and adopting CBF loss.We categorize the reasons behind these errors based on a thorough examination of the failed pairs.We found that the failures can be attributed to six main reasons which we discuss below.Some failed examples are presented in Table 7.
1. Implicit stance: when an authority indirectly agrees or disagrees with the rumor. For example, P1 is a rumor about the infection of Mahmoud Al-Khatib, the director of the Al-Ahly Egyptian football club, with COVID-19, and the authority tweet implicitly debunks the rumor by mentioning that he is attending the training session of the team in the stadium. This failure type is the cause of 30.09% of all failures, which motivates the need to address this challenge using stance models that take it into consideration.
2. Writing style: where an authority is speaking about itself, e.g., P2. Based on our examination, 12.39% of the failures are due to this reason.
3. Misleading debunking keywords: when an authority is either debunking another rumor that is relevant to the topic of the target rumor, or just including some debunking keywords in its tweets even when supporting a rumor. For example, in P3, the authority tweet mentions that the "information being posted on it today is false," although it is agreeing with the rumor. We found that this constitutes 10.62% of the failures.
4. Misleading relevant keywords: when an authority posts tweets relevant to the topic of the rumor, the model may fail to predict the stance correctly, e.g., in P4. This constitutes 25.66% of the failed examples.
5. Lack of context: when an authority debunks or supports a rumor by an announcement embedded in an image or a video, e.g., in P5. This motivates the need to consider tweet multi-modality [30,72] at the processing step. Moreover, some rumors may need additional context in order to be considered relevant to the authority tweet. We observed that 6.19% of the failures are of this type.
6. Arabic MSA by authorities vs. dialects by normal users: as opposed to English, working with the Arabic language is very challenging, as different dialects, i.e., informal languages, are used in different Arabic countries [73]. These dialects may have a different vocabulary than Modern Standard Arabic (MSA), which is usually used in formal communications [74]. Authority tweets are usually formal and written in MSA, while normal users may use informal Arabic with variant dialects, e.g., in P6, which makes detecting the stance more challenging.
We also observed other reasons, such as having multiple claims in the same tweet, which is causing the stance model to predict the authority tweet as other.Moreover, we noticed that some failures can be attributed to one or more of the reasons mentioned above.These challenges motivate further work on tweet pre-processing to consider embedded content within the tweets, and the need to propose stance models specific for the task.
Limitations of our study
The limitations of our work are related to both our data and the adopted stance models.We discuss these limitations below.
Data
For a portion of our data, we adopted a semi-automated approach, where we collected the disagree pairs starting from a collection of tweets containing debunking keywords. Although most of the automatically collected debunking tweets were just used as pointers to collect implicit debunking tweets, some were already posted by authorities themselves and hence were considered as part of our data. This may cause some kind of bias towards these keywords. Moreover, although AuSTR with its relatively small size yielded good performance, we believe enlarging the data with more rumors covering more topics can help the models generalize better to new emerging rumors.
Stance Models
In our work, we adopted a BERT-based stance model, but we did not experiment with other models, e.g., [75], which might improve the performance we achieved. Moreover, we only experimented with ARBERT [68] as it was shown to perform well for Arabic stance detection on most of our adopted cross-domain datasets [22]; however, we did not experiment with other Arabic BERT models [76].
Conclusion
In this work, we introduced the task of detecting the stance of authorities towards rumors in Twitter, which can be leveraged by automated systems and fact-checkers for rumor verification.We constructed (and released) the first Arabic dataset, AuSTR, for that task using a language-independent approach, which we share to encourage the construction of similar datasets in other languages.Due to the relatively limited size of our dataset, we explored the adequacy of existing Arabic datasets of stance towards claims in training models for our task, and the effect of augmenting our data with those datasets.Moreover, we tackled the class-imbalance issue by incorporating variant loss functions into our BERT-based stance model.Our experimental results suggest that adopting existing stance datasets is somewhat useful but clearly insufficient for detecting the stance of authorities.Moreover, when augmenting AuSTR with existing stance datasets, only the model trained with AuSTR augmented with AraStance outperformed the model trained with AuSTR solely, except on detecting the debunking tweets.However, when adopting the class-balanced focal loss instead of the cross entropy loss, the model trained with AuSTR solely achieved comparable results to that augmented model, indicating that AuSTR solely, despite the limited size, can be sufficient for detecting the stance of authorities.
Finally, out of our extensive failure analysis, we recommend further work on tweet pre-processing to consider context expansion, and exploring other stance models that can detect the implicit stance and take the authorities writing style into consideration.Since our study focused on Arabic data, examining the task in other languages is clearly a potential path for future work.
Fig. 1 Positioning the stance of authorities detection task (highlighted in yellow) in the rumor verification pipeline.
Fig. 5 Per-class statistics of cross-domain datasets adopted in our work, as well as AuSTR for comparison.
Fig. 8 Performance of models trained using in-domain vs. in-domain-augmented data.
Table 3
An example of manually collected supporting authority tweet and a relevant rumor tweet expressing the same claim.
Authority@Moi kuw: A resident who tried to commit suicide by stabbing himself inside a mosque was first aided, and the person was kept and the necessary legal measures are being taken in the incident.[04-
Table 5
Debunking examples (and their English translations) from the cross-domain datasets.
Table 6
Training with different loss functions.Boldfaced and underlined numbers are the best and second best respectively per measure.
Table 7
Sample examples failed to be predicted correctly by our best model.Failure types are implicit stance, writing style, misleading debunking keywords, misleading relevant keywords, lack of context, and non-MSA Arabic in order.
|
2024-01-27T14:19:02.389Z
|
2024-01-26T00:00:00.000
|
{
"year": 2024,
"sha1": "f2a4c6e3e62aee692cfcf8bf6f08efe8fb44a751",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13278-023-01189-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8ac827f920e93851435bb50aa7c582674f624cae",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
}
|
36325597
|
pes2o/s2orc
|
v3-fos-license
|
Quantification of Antimalarial Quassinoids Neosergeolide and Isobrucein B in Stem and Root Infusions of Picrolemma sprucei Hook F. by HPLC-UV Analysis
Rita C. S. Nunomura1, Ellen C. C. Silva1, Sergio M. Nunomura2, Ana C. F. Amaral3, Alaíde S. Barreto3, Antonio C. Siani3 and Adrian M. Pohlit2 1Department of Chemistry, Amazon Federal University (UFAM), Amazon, 2Coordenation of Research in Natural Products, Amazon National Institute (INPA), Amazon, 3Laboratory of Natural Products, Farmanguinhos, Oswaldo Cruz Institute Foundation (FIOCRUZ), Rio de Janeiro, Brazil
Introduction
Natural products have been very important for the survival of man since ancient times, especially as remedies to treat different diseases. Today, despite the development of new therapies and new approaches to drug development (e.g., combinatorial chemistry), natural products continue to play a highly significant role in the drug discovery and development process (Newman, Cragg, 2007).
Even though fewer drugs have been approved as therapeutic agents lately, nature still inspires drug development for neglected diseases (malaria, tuberculosis and leishmaniasis) and alternative therapies such as phytotherapy. In both cases, the plants most studied are medicinal plants, i.e., plants that have been used in folk medicine for years. The World Health Organization (WHO) recognized the importance of phytotherapy and of the conservation of medicinal plants, stating that "the importance of conservation is recognized by WHO and its Member States and is considered to be an essential feature of national programmes on traditional medicines" (Akerele, 1991).
The successful use of some medicinal plants by local populations for years, in many cases for centuries, in the treatment of diseases or of symptoms associated with certain diseases is the basis for developing drugs or other therapeutic products from them. For instance, artemisinin, a very potent antimalarial, including against drug-resistant malaria strains, was isolated from Artemisia annua L., a plant from traditional Chinese medicine used as a remedy for chills and fever for more than 2000 years (Agtmael et al., 1999).
On the other hand, there is an increasing interest in medicines from nature. This interest in products of plant origin is due to several reasons, such as the possible side-effects of synthetic drugs and the perception that "natural products" are harmless. The world market for phytomedicinal products was estimated at US$ 10 billion in 1997, with an annual growth of 6.5%. In Germany, 50% of phytomedicinal products are sold on medical prescription, with the cost being refunded by health insurance. This includes pharmaceutical formulations such as plant extracts or purified fractions, called phytomedicines or herbal remedies. In many countries, phytomedicines or herbal remedies are regulated like synthetic drugs and have to fulfill the same criteria of efficacy, safety and quality control (Rates, 2001).
However, the quality control of phytomedicines poses a significant challenge due to the complexity of a vegetable extract, and column chromatography has proved to be a very helpful and powerful technique in this regard. The quality control of Ginkgo biloba L. formulations is a good example of this challenge. Ginkgo leaves contain, as active compounds, flavonoids and terpene lactones (ginkgolides and bilobalide), along with long-chain hydrocarbons, alicyclic acids, cyclic compounds, sterols and carotenoids, among others. Most of the quality control of Ginkgo preparations is based on column chromatography, which has been reviewed elsewhere (Sticher, 1992; van Beek, 2002).
Column chromatography, especially high performance liquid chromatography (HPLC), has been extensively used in the quality control of plant extracts and phytomedicine formulations because of its characteristics. The chosen technique must be able to identify the compounds of interest (active principles), which are normally not volatile and, in some cases, occur at very small concentrations. Ideally, this technique should also be capable of quantifying the compounds of interest, so that dosages can be established for the phytomedicine formulation. The efficiency and selectivity required for qualitative and quantitative analysis of the effective components can be achieved by HPLC. Li et al. (2011) have recently reviewed the use of different chromatographic techniques, such as HPLC, in the quality control of Chinese medicine.
Although HPLC is a very powerful technique applied in the quality control of medicinal plants, it is necessary to properly identify the active principles of the medicinal plant. This is achieved by combining the use of HPLC, or another separation technique, with a biological test. The search for antimalarials from medicinal plants is one of the most successful examples of this combination, as mentioned earlier. In the Amazon region, there is a large number of plants popularly used against malaria or its associated symptoms (fever, for instance). Milliken (1997) identified over a hundred antimalarial plants used by the local population in the Amazon region. Many of these plants remain without a study that could confirm their antimalarial activity.
Among the few plants studied so far, Picrolemma sprucei Hook. f. has been studied by our research group. Herein we describe the use of HPLC in the quality control of the antimalarial quassinoids neosergeolide and isobrucein B, the active principles of this species.
Picrolemma sprucei Hook. f. (P. pseudocoffea Ducke is a commonly cited synonym) is a widely distributed and important Amazonian medicinal plant. It is known in the Amazon region by common names which call attention to its resemblance to the coffee plant: sachacafé in Peru (Duke & Vasquéz 1994), caferana in Brazil (Silva et al. 1977) and café lane or tuukamwi in French Guiana (Grenand et al. 1987). Infusions of roots, stems, and leaves of P. sprucei are traditionally used in different dosages and preparations for the treatment of malaria fevers (Bertani et al. 2005, Vigneron et al. 2005, Milliken 1997), gastrointestinal problems and intestinal worms (Moretti et al. 1982, Duke & Vasquéz 1994). Also, the sale of this plant is sometimes restricted by local vendors due to its use in provoking spontaneous abortions. Studies on the biological activity of infusions and other derivatives of P. sprucei have shown that extracts of this plant have important antimalarial and antihelminthic activities. Bertani et al. (2005) reported that a P. sprucei leaf infusion inhibited 78% of Plasmodium yoelii rodent malaria growth in vivo at a dosage of 95 mg/kg. Furthermore, these same authors reported that, of a total of 36 preparations from 25 traditionally used antimalarial plants from French Guiana, P. sprucei leaf infusion had the greatest in vitro activity against the human malaria parasite Plasmodium falciparum (median inhibition concentration, IC50 = 1.43 μg.mL⁻¹). These results indicate that P. sprucei leaf extracts have potential as antimalarials.
In 2006, Nunomura et al. showed that water and ethanol extracts of P. sprucei at concentrations of 1.3 g.L⁻¹ were lethal (90-95% mortality) in vitro towards larvae of the nematode species Haemonchus contortus (barber pole worm), a gastrointestinal nematode parasite found in domestic and wild ruminants. These studies lend support to popular assertions that infusions and other derivatives of P. sprucei have important antimalarial and antihelminthic activities. Two quassinoids have been isolated from P. sprucei roots, stems and leaves and identified as isobrucein B (1) (Moretti et al. 1982) and neosergeolide (2) (Schpector et al. 1994, Vieira et al. 2000). Quassinoid is the name given to any of a number of bitter substances found exclusively in the Simaroubaceae family (Polonsky 1973). Early reports on the composition of P. sprucei from French Guiana (Moretti et al. 1982) described the isolation of sergeolide (3), a structural isomer of 2, and a derivative, 15-deacetylsergeolide (4) (Polonsky et al. 1984), from the leaves. Since confirmation of the structure of 2 by X-ray crystallography (Schpector et al. 1994) and the systematic application of two-dimensional NMR techniques to the identification of components of P. sprucei (Vieira et al. 2000, Andrade-Neto et al. 2007), neither sergeolide nor its derivative has ever again been described, and these may be erroneous structures.
Chemically, quassinoids are degraded triterpene compounds which are frequently highly oxygenated. Many quassinoids exhibit a wide range of biological activities in vitro and/or in vivo, including antitumor, antimalarial, antiviral, anti-inflammatory, antifeedant, insecticidal, amoebicidal, antiulcer and herbicidal activities. For instance, bruceantin (5), brusatol (6), simalikalactone D (7), quassin (8) and glaucarubinone (9) are some of the most well-studied quassinoids and exhibit a wide range of biological activities (Guo et al. 2005). Isobrucein B (Fandeur et al. 1985) and neosergeolide (Andrade-Neto et al. 2007) display significant in vitro antimalarial activity against the human malaria parasite P. falciparum. Recently, the in vitro antimalarial activities of isobrucein B and neosergeolide were shown to be comparable to those of the antimalarial drugs quinine and artemisinin (Silva et al. 2009). According to this same in vitro study, isobrucein B and neosergeolide are as cytotoxic as, or as much as an order of magnitude more cytotoxic than, the antitumor drug doxorubicin towards several human tumor strains. Additionally, isobrucein B has been shown to have important antileukemic, antifeedant and leishmanicidal activities (Moretti et al. 1982; Nunomura, 2006). Bertani et al. (2005) conveyed concern about the toxicity of infusions and other preparations based on different parts of P. sprucei, which is recognized in Amazonian traditional medicine in general. Additionally, these authors were critical of the absence of knowledge of the toxicity of infusions prepared from this species and of the lack of information on the quassinoid composition of these infusions in the study on toxicity published by Fandeur et al. (1985), which focused only on the acute toxicity and antimalarial activity of isolated quassinoid components of P. sprucei and not on the toxicity and antimalarial activity of infusions. Additional studies are needed to prove the in vivo efficacy and pharmacological activity of these infusions as antimalarials, with focus on dose-effect and dose-response relationships, to define the levels of toxicity. The aim of the present study was to develop a method for the quantification of isobrucein B and neosergeolide in P. sprucei root and stem infusions based on reversed-phase high performance liquid chromatography (HPLC) and ultraviolet detection (UV).
Reagents and solvents
Acetonitrile, HPLC grade, was purchased from Mallinckrodt Baker, Inc. (Xalostoc, Mexico). The water used in all experiments was purified on a Milli-Q Plus System (Millipore, Bedford, MA, USA).
Preparation of root and stem infusions
P. sprucei infusions were prepared based on a popular recipe which is used to provoke spontaneous abortions and with which toxic effects are associated, according to locals. Stems are the part most commonly used in these remedies. Shade-dried, ground root or stem (9.0 g) was placed in a beaker and boiling deionized water (1.0 L) was added. The beaker was covered and allowed to stand for 10 min. After this time, the contents of the beaker were filtered hot through filter paper in a funnel, which resulted in the root and stem infusions. A single infusion was prepared from powdered, dried roots and another from powdered stems obtained from mature plants.
Calculation of extractives
The infusion prepared as described above was evaporated to dryness using rotary evaporation under vacuum with a heated bath (< 50 °C), followed by freeze-drying. The resulting dry extract was weighed, divided by the mass of plant material used in the preparation of each infusion (9.0 g), and expressed as a percentage (w/w) of extractives.
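For clarity, the extractives calculation described above amounts to the simple ratio sketched below; the mass value in the example is a hypothetical placeholder, not a measured value from this study.

```python
def extractives_percent(dry_extract_mass_g, plant_material_mass_g=9.0):
    """Percentage (w/w) of extractives: dry extract mass over starting plant material."""
    return 100.0 * dry_extract_mass_g / plant_material_mass_g

# Hypothetical example: 0.46 g of freeze-dried extract obtained from 9.0 g of ground roots.
print(round(extractives_percent(0.46), 1))  # -> 5.1 (% w/w)
```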
Preparation of samples of infusions for HPLC analysis
Freeze-dried extracts were dissolved in water to yield final concentrations of stem and root extracts of 445 and 911 mg.L⁻¹, respectively.
Preparation of standard solutions of isobrucein B (1) and neosergeolide (2)
Stock solutions of 1 and 2 were prepared at 0.63 g.L⁻¹ and 0.50 g.L⁻¹, respectively, in methanol. Calibration standards were obtained by appropriate dilution of the stock solutions with methanol. For 1, the concentrations used in calibration were 100, 50, 25, 10 and 5.0 mg.L⁻¹. For 2, the concentrations used in calibration were 20, 10, 5.0 and 2.5 mg.L⁻¹. All standard solutions were stored at -20 °C until analysis and protected from light, remaining stable for at least three months.
Apparatus and chromatographic conditions
The liquid chromatography system consisted of a Shimadzu LC-10, with an SPD-10A UV detector, LC-10AVp quaternary pump, SIL-10A autosampler and a CBM-10A system controller (Kyoto, Japan). A Supelcosil LC-18 analytical column (250 mm × 4.6 mm i.d., 5 μm particle size) from Supelco (Bellefonte, PA, USA) was used for the separation of 1 and 2. The mobile phase consisted of a gradient of acetonitrile : 0.05% aqueous trifluoroacetic acid delivered at 1.0 mL.min⁻¹ as follows: initial (ti = 0 min) 10:90, then a linear gradient over 20 min to 25:75, with this composition maintained (isocratic) until the end of each run (tf = 30 min). Quantification was performed with the detector set at a wavelength of 254 nm. The injection volume was 50 μL.
Analysis of Infusions by HPLC-UV and calibration curve
Chromatograms of pure 1 and 2 presented retention times of approximately 14 and 25 min, respectively. The peaks corresponding to 1 and 2 were identified in each chromatogram of the infusions with the help of injection of the standard solutions of 1 and 2 or by coelution (Figure 3). Several injections of each standard solution were performed and average areas were then calculated for each individual concentration injected of isobrucein B (1) and neosergeolide (2). The calibration curves used in the determination of 1 and 2 in P. sprucei stem and root infusions (Figures 4A and 4B, respectively) were obtained by linear regression of the average areas (Y) against the standard sample concentrations (X) at 254 nm.
After calibration with standard samples of isobrucein B and neosergeolide, P. sprucei root and stem infusions were analyzed. Samples of infusions were analyzed in triplicate and the average peak areas corresponding to the quassinoids neosergeolide and isobrucein B were calculated. From these average areas, the concentration of each quassinoid in the root and stem infusions was calculated using the linear equation generated during calibration of each quassinoid.
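The calibration and back-calculation steps described above can be expressed compactly as an ordinary least-squares fit followed by inversion of the fitted line. The sketch below is illustrative only; the peak-area values are invented placeholders, not the data behind Table 3.

```python
import numpy as np

# Calibration standards for isobrucein B (1): concentrations (mg/L) and mean peak areas.
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])          # known standard concentrations
area = np.array([1.2e4, 2.4e4, 6.1e4, 1.22e5, 2.45e5])   # hypothetical mean areas at 254 nm

# Linear regression: area = slope * concentration + intercept.
slope, intercept = np.polyfit(conc, area, deg=1)

def concentration_from_area(mean_area, dilution_factor=1.0):
    """Back-calculate the analyte concentration in the injected sample (mg/L)."""
    return dilution_factor * (mean_area - intercept) / slope

# Triplicate injections of an infusion sample (hypothetical areas).
sample_areas = np.array([8.9e4, 9.1e4, 9.0e4])
print(concentration_from_area(sample_areas.mean()))
```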
Results and discussion
The quassinoids isolated from P. sprucei were identified by NMR techniques and compared to the literature (Moretti et al. 1982, Vieira et al. 2000). The ¹H and ¹³C NMR chemical shifts of 1 and 2 are presented in Tables 1 and 2, respectively. The authenticity of standards is a key step in quantitative analysis, especially in plant extract analysis. In most cases, authentic standards are not available commercially, and this strengthens the importance of liquid chromatography. Liquid chromatography enables the isolation of authentic standards at different scales (from the microgram to the gram scale) and at very high purity, which can then be used to perform quantitative analysis. In our study, combining open-column and planar chromatography, we were able to isolate several milligrams of each pure standard, as can be observed in Figure 3, which were then used in the quantitative analysis of the quassinoids 1 and 2 in root and stem infusions of P. sprucei by HPLC.
The structural authenticity of each standard can be confirmed by the use of modern spectroscopic techniques such as MS and NMR. Although these techniques are considered complementary, NMR is normally much more informative. For instance, in our study, HMBC experiments furnished conclusive evidence that neosergeolide, and not sergeolide, had been isolated.
As described in the experimental section, samples of stem and root infusions were prepared by infusing approximately 9 g of crushed, dried plant material with 1 L of boiling water. HPLC analysis of P. sprucei stem and root infusions resulted in the concentrations presented in Table 3.
Table 3. Concentrations of isobrucein B (1) and neosergeolide (2) in P. sprucei stem and root infusions determined by HPLC-UV at 254 nm.
Consistent with the data presented in Table 3, the concentrations of both 1 and 2 are at least twice as large in the root infusion as in the stem infusion. Interestingly, the percentage of extractives of roots during infusion (5.1%) is twice that of stems (2.5%), which would seem to be related to the greater concentration of these constituents in the root infusion.
Comparison of root and stem infusions shows that 1 is about 40 times as concentrated as 2, on a molar basis, in both stem and root teas. These data suggest that the more relevant active principle in the stem and root infusions analyzed is 1.
Conclusion
The HPLC analysis of infusions (aqueous extracts) of stems and roots of P. sprucei revealed quantities of isobrucein B about 40-fold higher than those of neosergeolide in both infusions. Considering this information and the in vitro activity of both compounds, it is very likely that isobrucein B plays a more important role in the antimalarial activity than neosergeolide.
More research is needed to describe seasonal, regional and specimen specific variation in P. sprucei quassinoid composition which should have a direct influence on the composition of stem and root infusions prepared from samples of different origins. Knowledge of the extent of these variations, especially as they influence quassinoid composition in infusions, is of fundamental importance given the valuable medicinal and dangerous toxic properties of these widely used Amazonian remedies.
High performance liquid chromatography has proved to be a powerful tool in plant extract analysis. The possibility of performing qualitative and quantitative analysis by HPLC enables the development of new phytotherapeutic products from the Amazonian biodiversity.
Acknowledgment
The authors wish to thank Dr. Wanderli Pedro Tadei and CNPq/FAPEAM - PRONEX, Rede Malária, for financial support for the publication of this chapter, Massuo Kato for use of the analytical HPLC apparatus, and Profs. Norberto P. Lopes and Valquiria P. Jabor for helpful comments regarding this manuscript.
|
2017-09-16T14:55:58.013Z
|
2012-03-16T00:00:00.000
|
{
"year": 2012,
"sha1": "a4f2d743a5d073d838494f62b7796856ae7daf87",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/32746",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a4f2d743a5d073d838494f62b7796856ae7daf87",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
}
|
215668656
|
pes2o/s2orc
|
v3-fos-license
|
Antibiotic resistance and molecular characterization of enteroaggregative Escherichia coli isolated from patients with diarrhea in the Eastern Province of Saudi Arabia
Aim To investigate the presence of enteroaggregative Escherichia coli (EAEC) in patients suffering from diarrhea by targeting the pCVD432 (pAA) gene using PCR. Methods Sixty-three non-duplicate isolates of E. coli were obtained from diarrheal cases in a teaching hospital in the Eastern Province of Saudi Arabia between May 2013 and July 2014. All E. coli strains were subjected to antibiotic susceptibility testing and polymerase chain reaction (PCR) for detection of virulence gene markers of EAEC. Results Of the 63 E. coli strains recovered from diarrheal cases, 35 (55.6%) tested positive for the pCVD432 gene and were identified as EAEC; the aggR gene was present in 19 (54.3%) of these strains. All strains positive for both the pCVD432 and aggR genes were classified as typical EAEC (tEAEC). EAEC strains showed resistance to tetracycline, ampicillin, nalidixic acid, trimethoprim-sulfamethoxazole, ciprofloxacin, streptomycin, norfloxacin, and piperacillin. Conclusion EAEC was detected for the first time among Saudi patients with diarrhea in this region of Saudi Arabia. The antibiotic resistance reported in this study is considered high among the isolated EAEC strains to antibiotics routinely prescribed in our area.
Introduction
Enteroaggregative Escherichia coli (EAEC) is among the most remarkable heterogeneous groups of emerging E. coli strains responsible for causing persistent watery diarrhea in children and adults worldwide [1]. EAEC was recognized as a diarrheal enteric pathogen due to its distinctive aggregative adherence (AA) pattern to HEp-2 cells in culture, as well as its ability to form a "stacked-brick" adherence pattern, which is mediated by a 60-MDa plasmid (pAA) [2]. The HEp-2 cell assay remains the gold standard technique for the confirmation of EAEC [3,4,5]. However, this diagnostic assay is not suitable for most laboratories worldwide because it is laborious, requires a cell culture setup, and needs to be carried out in reference research laboratories [6]. At the molecular level, the polymerase chain reaction (PCR) is the most suitable and reliable technique for identification of the virulence gene markers of EAEC in suspected diarrheal cases.
The pAA plasmid in EAEC encodes the AA fimbriae (AAF) genes I to IV [7]; the anti-aggregation protein dispersin (aap); the enteroaggregative heat-stable enterotoxin 1 (EAST-1, also known as astA) [8]; and the gene encoding the transcriptional activator of virulence genes (aggR) [9]. The aggR gene plays a significant role in the adherence and pathogenesis of EAEC, and any strain harboring the aggR gene is considered a typical EAEC (tEAEC) strain [9]. As a transcriptional activator, AggR promotes the expression of the plasmid-borne virulence factors [7]. Surveillance of antibiotic resistance in bacterial isolates is very important because of the continuous use of antibiotics in the treatment of enteropathogenic infections, with cephalosporins and fluoroquinolones being the most commonly used. Continuous surveillance and documentation of antibiogram resistance patterns will provide useful information for treatment and for controlling the spread of antibiotic resistance.
In Saudi Arabia, there is no information on the prevalence of EAEC and other diarrheagenic E. coli (DEC) pathotypes or on their antimicrobial resistance patterns. Therefore, the aims of this study were as follows: 1) to investigate the pAA plasmid gene in E. coli strains isolated from inpatients and outpatients with diarrhea using PCR; 2) to analyze the strains harboring the pAA plasmid for the prevalence of virulence genes associated with EAEC; and 3) to carry out antibiotic susceptibility testing to determine the resistance patterns of all E. coli strains, and to analyze all strains using enterobacterial repetitive intergenic consensus PCR (ERIC-PCR) to track clonal relationships.
Fecal samples strains
Sixty-three strains of E. coli were isolated and collected from child and adult patients with diarrheal episodes at King Fahd teaching hospital in Al-Khobar between May 2013 and July 2014. Among the 63 E. coli strains included in this study, 52 were isolated from adults and 11 from children. Data concerning the patients' age, gender, and month of isolation were recorded. The E. coli strains isolated from diarrheal specimens were identified using the VITEK 2 system (bioMérieux) and standard methods [10]. PCR was used to screen all E. coli strains for EAEC virulence gene markers.
EAEC assay by PCR
The E. coli strains obtained from patients with diarrheal episodes were subcultured on MacConkey agar. The lactose-fermenting Gram-negative colonies were tested using standard biochemical tests for confirmation of E. coli. The purified colonies from selective agar were cultured onto tryptic soy agar plates (TSA, Oxoid) and tested for indole production using API 20 E strips and other biochemical tests. Bacterial genomic DNA was extracted by the boiling method as described elsewhere [11]. Three colonies of each E. coli strain from the TSA plates were mixed with 300 μl of nuclease-free water in micro-centrifuge tubes and boiled in a water bath for 10 min. The boiled bacterial suspensions were centrifuged at 10,000 g for 5 min and the supernatants were used as DNA templates for PCR [11]. The primers used to examine the virulence markers are presented in Table 1. All E. coli strains were examined for the pCVD432 gene [12] using PCR. All strains positive for pCVD432 were screened for aggR [13], aap [14], astA [15], and aaf/II [16], and positive controls were included in each PCR run.
Molecular typing
The ERIC-PCR DNA fingerprinting method, with the repetitive primer sequence and amplification conditions described elsewhere [17,18], was used to analyze the E. coli strains for clonal relationships. The obtained ERIC fingerprint profiles were clustered using Dice coefficients and the unweighted pair group method with arithmetic mean (UPGMA) with a position tolerance of 1%. The ERIC agarose gel electrophoresis images of the DNA fingerprints were analyzed with the use of GelCompar II software (Applied Maths, Sint-Martens-Latem, Belgium).
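The clustering step described above can also be reproduced with standard scientific-Python tools once each lane is scored as a binary band-presence vector. This is a minimal sketch under that assumption; the toy matrix, the 90% similarity cutoff and the variable names are illustrative, not our GelCompar II settings.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = isolates, columns = presence (1) / absence (0) of each ERIC band position.
bands = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1, 1],
], dtype=bool)

# Dice dissimilarity (1 - Dice similarity coefficient) between every pair of fingerprints.
dist = pdist(bands, metric="dice")

# UPGMA corresponds to average-linkage hierarchical clustering.
tree = linkage(dist, method="average")

# Cut the dendrogram at 90% similarity (i.e. 0.10 dissimilarity) to define clusters.
clusters = fcluster(tree, t=0.10, criterion="distance")
print(clusters)
```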
Statistical analysis
A chi-square test was performed to analyze the difference in the proportions of antimicrobial resistance between EAEC and non-EAEC strains isolated from patients with diarrhea. All calculations were done using MedCalc software version 17.9.2 (MedCalc Software, BVBA, Ostend, Belgium).
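The per-antibiotic comparison of resistance proportions can be checked with a standard chi-square test on a 2×2 contingency table, as sketched below; the counts shown are hypothetical and do not reproduce the values behind Table 4.

```python
from scipy.stats import chi2_contingency

# Rows: EAEC (n=35) vs non-EAEC (n=28); columns: resistant vs non-resistant isolates
# for one antibiotic (hypothetical counts for illustration only).
table = [[24, 11],   # EAEC: resistant, non-resistant
         [14, 14]]   # non-EAEC: resistant, non-resistant

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```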
Results
Sixty-three patients were included in the present study; 11 (17.5%) samples were from children under 15 years of age and 52 (82.5%) were from adults aged 21-85 years. Of the 63 E. coli isolates examined by PCR, 35 (55.6%) harbored pCVD432 (pAA), and 82.9% of these infections occurred during the warmer months from May through October, as shown in Table 2. All E. coli isolates that tested positive for the pCVD432 gene were examined by PCR to detect the aggR, aap, aaf/II, and astA virulence genes of EAEC. Among the 35 isolates with the pCVD432 gene, only 19 (54.3%) were positive for the aggR gene and were classified as tEAEC. All 35 isolates were positive for the astA gene, the aap gene was found in 33 (94.3%), and no isolate was positive for the aaf/II gene, as shown in Tables 2 and 3. The most prevalent combinations of virulence factor profiles in the EAEC strains were (aap, astA) and (aap, aggR, astA), as shown in Table 3.
All 63 diarrheagenic E. coli (DEC) isolates analyzed using ERIC-PCR were typeable and displayed a unique genotypic pattern, as presented in Figures 1 and 2 (see also Figures S1, S2 and S3). Using the cluster cutoff method with a similarity cutoff value of 90%, six distinct clusters comprising 60 strains with closely related ERIC fingerprints were identified, with the exception of three strains (Non-EAEC6, EAEC15 and EAEC75) that each formed a single cluster (SC), as shown in Figure 1 (see also Figure S1). The DNA fingerprint similarities revealed by ERIC-PCR for all E. coli isolates were grouped and calculated according to the algorithm for the unweighted pair group method with arithmetic mean (UPGMA). As shown in Figure 2 (see also Figures S2 and S3 for full images), 12 tEAEC fingerprint profiles were grouped together in clusters 2 and 3, compared with the other clusters. Thirteen strains of tEAEC were isolated from patients' fecal specimens during 2013, while only three strains of tEAEC were isolated during 2014, as shown in Figure 1. Two strains of tEAEC (EAEC71 and EAEC74) isolated during June and September 2013 were grouped in cluster ET-6 with strains isolated during 2014. Of the 63 strains of E. coli, 21 (33.3%) with identical fingerprints were grouped in cluster 6, and most of these strains were isolated in June 2014.
The results of antimicrobial susceptibility testing of all 63 strains are presented in Table 4. EAEC strains showed the following high percentages of resistance: tetracycline, 68.6%; ampicillin, 60%; nalidixic acid, 60%; trimethoprim-sulfamethoxazole, 42.9%; ciprofloxacin, 40%; streptomycin, 40%; and norfloxacin, 37.1%. The highest percentages of resistance among the non-EAEC clinical isolates were to ampicillin, 64.3%; tetracycline, 50%; trimethoprim-sulfamethoxazole, 46.4%; cefotaxime, 28.6%; nalidixic acid, 28.6%; piperacillin, 28.6%; and cephalothin, 25%. All EAEC isolates were susceptible to amikacin, while all non-EAEC isolates were susceptible to amikacin and colistin. Overall, 68.3% of the tested EAEC and non-EAEC isolates were multidrug resistant (MDR), and EAEC showed greater resistance than the non-EAEC isolates. As presented in Table 4, most of the p-values were greater than 0.05, meaning that there was no statistically significant difference between the proportions. The only exception was cefepime in the sensitive (S) category; two other significant p-values were based on only a single case each and are not meaningful. The highest antibiotic resistance patterns were noticed among 68.6% of EAEC isolates, which were grouped together in clusters ET-2 and ET-3 when all 63 EAEC and non-EAEC isolates were typed using ERIC-PCR DNA fingerprinting, as shown in Figure 2.
Discussion
The EAEC pathotype is a subgroup of diarrheagenic E. coli (DEC) and is well known worldwide for causing severe diarrhea in infected children and adults [2]. Nevertheless, the pathogenicity of EAEC remains controversial, owing to the heterogeneity and uncertain clinical relevance of these E. coli strains [1].

A meta-analysis conducted by Huang et al. (2006) [20] reported that EAEC is responsible for causing acute and persistent diarrhea in different population groups in both developing and industrialized countries; in addition, that study confirmed the heterogeneity of EAEC strains and noted that few studies have reported on the role of emerging EAEC strains in acute diarrheal illness. The present study likewise documents that EAEC causes sporadic cases of diarrhea in the Eastern Province of Saudi Arabia. Globally, EAEC shows an alarming increase in resistance to a wide range of antibiotics [21] (Kong et al., 2015). In Saudi Arabia, no published study has reported on EAEC associated with diarrhea in terms of molecular characterization and antimicrobial resistance. This is the first study to report the detection of virulence factors and a high rate of antibiotic resistance among EAEC strains isolated from Saudi patients with diarrhea, and it provides insight into the current DEC situation.
In this study, 63 fecal specimens from outpatients and inpatients with diarrhea were examined for EAEC. Thirty-five (55.6%) were positive for the pCVD432 gene and were identified as EAEC. The DNA probe pCVD432 was developed from the 60-MDa plasmid pAA to detect EAEC using PCR [22]. Several studies have reported a high specificity of the pCVD432 probe for the identification of EAEC [22,23,24,25]. In this study, the HEp-2 cell adherence assay was not used for confirmation of EAEC because this assay is currently performed only in reference research laboratories and is laborious to perform. Instead, the PCR assay targeting the pCVD432 gene was used, as it is a sensitive and reliable molecular assay for the identification of EAEC. Several epidemiological studies have shown that strains that are positive for the pCVD432 gene and harbor the aggR regulon are characterized as tEAEC pathogens [26]. In this study, 54.3% of the strains were identified as tEAEC, and they showed varying profiles of the other virulence genes. Different combinations of virulence gene markers were detected in the examined EAEC strains, as presented in Table 3. Among the virulence gene markers, the astA gene was detected in all 63 strains of EAEC and non-EAEC. A recent, similar study in China reported a high prevalence of the astA virulence gene, with a detection rate of 88% among EAEC strains isolated from clinical fecal specimens [27]. However, our results disagree with some other studies, which reported that the astA gene is rarely detectable in EAEC strains [24,28]. The astA gene encodes the EAST-1 toxin, which is responsible for causing diarrhea and chloride secretion [29], and it has been found to be important in the development of prolonged acute and persistent diarrhea [30]. Although EAST-1 is associated with EAEC, it has also been detected in enteropathogenic E. coli (EPEC), enterotoxigenic E. coli (ETEC) and enterohaemorrhagic E. coli (EHEC) strains [28]. In this study, 19 (54.3%) tEAEC strains were positive for the combination of three virulence genes (aap, aggR, and astA), and most of these strains were isolated from adult patients, as shown in Table 3. Our results agree with those of Huang et al. (2006) [20], who reported that EAEC strains harboring the aggR, aap, and astA virulence genes are responsible for causing acute diarrhea in adults.
The adherence of EAEC to the intestinal epithelium is the first step in gut colonization and requires fimbrial structures known as aggregative adherence fimbriae (AAF) [31,32]. There are four major variants of the AAF pilin subunits (AAF/I to AAF/IV), and all of them are regulated by the transcriptional activator aggR, which is situated on the EAEC virulence plasmid pAA [16]. In the present study, the aaf/II gene was not detected in any of the EAEC isolates. This result is consistent with other published epidemiological studies, which reported that most EAEC strains do not express any of the four known AAF variants [33,34]. This suggests that strains of EAEC with a "stacked-brick" AA pattern that lack AAF might have other adhesion mechanisms [2]. Boisen et al. (2008) [16] demonstrated a novel aggregative adhesion pilin, regulated by the aggR regulon, in EAEC strains lacking a known AAF, which might be distantly related to the adhesins of the Dr family. Several studies have suggested that additional, undiscovered AAF variants may exist and need to be explored [16]. From this study and other results reported worldwide, and because of the heterogeneity of EAEC, no specific common combination of virulence factors has been found that identifies all EAEC strains using the PCR assay [4,24,35]; the pathogenicity of these bacteria thus remains controversial [1]. ERIC is a useful chromosomal DNA fingerprinting PCR-based technique for the molecular epidemiological characterization of several bacterial pathogens of medical importance, and the obtained ERIC patterns are easy to evaluate. In this study, ERIC-PCR typing revealed six cluster patterns, as shown in Figure 2 and Table 3. Our results indicate that ERIC is useful as a molecular tool for typing microbial outbreaks. We detected high levels of antibiotic-resistant EAEC strains isolated from Saudi patients with diarrhea; the percentages of MDR patterns among EAEC and non-EAEC isolates were 68.6% and 67.9%, respectively, as presented in Figure 2 and Table 4. Overall, EAEC isolates showed greater antibiotic resistance than non-EAEC isolates. Knowledge of the antimicrobial susceptibility of enteric bacteria frequently associated with diarrhea, such as EAEC and other E. coli pathotypes, will help provide useful information for treatment. Our study documented a high level of quinolone resistance among EAEC isolates towards nalidixic acid (60%) and ciprofloxacin (40%) (Table 4). These results are in agreement with a recent study performed in Denmark, which reported 34% ciprofloxacin resistance among EAEC causing diarrhea in adult patients [30].
In this study, the investigated EAEC strains showed a high rate of resistance to most of the commonly used antibiotics, such as tetracycline (68.6%), ampicillin (60%), nalidixic acid (60%), trimethoprim-sulfamethoxazole (42.9%), streptomycin (40%), ciprofloxacin (40%), and norfloxacin (37.1%). Additionally, 68.3% of EAEC and DEC strains showed MDR patterns, with resistance to more than three antibiotics, as shown in Figure 1. These results are consistent with the findings of similar studies that reported a recent dramatic increase in resistance to most of the antibiotics commonly used to treat EAEC [30,36,37,38].
Conclusion
The high number of MDR EAEC strains detected in our study is a cause for concern, and therefore the establishment of long-term surveillance programs is required to track changes in the spectrum of antimicrobial resistance patterns of EAEC in Saudi Arabia. In this study, resistance of EAEC strains to ampicillin (60%), nalidixic acid (60%), trimethoprim-sulfamethoxazole (42.9%), and ciprofloxacin (40%) was extremely high in Saudi patients with diarrhea. Therefore, antibiotics should be prescribed only for patients with severe and persistent diarrhea. In conclusion, the findings of this study confirm that EAEC was detected in children and adult patients with diarrhea and highlight the importance of continuous monitoring of antibiotic resistance in EAEC and other DEC pathotypes.
Declarations
Author contribution statement Nasreldin Elhadi: Conceived and designed the experiments; Performed the experiments; Wrote the paper.
|
2020-04-09T09:18:49.337Z
|
2020-04-01T00:00:00.000
|
{
"year": 2020,
"sha1": "eb97d663411c6460de34ad89143fc56e3f50e681",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844020305661/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5d0092b949e9d75cab1984cf0bc6ccbf4ab7abf8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
157810153
|
pes2o/s2orc
|
v3-fos-license
|
Fluctuation of USA Gold Price - Revisited with Chaos-based Complex Network Method
We give emphasis to the use of a chaos-based, rigorous nonlinear technique called Visibility Graph Analysis to study one economic time series - the gold price of the USA. This method can offer reliable results with finite data. This paper reports the result of such an analysis on the time series depicting the fluctuation of the gold price of the USA over the span of 25 years (1990 - 2013). This analysis reveals that a quantitative parameter from the theory can explain satisfactorily the real-life nature of the fluctuation of the USA gold price, and hence builds a strong database in terms of a quantitative parameter which can eventually be used for forecasting purposes.
Introduction
In the modern science of finance, applications of financial physics have developed rapidly in recent years. Many studies have found financial time series to exhibit non-linear properties such as long memory in volatility [1,2,3,4], a multi-fractal nature [5,6,7,8,9,10,11], and fat tails [12,13,14,15]. Various methods have been adopted to extract the empirical multi-fractal properties of financial data sets, for instance the Wavelet Transform Modulus Maxima (WTMM) [16,17,7] and Multi-fractal Detrended Fluctuation Analysis (MF-DFA) [18].
Self-similar processes such as fractional Brownian motion (fBm) are currently used to model fractal phenomena of different natures, ranging from physics and biology to economics and engineering. fBm has been used in models of electronic de-localization, as a theoretical framework to analyze turbulence data, to describe geologic properties, to quantify correlations in DNA base sequences, to characterize physiological signals such as ECG and EEG, to model network traffic, and even to categorize music signals emotion-wise. Fractional Brownian motion B_H(t) is a non-stationary random process with stationary self-similar increments (fractional Gaussian noise, fGn) that can be characterized by the Hurst exponent H, where 0 < H < 1. One-step-memory Brownian motion is obtained for H = 1/2, whereas a time series with H > 1/2 shows persistence and one with H < 1/2 shows anti-persistence.
Although the Hurst exponent has been used extensively for financial analysis and has successfully detected long-range correlations in financial time series, its computation is still a problem. The main issue is the effect of finite data length on the estimation of the Hurst exponent. The Hurst exponent yields the most precise and accurate results for random processes, such as Brownian-motion time series, with an infinite number of data points. But in real-life situations we use finite time series to estimate the Hurst exponent: long-range correlations in the time series are partially broken in a finite series, and the local dynamics corresponding to a particular temporal window are overestimated. Hence, the Hurst exponent calculated for real financial data inevitably deviates from its true value.
It is common practice to use the MF-DFA technique for this type of analysis due to its obvious advantage of having the highest precision in scaling analysis. However, as discussed earlier, this method suffers from one lacuna: the theory demands that the length of the time series to be analysed be infinite, whereas in real life this time series is always finite. In this regard, a radically different rigorous method - visibility network analysis - has been reported by Lacasa et al. [19,20]. Recently this method has been used extensively on finite time-series data sets and has produced reliable results in several domains of science and social science.
Lacasa et al. [19,20] introduced the visibility algorithm, based on graph-theoretical techniques. A visibility graph is obtained from the mapping of a time series into a network. As already mentioned, the advantage of the visibility graph technique is that it gives a more accurate estimate of the Hurst exponent compared to the other method (MF-DFA), since the MF-DFA theory demands that the calculation be done on an infinite series, whereas in practice it is done on a finite time series, resulting in an inaccurate estimation of the Hurst exponent. The visibility graph method is well suited to the analysis of finite time series (the real-life situation). The reliability of this novel methodology has been confirmed with exhaustive numerical simulations as well as with analytical developments [19,20].
It has been found from empirical studies that during financial deregulation the stock markets of a country become sensitive to both domestic and peripheral financial factors. One such factor is the gold price. Gold has been used as money and as a relative standard for currency throughout history. Globally, the prices of gold and stocks tend to fluctuate in opposite directions. People decrease their investment in gold when its price is low and increase their investment in stocks, and this inflow of investment pushes stock prices up. Also, when stock prices are low, people invest more in gold while waiting for the crisis to fade away; this again increases the demand for gold and in turn its price. In effect, gold is a substitute investment option for investors. When the gold price is in a rising trend, investors invest in gold and reduce their investment in the stock market, which makes stock prices fall. Hence we can expect a negative relationship between gold and stock prices [21]. Gaur et al. [22] also documented historical evidence of the simultaneous fluctuation of gold prices and stock prices in India. They also concluded that when the stock market crashes or when the dollar weakens, gold continues to be a safe-haven investment because gold prices rise in such circumstances.
Recently, a few works have reported interesting attempts to study the fluctuation of the USA gold price as well as the Indian stock market (BSE) using the MF-DFA technique. However, as noted above, due to the small sample size the results may not be as reliable as expected from the theory. Yu Long et al. [23] have also mapped the gold price time series into a visibility graph network, explored the mechanism underlying gold price fluctuation from the perspective of complex network theory, and analysed the nature of the gold price fluctuation. In view of this, in the present investigation we propose to perform an analysis using the visibility graph technique with the prime objectives of: 1. Studying the fluctuation pattern of the USA gold price.
2. Analysing the scope of application of quantitative visibility graph analysis as a precursor of financial crisis, of course with proper validation.
The rest of the paper is organized as follows. The method of analysis is explained in Section 2, and the details of the data are elaborated in Section 3. The results are analysed and the inferences from the test results are presented in Section 4. Finally, the paper is concluded in Section 5.
Method of analysis
We briefly describe the visibility graph technique in this section.
Visibility Graph Algorithm
The visibility graph algorithm maps a time series X to its visibility graph. Consider the i-th point of the time series, X_i. Two vertices (nodes) of the graph, X_m and X_n, are connected via a bidirectional edge if and only if the following condition holds:

X_(m+j) < X_n + (X_m - X_n) * (n - (m + j)) / (n - m),  for all j ∈ Z+ with j < (n - m).   (1)

As shown in Fig. 1, X_m and X_n can see each other if Eq. 1 is satisfied. As per the VG algorithm, two sequential points of the time series can always see each other, hence all sequential nodes are connected.
Note: the time series should be shifted to the positive plane, as the above algorithm is valid for positive X values in the time series.
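A direct implementation of the visibility criterion in Eq. 1 is short enough to state explicitly. The sketch below builds the undirected natural visibility graph of a series by the straightforward O(N²) pairwise check; the function name and the toy series are ours, for illustration only, and are not part of the original study.

```python
def natural_visibility_edges(x):
    """Return the edge list of the natural visibility graph of the series x.

    Nodes m and n (m < n) are linked if every intermediate point lies strictly
    below the straight line joining (m, x[m]) and (n, x[n])."""
    n_points = len(x)
    edges = []
    for m in range(n_points - 1):
        for n in range(m + 1, n_points):
            visible = all(
                x[c] < x[n] + (x[m] - x[n]) * (n - c) / (n - m)
                for c in range(m + 1, n)
            )
            if visible:
                edges.append((m, n))
    return edges

# Toy example on a short positive series (consecutive points are always connected).
series = [3.0, 1.5, 2.2, 4.1, 0.9, 2.8]
print(natural_visibility_edges(series))
```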
Power of Scale-freeness of VG -PSVG
The degree of a node in a graph - here the VG - is the number of connections or edges the node has to other nodes. The degree distribution P(k) of a network is then defined as the fraction of nodes with degree k in the network. Thus, if there are n nodes in total in a network and n_k of them have degree k, we have P(k) = n_k/n.
A power law is a functional relationship between two quantities, where one quantity varies as a power of the other. The scale-freeness property of the visibility graph states that the degree distribution of its nodes follows a power law: P(k) ∼ k^(-λ_p), where λ_p is a constant called the power of the scale-freeness.
As per Lacasa et al. [19,20], the power of the scale-free structure (λ_p) of the VG corresponds to the amount of fractality of the time series, and the slope of log2[P(k)] versus log2[1/k] indicates the fractal dimension (FD) of the signal. This value of the slope, known as the Power of Scale-freeness in Visibility Graph (PSVG), serves as a measure of the complexity and fractality of the time series. PSVG is denoted by λ_p here. As the trend of P(k) with respect to k conforms to a power law, the PSVG is calculated for the graph from the slope of log2[P(k)] versus log2[1/k], as shown in Fig. 3.
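Given the visibility graph, the PSVG follows from a least-squares fit of log2 P(k) against log2(1/k), as sketched below. The degree counting reuses the edge-list representation from the previous sketch; everything here is an illustrative sketch, not the exact fitting procedure used for Table 1.

```python
import numpy as np
from collections import Counter

def psvg(edges, n_nodes):
    """Estimate the Power of Scale-freeness in Visibility Graph (PSVG).

    Fits the slope of log2 P(k) versus log2 (1/k), where P(k) = n_k / n."""
    degree = Counter()
    for m, n in edges:
        degree[m] += 1
        degree[n] += 1
    counts = Counter(degree[v] for v in range(n_nodes))
    ks = np.array(sorted(k for k in counts if k > 0))
    pk = np.array([counts[k] / n_nodes for k in ks])
    slope, _ = np.polyfit(np.log2(1.0 / ks), np.log2(pk), deg=1)
    return slope

# Toy example: degree distribution of a small visibility graph given as an edge list.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
print(psvg(edges, n_nodes=6))
```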
Our analysis
For the USA gold price fluctuation the slope is denoted by λ_pg. We then analysed the trend of λ_pg across the whole dataset and drew inferences from it.
Results
Fig. 5 shows the year-span-wise comparison of the PSVG values for the USA gold price fluctuation (PSVG = λ_pg). Table 1 shows the values of λ_pg calculated for the fluctuation of the USA gold price.
The results of our analysis of the values of λ_pg from Table 1 and Fig. 5 (year-span versus PSVG λ_pg values) lead to the following observations.

2. If one examines the real scenario of the USA gold price during the period under investigation, it is observed that the gold price (in $/oz) was at its minimum during the period 1998-2001 and the fluctuation of the price was also insignificant. The obvious interpretation is that a high value of PSVG corresponds to a low rate of fluctuation of the gold price as well as to a low price itself.
The same behaviour is also seen when one compares the PSVG values with the gold price during the periods 2006-2009 and 2010-2013, when the PSVG assumes low values, including the minimum one. The gold price (in $/oz) during those periods was not only significantly high; its fluctuation rate was also on the higher side.
3. Thus this analysis clearly manifests one important and significant piece of information: an inverse relationship between the PSVG values and the USA gold price (in $/oz), including its fluctuation.
4. This remarkable agreement between the PSVG values and the real-life scenario prompts us to propose that the PSVG parameter may be used as a reliable precursor of instability in the financial market. We are further encouraged to suggest that more data be analysed and that the trend of the PSVG parameter of the gold price be studied continuously and in detail to forecast financial crises, since it is globally accepted that the prices of gold and stocks have a negative relationship [21]. Validation can then easily be done with the help of the real-life scenario as and when it becomes available.
This observation encourages further refinement of the analysis, including more data from other countries and a further subdivision of the duration of analysis.
Conclusion
Since the visibility graph technique gives reliable results even with very short and finite time series, we have revisited the study of the USA gold price fluctuation to extract reliable results in terms of quantitative parameters, for the first time as far as we know. To conclude, we may highlight the importance of the present investigation as follows: 1. Our analysis manifests satisfactory agreement between the PSVG parameter and the real-life scenario of the USA gold price fluctuation.
2. We may extend the analysis with detailed data on the gold prices of different countries.
This study encourages further similar analysis of different financial series using this technique, which may yield an optimal methodology for tackling risk management in investment. Finally, we emphasize that chaos-based random fractal techniques may prove to be a more appropriate method for analyzing financial time series, with a possibility of forecasting.
|
2016-08-03T01:05:35.000Z
|
2016-08-03T00:00:00.000
|
{
"year": 2016,
"sha1": "c424af84ebe56c296012b68de1c2b2721015eb77",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c424af84ebe56c296012b68de1c2b2721015eb77",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics",
"Physics"
]
}
|
119063219
|
pes2o/s2orc
|
v3-fos-license
|
Fluctuations and topological transitions of quantum Hall stripes: nematics as anisotropic hexatics
We study fluctuations and topological melting transitions of quantum Hall stripes near half-filling of intermediate Landau levels. Taking the stripe state to be an anisotropic Wigner crystal (AWC) allows us to identify the quantum Hall nematic state conjectured in previous studies of the 2D electron gas as an anisotropic hexatic. The transition temperature from the AWC to the quantum Hall nematic state is explicitly calculated, and a tentative phase diagram for the 2D electron gas near half-filling is suggested.
Introduction -Following theoretical predictions by Koulakov et al. [1] that the ground state of the 2D electron gas near half filling of intermediate Landau levels (LLs), with index N ≥ 2, is a striped state, and the subsequent experimental observation by Lilly et al. [2] of strongly anisotropic dc resistivities in the above mentioned range of fillings, it has been suggested [3,4] that the striped ground state of a two-dimensional electron gas (2DEG) at low temperature may be viewed as a "quantum Hall smectic" (QHS), consisting of a weakly coupled stack of one-dimensional Luttinger liquids. This is a state that would only be stable at zero temperature, and which, by analogy with conventional liquid crystals [5], would give way through the proliferation of dislocations (see panels (a) and (b) of Fig. 1) to a "nematic" state at nonzero temperatures [6], in which translational symmetry is restored but rotational symmetry is still broken. This electronic "nematic" would then undergo a disclination unbinding transition into a fully isotropic fluid as temperature is raised above a critical temperature which has been estimated [7,8] following standard Kosterlitz-Thouless (KT) arguments [5].
In this paper, we want to examine an alternative picture, in which the ground state of the 2DEG near half filling of intermediate LLs is taken to be an anisotropic Wigner crystal (AWC), as suggested by Hartree-Fock (HF) [9,10] and renormalization group (RG) [11] calculations. In this case, we find that dislocations melt the AWC at a nonzero temperature (that we shall explicitly evaluate below) into a "nematic" state with quasi-long-range orientational correlations, which we argue is nothing more than an anisotropic hexatic. Our results for the melting temperature of the AWC are consistent with experiments and with the idea of quantum Hall "nematics".
Fluctuations of quantum Hall Wigner crystals - In what follows, we shall be interested in the elastic fluctuations of quantum Hall Wigner crystals. To fix ideas, we shall focus on the AWC which was found to minimize the cohesive energy of the 2DEG near half filling in Ref. 10, and which is described by the lattice vectors R_n1n2 = n1 a1 + n2 a2, where a1 = 2α ŷ and a2 = α ŷ + β x̂, with α = (a/2)√(1 - ε) and β = (√3 a/2)/√(1 - ε) (n1 and n2 being integers). In these expressions, ε is a positive parameter such that 0 ≤ ε < 1 which quantifies the degree of anisotropy of the lattice at a given partial filling factor ν*, and which was determined through minimization of the cohesive energy of the system in Ref. 10; and a = ℓ(4π/√3ν*)^(1/2) is the average spacing of a hexagonal lattice with ε = 0 at the same value of ν*. The elastic properties of such an anisotropic crystal can be described by an elastic Hamiltonian of the form (α, β = x, y):

H_el = (1/2) ∫ d²r C_αβγδ u_αβ(r) u_γδ(r),

where u_αβ(r) = ½(∂_α u_β + ∂_β u_α) is the linear strain tensor (u(r) being the displacement field). For the particular case of a two-dimensional AWC, there are three compression moduli, c11 ≡ C1111, c22 ≡ C2222 and c12 ≡ C1122, and a single shear modulus c66 ≡ C1212.
The elastic fluctuations of the above AWC, taking into account the Lorentz-force dynamics imposed by the external magnetic field, can be described by a Gaussian action built from a dynamical matrix D_αβ(q, ω_n) that combines the elastic matrix Φ_αβ(q) with the Lorentz term. In these expressions, ω_n = 2πn k_B T/ℏ is a Matsubara frequency (k_B being Boltzmann's constant and T being temperature), ρ_m is a mass density, ω_c is the cyclotron frequency, and ε_αβ is the two-dimensional version of the antisymmetric Levi-Civita tensor; the elastic matrix Φ_αβ(q) has matrix elements built from the compression and shear moduli. From the Gaussian action we can easily derive the two-point correlation function ⟨u_α(q, ω_n) u_β(q', ω_l)⟩, where δ_n,l is the Kronecker symbol and the propagator G_αβ is obtained by summing over repeated indices. In real space, the mean squared displacement ⟨u_α(r, τ) u_β(r, τ)⟩ can be written in terms of this propagator. Fluctuations of the quantum system at finite temperatures can be described using the partition function Z_cl = ∫[du(r)] e^(-H̃/k_B T), with an effective classical Hamiltonian H̃. Knowledge of the effective propagator G̃(q) allows us to study the effect of Lorentz-force dynamics on the elastic properties (and hence on possible topological transitions) of the system. Calculating the inverse effective propagator G̃⁻¹(q) and expanding the resulting expression near q = 0, we find that, up to terms of order O(q²), the form of the elastic propagator is identical to its zero-field expression. This has the important consequence that the long-wavelength elastic properties of the AWC will be qualitatively the same as in the absence of a magnetic field. We therefore expect the topological melting of the AWC to proceed in the standard (two-stage) way [16], as we now describe. Topological melting of anisotropic Wigner crystals - A major difference between isotropic and anisotropic Wigner crystals in two dimensions is that, while in the former all six elementary dislocations, differing by the orientation of their Burgers vectors, are equivalent, in the latter two equivalent elementary dislocations (labeled type I) have their Burgers vectors along a reflection symmetry axis (i.e. along ±a1 in Fig. 1), while four dislocations, equivalent to each other but inequivalent to the first type, lie at angles of ±θ_0 from the reflection axis (θ_0 here is the angle between a1 and a2, see Fig. 1). At any nonzero temperature, the solid phase has a finite density of tightly bound dislocation pairs. As the temperature is raised past a critical temperature T_c1, the pairs unbind and destroy the crystalline order. Since the two types I and II of dislocations are inequivalent, the defect-mediated melting (DMM) process will be governed by the type which has the lower nucleation energy. We therefore need to determine the elastic constants of the AWC in order to find the energies of the two types of dislocations, so as to determine which dislocation type unbinds first.
For the AWC which is the object of study in this paper, the compression moduli $c_{ij}(\mathbf{q})$, $i,j = 1,2$, can be written in the form $c_{ij}(\mathbf{q}) = c(q) + \bar{c}_{ij}$, where we separated out the leading (plasmonic) contribution [15] $c(q) = (e^2/2\pi\kappa\ell^3)/(q\ell)$, $e$ being the electronic charge, $\ell$ the magnetic length, and $\kappa$ the dielectric constant of the host medium. For a one-dimensional compression of the form $\mathbf{u} = u_0 x\hat{\mathbf{x}}$, with the number of electrons $N_e$ in Landau level $N$ kept fixed, if we denote by $\nu^*_1 = \nu^*/(1+u_0)$ the partial filling factor of the compressed crystal, it can be shown [14] that the constant part $\bar{c}_{11}(\nu^*)$ can be expressed in terms of the HF cohesive energy per electron $G(\nu)$ (in units of $e^2/\kappa\ell$). The latter is given by a sum over reciprocal lattice vectors $\mathbf{Q}$ involving the guiding-center density operator $\rho(\mathbf{Q})$, which is determined self-consistently using the approach of Ref. 9, and which is related to the real density operator $n(\mathbf{Q})$ through a Landau-level form factor involving the generalized Laguerre polynomial $L^0_N(x)$ (here $N_\phi$ is the Landau level degeneracy). The Hartree and Fock interactions are given in Ref. 9 in terms of the Bessel function of order zero, $J_0(x)$. The finite contributions $\bar{c}_{22}$ and $\bar{c}_{12}$ to the compression moduli $c_{22}$ and $c_{12}$ can be obtained in a similar way by considering 1D and 2D uniform compressions of the form $\mathbf{u} = u_0 y\hat{\mathbf{y}}$ and $\mathbf{u} = u_0(x\hat{\mathbf{x}} + y\hat{\mathbf{y}})$, respectively. The results of these procedures, the details of which will be published elsewhere [14], are shown in Fig. 2, where we plot the constant parts $\bar{c}_{ij}$ ($i,j = 1,2$) of the compression moduli of the stripe crystal near half filling of LL $N = 2$.
Let us now introduce the compliance tensor $S_{ijkl}$ such that $S_{ijkl}C_{klmn} = \tfrac{1}{2}(\delta_{im}\delta_{jn} + \delta_{in}\delta_{jm})$, with the four independent elements $s_{11}(\mathbf{q}) \equiv S_{1111}$, $s_{22}(\mathbf{q}) \equiv S_{2222}$, $s_{12}(\mathbf{q}) = s_{21}(\mathbf{q}) \equiv S_{1122}$, and $s_{66} \equiv S_{1212}$, which take well-defined limiting values in the long-wavelength limit. In terms of these compliances, we find that the leading (logarithmically divergent) contribution to the energy of a dislocation of type $\alpha$ ($\alpha =$ I or II) is controlled by a stiffness constant $K_\alpha$, whose expression in terms of the long-wavelength compliances follows Ref. 16. For the problem at hand, $s_{11} = s_{22}$ in the long-wavelength limit, and hence we see that the ratio $K_{II}/K_I$ is always larger than unity for $0 < \varepsilon < 1$. We thus see that dislocations of type I are energetically less costly than type II dislocations, and will unbind at the melting temperature $T_{c1}$ such that [16] $K_I a^2/k_B T_{c1} = 4$ (the value 4 being universal), from which we obtain the melting temperature $T_{c1} = K_I a^2/4k_B$. The resulting melting line of the AWC is plotted in Fig. 3. For temperatures higher than $T_{c1}$, the presence of type I dislocations screens the logarithmic interaction between type II dislocations, such that both dislocation types are free at long length scales [16]. We expect, however, that the average separation $\xi_{II}$ of type II dislocations will be much larger than the average separation $\xi_I$ between type I dislocations, with a relation between the two [16] that involves the reduced temperature $t \propto (T - T_{c1})$ and a nonuniversal exponent $p$ between 0 and 2. Given that type I dislocations destroy translational order mainly along the direction of the stripes, we see that we can distinguish between three different regimes (see Fig. 4). At length scales shorter than $\xi_I$, the system retains the properties of a solid. At intermediate scales, $\xi_I < L < \xi_{II}$, the system is smectic-like, and consists of a regular stack of 1D channels of electron guiding centers, with short-ranged translational order along the channels, and quasi-long-ranged order in the transverse direction. Finally, at length scales longer than $\xi_{II}$, the system is nematic-like: translational order is destroyed in all directions, but the system preserves quasi-long-range orientational order. The latter is described by the bond-angle field $\theta(\mathbf{r})$, which is defined as the orientation relative to some fixed reference axis of the bond between two neighboring electron guiding centers. Standard analysis shows that the fluctuations of $\theta(\mathbf{r})$ are governed by an effective Hamiltonian quadratic in the gradients of $\theta(\mathbf{r})$, so that orientational correlations decay only algebraically with distance. Since on short length scales the AWC is only weakly disturbed and each electron is still surrounded on average by six neighbors, the resulting quantum Hall "nematic" state may be more accurately characterized as an anisotropic hexatic.
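As a rough numerical illustration of the unbinding criterion $K_I a^2/k_B T_{c1} = 4$, the melting temperature can be evaluated as below; the stiffness and lattice-constant values are placeholders, not the Hartree-Fock results plotted in Fig. 3.

```python
# Sketch (illustrative only): type-I dislocation-unbinding estimate
# T_c1 = K_I * a**2 / (4 * k_B), following the criterion quoted in the text.
from scipy.constants import k as k_B  # Boltzmann constant

def melting_temperature(K_I, a):
    """Melting temperature from K_I * a^2 / (k_B * T_c1) = 4."""
    return K_I * a**2 / (4.0 * k_B)

# Placeholder numbers: an effective dislocation stiffness of 2e-9 J/m^2 and a
# lattice constant of 50 nm give T_c1 of about 0.09 K (tens of mK).
print(melting_temperature(K_I=2e-9, a=50e-9))
```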
As the temperature is further raised, a disclination-unbinding transition melts this nematic-like state into an isotropic metallic state, in much the same way as described in Ref. 16, with actual values of the nematic-to-isotropic melting temperature $T_{c2}$ of the order of those estimated in Ref. 8 (in this last reference, $T_{c2} \approx 200$ mK near half filling of LL $N = 2$). Since the temperature ($\sim 25$ mK) at which the experiments of Ref. 2 were performed lies between $T_{c1}$ and $T_{c2}$, we see that our HF calculation is consistent with the conjecture [3,4] according to which the state probed by these experiments is a nematic state.
Conclusion -To summarize, in this paper we have examined the fluctuations and topological transitions of quantum Hall stripes near half-filling of intermediate Landau levels. Taking the stripe state to be an anisotropic Wigner crystal, as suggested by Hartree-Fock and renormalization group calculations, we find that the quantum Hall nematic conjectured in Refs. [3,4] emerges in a natural way in the topological melting process, and is identified as an anisotropic hexatic. Our calculations are consistent with the idea of quantum Hall nematics, which we predict to be realized over a significant region of the phase diagram near half filling of intermediate Landau levels, and give quantitative support to the qualitative interpretations [3,4] of transport measurements [2] in terms of putative nematic states.
|
2019-04-14T02:12:52.030Z
|
2006-11-24T00:00:00.000
|
{
"year": 2006,
"sha1": "0dfbb2e976f4999099ef78d8b5a57a57da8fceee",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0611638",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e4dd465f5a95c1606bec1731e8773a3e1add0d5b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
214377903
|
pes2o/s2orc
|
v3-fos-license
|
Numerical study of a Solar Absorption Refrigeration Machine
In this paper, we present a numerical study of a single-stage absorption refrigeration machine, operating with the water-ammonia fluid pair, equipped with a distillation column and associated with a solar heating system using solar collectors. The study has shown the benefit of using the distillation column, which is manifested by: a decrease of the operating temperature, an improvement of the coefficient of performance, a reduction of the solar collector surface area, and an improvement of the solar coefficient of performance. The solar study shows that the absorption refrigeration machine equipped with a distillation column is better suited to solar energy, with significantly better performance compared to the machine without a distillation column.
1 Introduction
Environmental pollution and the energy crisis have led in recent years to many studies on the rational use of energy [1], especially in the field of industrial refrigeration, which consumes a large share of energy for air conditioning [1,2]. Absorption refrigeration machines use a mixture of fluids (absorbent fluid and refrigerant fluid) for their operation. In order to select the best mixture of working fluids, several mixtures of CFCs (chlorofluorocarbons) with organic absorbers were studied [3][4][5][6]. But because CFCs have a destructive effect on atmospheric ozone, the mixture of water-ammonia fluids remains the best choice, especially for producing cold at temperatures below zero degrees Celsius, and also because of its low price [7][8][9][10]. Since the Montreal Protocol (1987), CFC and HCFC refrigerants have been progressively replaced (most often by HFCs) and the production of these fluids has been discontinued (CFCs) or severely restricted (HCFCs). The Kyoto Protocol (1997), however, targets these HFC-type refrigerants (which are classified as greenhouse gases) and therefore aims to reduce the fluid load in refrigeration plants. In addition, Morocco has a significant solar potential: annual solar irradiation is always above 20,000 kJ per m² of collector area. It is therefore important to exploit this free and non-polluting resource in the field of cold production, especially in remote rural areas. The market for solar absorption refrigeration installations is growing, and the quantity of energy used for cooling buildings has greatly increased in recent years [11][12][13][14][15][16][17][18][19][20]. The present work deals with the study of a solar absorption (water-ammonia) refrigeration machine equipped with a distillation column. We have developed a simulation program for the machine, based on the laws of conservation of mass and energy applied to each element of the machine, the actual operating conditions of the machine, the thermodynamic properties of the water-ammonia mixture, and real solar data from the Rabat site (Morocco). The results of the numerical simulation of the absorption machine with and without a distillation column show a gain of 30% on the coefficient of performance. The optimum operating temperature of the absorption machine at the generator has been reduced by twenty-five degrees, which influences the choice of appropriate collectors for the operation of the machine and subsequently makes it possible to use simple flat plate collectors, which are less expensive and available on the market, while the absorption machine without a distillation column requires for its operation evacuated tube collectors, which are excessively expensive and less available in the markets of developing countries.
2 Principle of operation of the solar-powered absorption machine
The main components of the solar absorption refrigeration machine are a solar thermal collector, an evaporator, an absorber, a generator, a distillation column, a condenser, an expansion valve, a heat exchanger and a pump. A simple diagram of the solar absorption refrigeration machine is shown in Figure 1. Two kinds of working medium are used at the same time in the refrigeration and absorption processes. In this machine the solar thermal collector collects heat by absorbing sunlight. The heat collected in the solar collector is transferred to the generator, where it is used for heating the ammonia-water solution and sending the vapour to the condenser. The remaining weak solution flows to the absorber through the heat exchanger, where heat is transferred to the strong solution. Liquid refrigerant from the condenser goes through an expansion valve, where its pressure is decreased; a cooling effect is then achieved by the vaporization of the refrigerant at a low temperature. Refrigerant vapor from the evaporator continues to the absorber and dissolves in the weak refrigerant solution, which becomes a stronger refrigerant solution, called the "rich solution". A pump is the only moving part in this system. The "rich solution" is pumped to the generator. At the generator, the rich solution is heated up and the refrigerant is separated from the solution. The refrigerant is vaporized and goes to the condenser, while the weak solution is passed through a heat exchanger and returned to the absorber to absorb the refrigerant vapour. The refrigeration process and the regeneration process operate simultaneously and continuously, producing a continuous cooling effect. A simple flat plate collector can maintain the operating condition at the generation temperature [21,22].

3 Mass and energy balances of the different elements
3.1 Distillation column
The balance equations of the distillation column comprise the conservation of total mass, the conservation of mass by element, and the enthalpy balance, together with the expression for the amount of heat supplied to the distillation column. The resolution of this system of equations allows us to deduce the expression of the quantity of heat supplied to the distillation column.
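For concreteness, a generic steady-state formulation of these three balances can be written as follows; the notation is illustrative and not taken from the paper ($\dot m$ for mass flow rates, $x$ for ammonia mass fractions, $h$ for specific enthalpies, $\dot Q_{dc}$ for the heat supplied to the column):

$$\sum_{\text{in}} \dot m_i = \sum_{\text{out}} \dot m_j, \qquad \sum_{\text{in}} \dot m_i x_i = \sum_{\text{out}} \dot m_j x_j, \qquad \sum_{\text{in}} \dot m_i h_i + \dot Q_{dc} = \sum_{\text{out}} \dot m_j h_j,$$

so that $\dot Q_{dc} = \sum_{\text{out}} \dot m_j h_j - \sum_{\text{in}} \dot m_i h_i$ once the flow rates and enthalpies at the column inlets and outlets are known.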
3.2 We can summarize the heat flow of the machine as follows
Using mass, energy and enthalpy balances on the various machine components, together with the operating hypotheses, we find the expression of the heat flux at each component of the machine [22]. The thermodynamic properties of the water-ammonia fluid pair (temperature, pressure, composition, enthalpy, entropy, specific volume, vapor and liquid titre) are obtained as follows: the equilibrium quantities (temperature, pressure, composition) are determined using the Peng-Robinson equation of state; the specific volume, entropy and enthalpy at each point of the machine cycle are calculated from the analytical expression of the Gibbs free energy given by Ziegler; and the high and low pressures are calculated by the Antoine formula. From these quantities we can determine the coefficient of performance (COP) of the absorption refrigeration machine [23][24][25][26][27][28][29].
The coefficient of performance of the absorption machine is then obtained from these heat flows. The efficiency of a collector depends on its temperature: the higher the temperature of a given collector, the greater its heat loss. In designing solar absorption refrigeration, the collector efficiency ($\eta_S$), defined as the ratio of the useful output ($Q_G$) to the incident solar power ($S \cdot I$), is very important. Taking into account the temperatures reached by solar collectors, we have selected evacuated tube collectors, whose efficiency is given by the Hottel-Whillier equation with the characteristics of the selected collectors.
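A minimal sketch (illustrative only, not the simulation program described above) of how the coefficient of performance, the collector efficiency and the required collector area combine; all symbols and numerical values below are assumptions.

```python
# Sketch: thermal COP of the absorption machine, collector sizing from the
# efficiency definition eta = Q_G / (S * I), and the resulting solar COP.

def cop_absorption(q_evap, q_gen, w_pump=0.0):
    """Thermal coefficient of performance: cooling effect over driving heat (+ pump work)."""
    return q_evap / (q_gen + w_pump)

def collector_area(q_gen, eta_collector, irradiance):
    """Area S such that eta = Q_G / (S * I), i.e. S = Q_G / (eta * I)."""
    return q_gen / (eta_collector * irradiance)

# Placeholder values: 1 kW of cooling, 1.6 kW of generator heat,
# collector efficiency 0.55, irradiance 800 W/m^2.
cop = cop_absorption(q_evap=1000.0, q_gen=1600.0)                            # ~0.63
area = collector_area(q_gen=1600.0, eta_collector=0.55, irradiance=800.0)    # ~3.6 m^2
cop_solar = cop * 0.55   # solar COP = machine COP * collector efficiency
print(cop, area, cop_solar)
```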
4 Test conditions in Morocco
The measurements used in this work were carried out at the solar energy and environment laboratory station of the Faculty of Sciences of Rabat (Figure 2).
The station is located in the city of Rabat, at latitude 34°02' North and longitude 6°51' West. It has been equipped with complementary instruments for measuring the spectral components.
The station provides three types of measurements: the usual solar components (global, direct and diffuse), the solar spectral components and the climatic variables. These measurements are managed by a data acquisition and storage system. The data acquisition program that we have developed can perform, for each solar component, flux measurements (in W/m²) every five seconds; it calculates the sum of the measurements made over one hour and finally records a mean hourly irradiation. Thus, each hourly average represents an energy expressed in Wh/m², since it corresponds to the product of an average flux in W/m² by one hour of integration [31,32].

5 Main refrigerants

Since the invention of absorption refrigeration machines, several refrigerants have been used; Table 2 shows some examples of working-fluid couples. It is noted that the most common is the H2O-LiBr couple and the oldest is the NH3-H2O couple. These couples are the best known.
6 Thermodynamic properties of the water-ammonia couple (NH3-H2O)
The determination of the thermodynamic properties of the water-ammonia solution (enthalpy, temperature, pressure, composition, entropy, specific volume, vapor and liquid titre) is based on equilibrium quantities (temperature, pressure, composition) determined using the Peng-Robinson equation of state. The specific volume, entropy and enthalpy at each point of the machine cycle are calculated using the analytical expression of the Gibbs free energy given by Ziegler, and the high and low pressures are calculated by the Antoine formula [33][34][35][36].
6.1 Analytical expressions of enthalpy, entropy and specific volume (NH3-H2O)
From all the available experimental data of the water-ammonia pair, and using a Virial equation of state for the vapor phase together with an equation of state for the liquid phase, Schultz gave analytic expressions of the Gibbs free energy as a function of temperature, pressure and titre. From these expressions of the Gibbs free energy and the specific heats Cp, all the other thermodynamic quantities can be determined by differentiation, notably the enthalpy, the entropy and the specific volume, which are the ones that interest us. To make the equations dimensionless, reduced variables are introduced, based on the reference constants T0, P0 and R.
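For reference, the differentiation relations alluded to here are the standard thermodynamic identities for the specific Gibbs free energy $G(T, P, x)$ of a mixture of fixed composition $x$ (textbook relations, not reproduced from Schultz's correlations):

$$v = \left(\frac{\partial G}{\partial P}\right)_{T,x}, \qquad s = -\left(\frac{\partial G}{\partial T}\right)_{P,x}, \qquad h = G + T s = G - T\left(\frac{\partial G}{\partial T}\right)_{P,x}.$$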
7 Results and discussion
We present in this part the results obtained from the simulations of the operation of the solar machine with and without a distillation column. The area of the solar collectors required to produce a cooling effect of 1 kW is shown in Figure 3. The solar coefficient of performance (COPS) in terms of Ts is illustrated in Figure 4. It can be seen from these curves that the use of a distillation column reduces the area of the solar collectors by 30%.
It is noted that the curves shown in Figure 4 have the same shape, except that the maximum is slightly shifted towards the low values of Ts.
8 Conclusions
The comparative study shows that the machine with a distillation column is better suited to solar energy because it can be powered by simple flat plate collectors, which are more readily available and cheaper.
|
2020-02-20T09:09:13.914Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "6a4e52e0acd5a17a28fda5d93dd2da2fc09d5421",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/10/e3sconf_ede72020_01009.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "de629774dc7be74cecbd417abe1eab4761f52311",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
245371047
|
pes2o/s2orc
|
v3-fos-license
|
Is Co-Management Still Feasible to Advance the Sustainability of Small-Scale African Inland Fisheries? Assessing Stakeholders’ Perspectives in Zambia
Co-management has been promoted as an alternative approach to the governance of small-scale inland fisheries resources and has been implemented in many African countries. It has, however, not proven to be a simple solution to improve their governance; hence, most African inland fisheries are still experiencing unsustainable overexploitation of their resources. As such, there is a need for reassessing the application of governance strategies for co-management that should strive to strengthen the participation of stakeholders, primarily the local fishers, as they are fundamental in the governance of fisheries resources. Therefore, this study set out to explore the prospects of a co-management governance approach at the Lake Itezhi-Tezhi small-scale fishery in Zambia. Focus group discussions with fishers and semi-structured interviews with other stakeholders were used to collect data. This study revealed that the stakeholders perceive co-management as a feasible approach to governance of the Lake Itezhi-Tezhi fishery. However, the feasibility of the co-management arrangement would be dependent mostly on the stakeholders' ability to address most of the 'key conditions' criteria highlighted in the study. This study also identified the need to establish a fisheries policy to provide guidelines for the co-management, coming with decentralisation of power and authority to the local fishers.
Introduction
To advance sustainability, most Sub-Saharan African countries with small-scale inland fisheries have been instituting policy and legislative frameworks that promote some decentralisation of power, authority, and responsibilities from the central government to the local community through co-management reforms [1][2][3]. These governance reforms were instituted to address the many failed top-down, central government-controlled governance systems that had been in place in several African countries [1,[4][5][6]. These failed governance systems contributed to the decline in the inland fisheries resources over the past years in most of these African inland water bodies [7]. Since the 1990s, fisheries co-management has been viewed as an alternative and appropriate governance strategy in several African countries to address such a predicament [6,8,9].
There is no uniform definition of the term 'co-management', but in the context of fisheries, it can be understood as "a partnership arrangement in which the community of local resource users, government, other stakeholders, and external agents share the responsibility and authority for the management of the fishery" [10] (p. 7). Integration of stakeholders at multiple levels in the co-management design and implementation process is therefore considered to be a significant component of the process [11]. An essential aspect of the reforms leading towards co-management has been the assumption that the livelihoods of the local resource users could be improved primarily by improving the status of the fisheries resources through their participation in the governance process [12].
Despite this understanding and assumption, the co-management approaches have not proven to be the silver bullet for rectifying governance problems in the African inland fisheries sector but have shown mixed results depending on the different strategies and approaches taken by different countries. Svendrup-Jensen and Nielsen [6] and Béné et al. [4,13] observed that very few of these failures and successes had to do with the status of the fish stock itself but were related to various types of governance flaws. For instance, in their review of fisheries co-management in Cameroon, Niger, Nigeria, Malawi, and Zambia [13] observed that, in the decentralisation process, the power remained to a greater extent with the central government. This scenario was so because the transfer of power and responsibilities was mainly carried out by local government instead of local fishing communities, thereby defeating the original purpose of the reforms. Furthermore, studies on institutions and co-management on Lake Victoria and lakes in Malawi revealed that the relationships between the local fisheries communities, traditional authorities, and government fisheries officials were generally not equal in terms of authority and power-sharing, application of the legislation, and access to resources [3].
It was expected that the introduction of fisheries co-management would have enhanced cooperation among stakeholders and resulted in equal relationships, with trust being critical to the success of collaboration in the governance process, but that has not been the case in several fisheries [3,8,[14][15][16]. Given the challenges co-management has experienced as an approach to governance in African inland fisheries, it becomes important to question the viability of the strategy and explore its present feasibility.
This study took the case of Zambia, a landlocked southern African country, which applied co-management in the 1990s as an approach of governance for its main fisheries. The results then were mixed but unsuccessful for most fisheries, primarily due to weak institutions, lack of effective stakeholder participation, and absence of the legislative framework to support the co-management approach [17]. The enactment of the Fisheries Act (22 of 2011) legislation that supports a fisheries co-management approach was meant to provide a platform to explore a possible resurrection and facilitation of co-management in Zambian fisheries [18]. It is therefore pertinent to assess the perceptions of the key stakeholders with regard to how this change in fisheries legislation can impact possible fisheries management strategies and to follow up on the policy objectives of an effective and functional fisheries co-management.
Since co-management is about sharing power, the perceptions of stakeholders are an important part of the feasibility of the co-management approach. It is furthermore important to understand the conditions required for establishing and sustaining successful co-management of fisheries resources [19]. The 'key conditions' for successful common-pool resources (CPR) such as fisheries, initially developed by Ostrom [20,21] but later expanded and adapted to fisheries resources [22], were used as a framework in this study. These 'key conditions' were used as design principles for assessing the possible success of the fisheries co-management arrangement at Lake Itezhi-Tezhi based on the stakeholders' perceptions.
This study, using the Lake Itezhi-Tezhi fishery of Zambia as a case, contributes to the ongoing debate on the viability and effectiveness of designing and implementing a comanagement approach to enhance sustainability in small-scale inland fisheries [13,23,24]. The objective of this study was to explore the prospects of a co-management approach, inclusive of multiple stakeholders. The following research questions are addressed:
1. Who are stakeholders and what are their roles in the Lake Itezhi-Tezhi fishery?
2. What are the stakeholders' perceptions of the feasibility of a co-management arrangement for the Lake Itezhi-Tezhi fishery?
3. How would the 'key conditions' for successful co-management be able to address the stakeholders' perceived challenges and benefits?
Zambia's Fisheries and Co-Management
The Zambian fisheries sector has eleven main fisheries which contribute 3.2% to Zambia's Gross Domestic Product (GDP) [25]. However, as of 2015, Zambia's estimated fish consumption demand was 185,000 metric tonnes compared with a local production of about 100,000 metric tonnes, which meant that the deficit was still being imported [26]. From the British colonial era, governance and management of the fisheries has primarily been done by the central government through various strategies-namely, closed fishing seasons, closed breeding areas, the prohibition of particular methods and gear, restrictions on mesh sizes, and limiting the number of fishers in any given fishery through the issuance of fishing licences [27][28][29]. However, these strategies have not been successful in preventing overexploitation of resources in almost all the fisheries [30].
Given this scenario, Zambia has been working on fisheries co-management during the last decades with mixed results [17]. This goes back to fishing sector reforms in the 1990s in response to its underperformance and decline in fisheries resources. The reforms instituted new governance frameworks in fisheries intending to promote more effective, sustainable, and legitimate fisheries governance by changing and sharing responsibilities between the central government and local actors and institutions. The fisheries co-management reforms were initiated at Lakes Mweru, Bangweulu, Kariba, and Tanganyika, but faced several challenges which mostly led to their unsuccessful implementation and sustainability [13,17,31]. Some of the challenges encountered included lack of legislation to support the execution of co-management reforms, poorly equipped extension services to design locally accountable devolved institutions, the prevalence of conflicts of interest among different stakeholders, and reluctance by the central government to relinquish certain responsibilities and pass them on to local resource users [13,31]. As such, with fisheries being common-pool in nature and government-owned by law, the resources in these lakes continued to be overexploited [17,32,33].
Given this predicament, the Zambian government decided to review and enact some legislative frameworks and policies to incorporate local community participation and stakeholders' engagement in the governance of the fisheries resources in the inland small-scale fisheries. Some of the legislative frameworks and policies instituted which covered the fisheries sector to achieve this purpose included the Fisheries Act (22 of 2011) [18], Wildlife Act (14 of 2015) [34], National Policy on the Environment of 2007, National Decentralisation Policy of 2017, National Development Policy of 2012, and the National Agriculture Policy of 2015. The Department of Fisheries (DoF) adopted the National Agriculture Policy of 2015 as an applicable and practical policy guide in its operation. Despite the availability of legislative and some policy provisions for the sector, implementation of a functional co-management governance process and structure has still been a challenge in the Zambian fisheries sector [31,[35][36][37]. For instance, as of 2016, Lake Itezhi-Tezhi had no co-management in place but a dual governance approach in the form of a fishing community-based approach and central government-controlled approach. Both approaches were ineffective, mainly due to a lack of adherence to the legislation for local community participation in fisheries governance and an inadequate policy framework to guide the governance process [23]. Therefore, this study explored further a legitimate and functional co-management governance approach for the Zambian fisheries sector which would incorporate different stakeholders in its operation.
Framework for Analysis of Successful Common-Pool Institutions
The locus of this study is within the scholarship of governance of common-pool resources (CPR), natural resources that are characterized by rivalry in consumption and by being costly to exclude other users [38]. Such a resource environment is characterised by an open-access problem, hence a risk of tragic outcomes of overuse if unattended to by an effective governance strategy [20,39]. The Lake Itezhi-Tezhi fishery of Zambia, a case of African inland CPR, has been under the governance of a centralised government system that has not been effective in preventing overexploitation of the common fisheries resources [23]. The CPR theory focuses on the ability of stakeholders to collaborate in overcoming governance challenges inherent to common-pool resources [38,40].
The criteria of 'key conditions' for successful CPR institutions was employed for this study. These 'key conditions' criteria were initially developed by Ostrom [20,21] as design principles to help in understanding the attributes of effective CPR governance systems and gaining compliance with the rules over generations. The 'key conditions' criteria, based on the work of Ostrom, were further elaborated and expanded by Pomeroy, Katon, and Harkes [22] (Table 1) for assessing the success of co-management arrangements in various inland and coastal fisheries. Different scholars have since used these 'key conditions' as an analytical framework (Table 1) for that purpose [19,[41][42][43]. This study assessed the stakeholders' perceptions of the feasibility of an envisaged co-management at the Lake Itezhi-Tezhi fishery using these 'key conditions' criteria. The 'key conditions' were linked to the stakeholders' perceived challenges and the benefits of co-management. There is an incentive and willingness on the part of local fishers to actively participate in fisheries management.
x. Decentralisation of authority: The government has established a formal policy for decentralisation of administrative and management responsibilities and authority to local group organisation levels.
xi. Coordination between government and community: A coordinating body is established, with representation from the fisher group and government, to monitor the fisheries management arrangements.
Stakeholders Identification and Analysis
Co-management is one approach to solving CPR management problems through partnerships among different stakeholders [11,36]. In the context of natural resource management, Pomeroy and Rivera-Guieb [10] defined stakeholders as "individuals, groups or organisations who are, in one way or another, interested, involved or affected (positively or negatively) by a particular project or action toward resource use". Stakeholders may originate from geographical proximity, historical association, dependence for livelihood, institutional mandate, economic interest, or a variety of other concerns [10,44]. In the co-management of fisheries resources, they may include fishers and their households, government agencies, boat owners, fish traders, community-based groups, local business owners, local traditional authorities, representatives of non-governmental organisations (NGOs), private firms, and others [10]. However, not all stakeholders have the same level of interest in the co-management of fisheries resources. There are primary stakeholders who assume a more active role in the governance and management of the resources. There are also secondary stakeholders who simply play consultative roles and provide other needed resources in the process [45]. In this study, the primary and secondary stakeholders were identified around the Lake Itezhi-Tezhi fishery, and their general roles at the fishery were analysed.
Study Site
The human-made Lake Itezhi-Tezhi lies on the Kafue River in the Southern province of Zambia, at 15°44′19″ S, 26°02′17″ E in the Itezhi-Tezhi district (Figure 1; see also [46]). It was created by a large dam that was built in 1977 [47,48]. A large portion of the lake is in the Kafue National Park (Figure 1) and under the jurisdiction of the Department of National Parks and Wildlife (DNPW), as stipulated under the Wildlife Act (14 of 2015) [34]. Four chiefdoms are within the vicinity of the lake, namely Kaingu, Shimbizi, Musungwa, and Shezongo. The fishers that ply their trade on the lake reside in these chiefdoms. The district houses different government and private offices with an interest in the wellbeing of the Lake Itezhi-Tezhi fishery.
Data Collection and Analysis
Qualitative data were collected in the study area between March and July 2016. A participatory approach through focus group discussions (FGDs) with fishers and semi-structured interviews with other stakeholders in the fishery was used [49]. Since the characteristics of fishers and the set-up of the fishery are heterogeneous in relation to distance and accessibility to the fishing sites from homesteads, a proportionate quota sampling method was used [50]. This type of sampling helped to determine relatively homogenous sample sizes of fishers from 3 strata of the fishery that comprised fishing villages and fishing camps.
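As an illustration of how a proportionate quota allocation works, the sample size per stratum is set in proportion to the stratum population; the stratum populations below are hypothetical, and only the number of strata is taken from the text.

```python
# Sketch: proportionate quota allocation of respondents across the three strata.
# The stratum populations are made-up placeholders, not the study's counts.

def proportionate_quota(populations, total_sample):
    """Allocate total_sample across strata in proportion to stratum population."""
    grand_total = sum(populations.values())
    return {name: round(total_sample * n / grand_total) for name, n in populations.items()}

strata = {"stratum_1": 700, "stratum_2": 600, "stratum_3": 500}  # hypothetical fisher counts
print(proportionate_quota(strata, total_sample=120))
# -> {'stratum_1': 47, 'stratum_2': 40, 'stratum_3': 33}
```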
Focus groups from the 3 strata were purposefully selected based on the availability of fishers in each of the 40 fishing villages and fishing camps ( Table 2). These FGDs, each consisting of about 10 purposely selected adult respondents (≥18 years old), were conducted in all the 3 strata [51]. In stratum three, comprising fishing villages only, 3 of the 4 FGDs had a mixture of men and women, and the remaining one had males only. All the FGDs in the other strata, 1 and 2, were composed of males only, as they were conducted in the fishing camps. These fishing camps were only accessed by male fishers, hence the composition of the FGDs. The principal researcher was the facilitator for all the FGDs for uniformity purposes in data collection. The FGDs comprised semi-structured questions [51]. Table 2. Composition of the strata for the Lake Itezhi-Tezhi fishery. Furthermore, semi-structured interviews were conducted with 17 participants from 11 stakeholders (organisations) at the fishery to gather additional information on the subject and confirm earlier views gathered from FGDs [51] ( Table 3). The stakeholders within the Itezhi-Tezhi District comprised the central government ministry and departments (fisheries, livestock, wildlife, and agriculture), Fishermen and Fish Traders Association (FFTA), local government, the District Commissioner's office, a Non-Governmental Organisation, private firms, traditional leaders, and ex-fishers. Purposive sampling, based on their expertise and experience on the subject under discussion, was used to select the stakeholders for interviews [51]. The overarching themes for interviews and FGDs were stakeholders' current roles at the Lake Itezhi-Tezhi fishery, their perceptions on the feasibility of co-management arrangement, and their expected challenges of and benefits from the co-management governance arrangement.
Furthermore, a demographic profile of 451 fishers, from a population of 1800 fishers that plied their trade at Lake Itezhi-Tezhi fishery, was captured to determine the characteristics of the fishing community under discussion. The quantitative data collected, through a semi-structured questionnaire, included their education levels, marital status, age groups, ethnic groups, residential status, and sources of livelihood.
Qualitative data collected were analysed through the development of themes and sub-themes from the transcribed scripts, coding the participants' responses and linking them to the different themes created, and then analysing the content qualitatively and quantitatively [52]. Quantitative data were analysed using the Statistical Package for Social Sciences software (SPSS), and percentages were produced for each parameter. Reliability and validity were addressed through methodological triangulation-that is, using different sources of data (focus group discussion with fishers and stakeholder interviews) [53] and the quota sampling technique.
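A minimal sketch of how such per-parameter percentages can be tabulated outside SPSS, for example with pandas; the survey records below are invented purely for demonstration.

```python
# Sketch: percentage breakdown of each demographic parameter from fisher survey data.
import pandas as pd

records = pd.DataFrame({
    "education":   ["primary", "secondary", "none", "primary"],
    "residential": ["permanent", "permanent", "non-permanent", "permanent"],
})

for column in records.columns:
    # value_counts(normalize=True) gives proportions; convert to percentages
    pct = records[column].value_counts(normalize=True).mul(100).round(1)
    print(f"{column}:\n{pct}\n")
```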
Demographic Profile of Fishers in the Lake Itezhi-Tezhi Fishing Community
Based on the demographic profile of the 451 fishers captured, the Lake Itezhi-Tezhi fishing community had characteristics as shown in Table 4. Furthermore, of the 71% immigrant fishers from different parts of the country, 78% comprised fishers who had permanently settled in the fishing community, while 22% had not, as they had homes elsewhere. Of all the ethnic groups among fishers in the area, only 8% were the indigenes, and these were the Ila. The rest were immigrant ethnic groups. Almost all the fishers (98%) depended on fishing in the lake as their major source of livelihood.
Stakeholders at the Lake Itezhi-Tezhi Fishery and Their Roles
The primary stakeholders identified included fishers, government agencies (namely, the Department of Fisheries (DoF) and the Department of National Parks and Wildlife (DNPW)), traditional authorities, the Fishermen and Fish Traders Association (FFTA), and a non-governmental organisation (Game Rangers International (GRI)) ( Table 5). One of the roles of the DNPW was to ensure that no person accessed the fisheries resources in the lake without a park entry permit; this was intended to prevent indiscriminate harvesting of the resource. The DoF was also mandated to manage and conserve the fisheries resources of Lake Itezhi-Tezhi under the Fisheries Act (22 of 2011) of the Laws of Zambia. The mandate was mainly carried out through enforcement of the closed fishing season every year between December and February and the prohibition of the use of illegal fishing gear and methods during the fishing season. Therefore, the two government departments were expected to collaborate in the conservation and management of the fisheries resources, especially during the closed fishing season. Table 5. Main stakeholders and their roles at the Lake Itezhi-Tezhi fishery.
Primary stakeholders:
Fishers: Fishing and fish trading.
Department of Fisheries: Management and conservation of fisheries resources; enforcement of fisheries laws and regulations.
Department of National Parks and Wildlife: Management and conservation of wildlife in protected areas (Kafue National Park and Game Management Areas); enforcement of wildlife laws and regulations.
Fishermen and Fish Traders Association: Concerned with the fishing activities and welfare of fishers and fish traders.
Non-Governmental Organisation (GRI): Assisting the wildlife authorities and communities in the Kafue National Park area to better protect this valuable resource and its environment.
Traditional Leaders: Dispute settlement, enforcement of customary laws, arrangement of ceremonies, organisation of communal labour, and promotion of socio-economic development.

Secondary stakeholders:
Itezhi-Tezhi District Council: Delivering services in relation to roads, planning, housing, economic and community development, environment, recreation, and amenity services.
Ministry of Agriculture: Providing technical guidance to farmers in the crop production sector.
Department of Livestock Development: Providing technical guidance to farmers to enhance sustainable development in the livestock sector.
Zanaco: National commercial bank offering financial services for the Itezhi-Tezhi district.
Zesco: Producer and supplier of hydroelectricity at the Lake Itezhi-Tezhi dam; in addition, provider of community services in the district.
District Commissioner's office: District administration of various activities in the Itezhi-Tezhi district.
The fishing villages along the lake were under the traditional governance of four prominent chiefs-namely Kaingu, Musungwa, Shimbizi, and Shezongo. Several headmen (i.e., a man who is a leader of a village in a chiefdom) in these chiefdoms assisted the chiefs in the running of the daily affairs in these villages. Therefore, under customary laws, all the fishers were accountable to the chiefs and headmen in these villages where they resided, as they conducted their fishing activities in the lake to earn a living. The fishing community comprised immigrant and resident fishers (Table 4) who conducted their fishing and fishing-related activities based on access rights they had to the fishing sites on the lake during the fishing season (March to November). Access to fishing sites and withdrawing of fish from those sites was only possible through the park entry permits and fishing licences issued by DNPW and DoF, respectively. No person was permitted to catch fish during the closed fishing season. The fishing community had a Fishermen and Fish Traders Association (FFTA), registered with the Zambian Registrar of Societies. The intention of the association was for every fisher and fish trader to be a registered member to attend to their wellbeing effectively.
Secondary stakeholders (Table 5) included government agencies, such as the Department of Agriculture and the Department of Livestock, the Itezhi-Tezhi District Council (local government), the District Commissioner's office, and two private firms. Their roles at the fishery are also shown in Table 5. The stakeholders' roles generally range from fishing and fish trading to fisheries resource conservation and provision of technical support and services, among others.
Stakeholders' Perceptions of the Feasibility of a Co-Management Arrangement
Fishers' perceptions through all the FGDs were that co-management was a welcome approach to advance sustainable fishing of the fishery's resources and livelihood improvement. They expressed the view that neither the government nor the fishers were able to govern the fishery effectively on their own because of the limited resources and capabilities. They indicated that they were in a strategic position to participate, as they were knowledgeable about each other and the fishery.
In agreement with the fishers, the DoF officials through interviews stated that it had been a great challenge, because of their limited resources, to enhance sustainable fishing of fisheries resources, hence the overexploitation of the fishery's resources over the years. A need for collaboration with other stakeholders through a co-management initiative was expressed as an option to prevent further resource overexploitation. Their focus was to have the full participation of the fishers, being the primary resource users.
The other stakeholders (local government, the Non-Governmental Organisations (NGOs), private firms, and some government ministries and departments), through interviews, also expressed the need for them to be part of the co-management initiative, as fish from the lake was the primary source of income, employment, and food and nutrition for the fishing community and the other inhabitants of the Itezhi-Tezhi district.
The justification by the stakeholders, as regards co-management being an alternative approach in the governance of the Lake Itezhi-Tezhi fishery, was based on the success in deriving certain benefits from the co-management arrangement. There are also challenges that can be addressed through certain 'key conditions' being in place.
Analysis of 'Key Conditions' Criteria That Address Expected Challenges for Successful Co-Management
Through the focus group discussions and interviews, fishers and other key stakeholders (DoF, DNPW, FFTA, NGO, and traditional authorities) highlighted some expected challenges that needed to be addressed during the development and implementation of co-management (Table 6). Some challenges identified by the fishers and the key stakeholders were the need for capacity building among fishers, conflicts or lack of cooperation among fishers, and lack of cooperation between fishers and other stakeholders during the implementation process. They also identified the possible lack of financial input for the co-management implementation to be a likely challenge to address. Additionally, the other key stakeholders perceived the lack of visible benefits accruing to fishers during the co-management undertaking to be a source of discouragement for their full participation.
The co-management challenges identified by the primary stakeholders would be addressed by fulfilling certain 'key conditions' criteria, thus enhancing the possible success of the co-management arrangement (Table 7). For instance, (i) the lack of cooperation among fishers and fishery's stakeholders would be addressed by fulfilling the 'key condition' in defining clear fishing boundaries on the lake between the fishing area for fishers and the Kafue National Park (a no-fishing area unless issued a national park permit); (ii) the lack of an effective voice for the fishers' needs would be addressed by fulfilling the 'key condition' of having a clearly defined membership registration and monitoring system for fishers. Similarly, 'key conditions' (iii), (v), (viii), (x), and (ix) would help to address the other expected co-management challenges (Table 7). Table 6. Expected challenges in co-management: fishers' and other primary stakeholders' perspectives at the Lake Itezhi-Tezhi fishery.
Expected Challenges | Fishers' Priority a | Other Primary Stakeholders' Priority b
Need for a voice for fishers | +++ | +++
Need for awareness to participate in law enforcement | +++ | +
Need for capacity building among fishers | +++ | +++
Need for visible benefits to fishers | 0 | +++
Conflicts and lack of cooperation among fishers (if co-management arrangement not correctly understood) | +++ | +++
Conflicts and lack of cooperation between fishers and other stakeholders | +++ | +++
Conflicts among stakeholders (not with fishers) | 0 | +
Presence of elite capture | 0 | +
Need for financial input for co-management implementation | ++ | +
Mistrust among stakeholders | + | +
Increased immigrants among fishers | 0 | +
Note: a: Based on the extent to which a role was expressed in the strata and the FGDs. +++: Expressed in all the strata (100%) and among most FGDs (>50%). ++: Expressed in all the strata (100%) but in fewer FGDs (<50%) in the strata OR in two strata (>65%) but among most FGDs (>50%) in all the strata. +: Expressed in one or two strata (<65%) and in less of the FGDs (<50%) in a stratum. 0: No comment. b: Based on comments from key stakeholders directly attached to the fishery (DoF, DNPW, traditional authorities, NGO, and FFTA): +++: Comments from at least four stakeholders. ++: Comments from three stakeholders. +: Comments from one or two stakeholders. 0: No comment.
Table 7. 'Key conditions' to help address all the primary stakeholders' perceived challenges for the possible success of co-management at the Lake Itezhi-Tezhi fishery.
Serial No. | 'Key Conditions' | Perceived Challenges by Fishers | Perceived Challenges by Other Primary Stakeholders
i. Clearly defined lake boundaries: Conflicts and lack of cooperation between fishers and other stakeholders because of undefined lake boundaries.
ii. Membership clearly defined: Fishers: Need for an effective FFTA to be a voice for all registered fishers. Other primary stakeholders: Need for a reliable FFTA to be a voice for all registered fishers; the need for proper registration and monitoring of fishers.
iii. Group (fishers') cohesion: Fishers: Conflicts and lack of cooperation amongst fishers themselves if co-management arrangement is not understood correctly. Other primary stakeholders: Conflict and lack of cooperation amongst fishers themselves if co-management arrangement is not understood correctly.
v. Benefits exceed costs: Fishers: Need for financial input to operationalise co-management may lead to high transaction costs. Other primary stakeholders: Likely failure to realise benefits accruing to the fishers because of high transaction costs.
viii. Legal rights to organise co-management: Fishers and other primary stakeholders: Need for awareness for fishers to participate in law enforcement through co-management.
x. Decentralisation of authority: Fishers and other primary stakeholders: Lack of capacity to govern the fishery by themselves; the need for stakeholders' assistance.
ix. Cooperation and leadership at the community level: Fishers: Lack of cooperation amongst fishers themselves if co-management arrangement is not understood correctly; need for building capacity among the majority of fishers resulting from their low educational levels. Other primary stakeholders: Lack of cooperation amongst fishers themselves if co-management arrangement is not understood correctly; need for capacity building among fishers in leadership skills and other aspects.
Note: Serial numbers in this table are aligned with those in Table 1 for consistency's sake.
Analysis of 'Key Conditions' That Highlight Benefits for the Success of the Co-Management
Through FGDs and interviews, all the primary stakeholders envisaged some benefits that would filter down to fishers' households, the other fishery stakeholders, and the fishery at large (Table 8). Some benefits identified by all the primary stakeholders were that co-management could provide a voice for fishers through the FFTA and increased stakeholder support of fisheries governance. Additionally, the fishers and a few primary stakeholders identified effectiveness in law enforcement, increased fish stock, and increased fish catches as other critical benefits. See Table 6 for the meaning of the superscripts, the plus signs and 0 signs.
The expected benefits would be realised by fulfilling the appropriate 'key conditions' for enhancing the success of the co-management (Table 9). For instance, (iv) the FFTA had been in existence at the fishery representing the fishers since 2009 and was therefore related to a 'key condition' of an existing organisation (association) at the fishery-an indication of fishers' ability to mobilise themselves for co-management; (v) increased fish catches, increased fishing income, increased alternative sources of income, and improved livelihoods were related to a 'key condition' of ensuring these benefits exceeded investment and transaction costs during implementing co-management. Similarly, 'key conditions' (vii), (ix), and (xi) would help to realise the other expected benefits (Table 9). Table 9. 'Key conditions' for co-management that would help realise all the primary stakeholders' expected benefits at the Lake Itezhi-Tezhi fishery.
Serial No. | Key Conditions | Fishers' Perspectives | Other Primary Stakeholders' Perspectives
iv. Existing organisations: Fishers and other primary stakeholders: FFTA has been representing all fishers and can still play that role if well organised.
v. Benefits exceed costs: Promote increased fish catches by fishers; promote increased fishers' household income from several sources due to stakeholders' input; promote increased income sources as other stakeholders would ensure fishers were assisted; improved livelihoods of fishers' households expected.
vii. Management rules enforced: Collective enforcement of fisheries laws and regulations by fishers and other responsible stakeholders (DoF and DNPW).
ix. Cooperation and leadership at the community level: Cooperation between fishers and other stakeholders to address governance challenges currently being faced (i.e., fishery governed primarily by the government).
xi. Coordination between government and community: Proposed organisational structure to increase stakeholders' support with their expertise.
Note: Serial numbers in this table are aligned with those in Table 1 for consistency's sake.
Stakeholders' Roles and Perceptions of Fisheries Co-Management
The inclusion of multiple stakeholders, given their different roles in the governance of the Lake Itezhi-Tezhi fishery, seemed to be critical to the feasibility of an effective co-management governance arrangement, if adopted. This is because the primary stakeholders were already involved in the governance and management of the fishery, hence their suitability in contributing greatly to the fisheries co-management approach in terms of technical knowledge, administrative capabilities, and law enforcement skills. The inclusion of secondary stakeholders would be beneficial in providing financial and material support towards the fishers' alternative livelihoods during the co-management governance arrangement. These findings are in line with the study by Kapembwa et al. [46] on the Lake Itezhi-Tezhi fishery, who suggested the development or enactment of the right livelihood-tailored fisheries policies and legislative frameworks that would compel the incorporation of appropriate stakeholders in fishers' livelihoods to promote sustainable fishing. The study by Kapembwa et al. [46] is supported by that of Chama and Mwitwa [35] on the Lake Bangweulu fishery in the northern part of Zambia, who recommended the formulation of a policy on fisheries management that should focus on uplifting the livelihood of local communities while conserving the fisheries resources.
The stakeholders' perceptions largely entail that co-management is applicable for the governance of Lake Itezhi-Tezhi fishery, given that it will be adequately guided by the provisions of the legislation and the engagement of stakeholders. This finding is in line with the arguments put forward by Pomeroy and Williams [54] and d'Armengol et al. [55] that the different structural components in a co-management arrangement should be entrenched through the necessary legislation to make operational and collective decisions in the fishery. The current study also agrees with the argument by Carlsson and Berkes [11] that, in order to foster the success of co-management, it should be defined in formalised arrangements, where multiple stakeholders share governance functions and responsibilities on a given fishery. Wilson et al. [12] added that a centralised government approach has resulted in a significant barrier to integrating decision-making from other stakeholders in fisheries governance and management. As such, the different stakeholders in this study advocated for multiple stakeholder participation in the co-management governance arrangement of the Lake Itezhi-Tezhi fishery.
There is a further need for an appropriate policy to guide such a co-management arrangement. The current National Agriculture Policy (2015-2030) adopted by the Department of Fisheries (DoF) does not provide adequate guidelines, as it does not provide details on how co-management should be organisationally structured and implemented. If such a policy is not in place, there will be a great risk of conflicts and confusion around defining and delineating the roles and mandates of key actors [56]. This study also argues that the lack of a properly defined policy framework on co-management could be a further reason why the government, through DoF, has been struggling to make progress on the issue of co-management implementation as demanded by the Fisheries Act (22 of 2011) [18]. To date, there has been no proper co-management arrangement on any Zambian fishery that is operating based on the requirements of the Fisheries Act, though there have been collaborative or participatory management arrangements between government and fishing communities on some fisheries [57,58]. Some such fisheries are the Lake Mweru-Luapula fishery and the Lake Bangweulu fishery in the northern part of Zambia, whose performance in terms of collaborative or participatory fisheries governance was still unsatisfactory. This was because the local fishers and other key stakeholders were still not engaged in decision-making about the governance of the fishery [35,37]. Furthermore, power and authority still resided with the central government on both fisheries [35,37].
Relating the Stakeholder Perceptions on Perceived Challenges and Benefits to the 'Key Conditions' Criteria for Successful Fisheries Co-Management
Studies on existing co-management arrangements in Asia, the South Pacific, and Africa have shown that small-scale fishers can manage fisheries resources sustainably by fulfilling certain 'key conditions' [22,54]. This study conducted a 'pre-assessment of co-management' based on stakeholders' perceptions aligned to the 'key conditions' in order to ascertain the feasibility of undertaking what they would regard as a successful co-management at the Lake Itezhi-Tezhi fishery. The study indicates the need to fulfil most of the eleven 'key conditions' in undertaking co-management in order to address the challenges and realise the benefits highlighted by the stakeholders. These 'key conditions' should be fulfilled because none of them exists in isolation, but each one supports and links to another to make the process and arrangements for the co-management work [22].
(i) Clearly defined boundaries: Having clearly defined physical boundaries around a fishery is essential in preventing conflicts between fishers and government authorities. Although a large part of Lake Itezhi-Tezhi was well defined in terms of physical boundaries, the boundary between the lake portion inside the Kafue National Park and the portion outside the park was still unclear and was a source of conflict. To avoid further conflict which may jeopardise co-management goals, the Department of National Parks and Wildlife (DNPW) would need to demarcate the contentious boundary.
(ii) Membership clearly defined: Membership of fishers on the fishery was not clearly defined because of the open-access nature of the fishery and the inefficiency of the Fishermen and Fish Traders Association (FFTA) in organising the fishers. Therefore, one option for defining membership would be to strengthen the fishing licensing process for fishers by the DoF, which would act as an inventory and monitoring tool for active fishers. A fisher is not permitted to fish in the lake without a fishing licence issued by DoF yearly, in accordance with the Fisheries Act of 2011(22 of 2011) [18]. Fishers would be required to cooperate and collaborate with the DoF to make this operational. As was the case with the Beach Management Unit (BMU) on Lake Victoria [25], the Fisheries Management Committee (FMC) earmarked for establishment would also be required to have a well-monitored fishers' register for taking stock of the fishers' population at any given time.
(iii) and (ix) Group (fishers) cohesion, cooperation, and leadership at the community level: Cooperation among all stakeholders, motivated by incentives, is crucial for the success of a co-management arrangement [10]. Lack of cooperation among stakeholders was one of the reasons for the failure of the current governance system at Lake Itezhi-Tezhi fishery. Incentives such as increased individual fish catches, high household income levels, low dependence on fishing, and decreasing numbers of immigrant fishers would be expected to enhance cooperation from the fishers. Incentives such as the reduced threat of overexploitation of the fishery's resources, increased compliance with regulations, and increased resources for enforcement and monitoring would also promote cooperation from the government.
To improve leadership, the FMC would be expected to organise capacity-building and knowledge transfer programmes for fishers through the proposed sub-committees in the fishing villages and fishing camps. These programmes (workshops and seminars) would have to cover topics such as responsibility, accountability, and effectiveness. Such programmes were also being recommended for the BMU for Lake Victoria, Kenya, after the experience of elite capture at the expense of the less educated local fishers [59].
(iv) Existing organisations (associations): The FFTA has been in existence since 2009. Because of its weak governance arrangement, it has not been effective in representing the fishers to other stakeholders on socio-economic matters. As such, the FMC would be expected to effectively represent the fishers on such matters. The proposed creation and inclusion of sub-committees in the co-management structure, apart from the FMC, would enhance effective representation and participation of fishers from the grassroots level.
(v) Benefits exceeding cost: The co-management system would be expected to provide benefits, especially at the fishers' household level [10]. Fishers would expect increased fish catches, increased incomes, and improved livelihoods for their input into the co-management operations. This expectation is in line with Pomeroy and Rivera-Guieb's [10] argument that benefits from a co-management arrangement usually promote collective responsibility among fisheries resource users. That would also be an ideal situation in the governance of the Lake Itezhi-Tezhi fishery. Furthermore, the Fisheries Act (22 of 2011) [18] provides for the establishment of the Fisheries Development Fund for the FMC operations, including participation in law enforcement by fishers, and this would also enhance benefit realisation towards the fishers' livelihoods. However, government funding for co-management operations might not be reliable; additional sources, such as a portion of fishing licence fees, may be required for effective implementation [2].
(vi) Participation by those affected: The results of the current study show that all the stakeholders were negatively affected by the current state of governance and fisheries resources and were accordingly willing to participate in the co-management arrangement. Enactment of the Fisheries Act (22 of 2011) [18] was meant to incorporate fishers and other stakeholders in the decision-making processes of co-management. The incorporation of stakeholders is in line with the arguments proffered by Charles [60] and d'Armengol et al. [55] that engagement of a diversity of stakeholders in a co-management initiative of small-scale fisheries usually enhances the governance and management of fisheries resources.
(vii) Management rules enforced: To reduce unsustainable fishing practices, enforcement of or adherence to laws and regulations would be critical in co-management. According to Van Hoof [61], the success of co-management mainly depends on cooperation and collective action among participating stakeholders, particularly the fishers, in law enforcement. The proposed formation of sub-committees in the co-management structure would encourage fishers at the grassroots level to get involved since they know the lawbreakers and how to best deal with them. Furthermore, with the current limitation of human resources by the government to enforce the law, it would even be necessary to legally empower some fishers with the authority to apprehend and prosecute offenders. Such legal empowerment of fishers may require providing them with training and financial incentives, and this undertaking should be specified in the policy framework.
(viii) Legal rights to organise co-management: As far as the Lake Itezhi-Tezhi fishery is concerned, the Fisheries Act (22 of 2011) [18] provides a platform for stakeholders' participation in the governance process of fishery through the FMC. The presence of legislation is in line with d'Armengol et al. [55], who argue that a supporting legal and institutional framework is essential in facilitating the emergence of co-management. The same Fisheries Act of 2011 mandates the FMC to incorporate six fishers (to be selected through the proposed sub-committees) and at least seven other stakeholders of the fishery into its operations. However, most fishers were not aware of their legal right to participate in the prudent management of fishery's resources. As such, the fisheries policy would be required to elaborate on specific guidelines and responsibilities for fishers and the other stakeholders of the fishery in the co-management, including those responsibilities suggested in this study.
(x) Decentralisation of authority: Allison and Badjeck [62] argued that if empowering stakeholders in a co-management arrangement is the goal, then the process should be connected to the decentralisation of power and authority to the local community. However, the Fisheries Act (22 of 2011) [18] does not elaborate on how the government intends to decentralise its power and authority and transfer it to local fishers and other stakeholders. According to Pomeroy and Berkes [63], this lack of elaboration could be because the decentralisation of power was considered an evolving process that was adjusted and matured over time. Therefore, there was no single best form of decentralisation, whether delegation or devolution, to support a particular co-management arrangement [63]. Moreover, the government needs to develop more knowledge, experience, and political will to implement an appropriate form of decentralisation. This scenario is what usually breeds bureaucracy in co-management implementation by governments. However, based on the recommendation of Pomeroy and Berkes [63], the government of Zambia would have to give direction on the power-sharing and decision-making arrangements to participating stakeholders through the fisheries policy, which was not yet in place at the time of the current study.
(xi) Coordination between government and community: The establishment of the FMC, as demanded by the Fisheries Act (22 of 2011) [18], would play a pivotal role in coordinating the governance and management of the fishery, resolving conflicts, mobilising the enforcement of fisheries laws and regulations, and enhancing fishers' livelihoods. Its establishment would be done through engaging and mobilising all the stakeholders of the Lake Itezhi-Tezhi fishery.
Conclusions
Key stakeholders at the Lake Itezhi-Tezhi fishery perceive co-management as a feasible approach to advance. However, co-management implementation and success would largely depend on the stakeholders' ability to align with the highlighted 'key conditions' criteria that would help to address the challenges and realise the benefits identified in this study.
When assessing the perceptions of the key stakeholders of the Lake Itezhi-Tezhi fishery, we find that most, if not all, of the 'key conditions' criteria would need to be met. This is because none of these 'key conditions' exists in isolation; each one supports and links to another to make the process and arrangements for co-management succeed. However, there is also a likelihood that not all the 'key conditions' would be fulfilled, hence the need for the stakeholders to be prepared to revisit the 'key conditions' should this happen.
Furthermore, there is a great need for the establishment of a fisheries policy to give guidelines on some aspects to enhance the success of the implementation of co-management. Developing such policy would need to be inclusive to key stakeholders, including the fishers, but with a clear mandate and anchoring in the recent fisheries legislation. This would define roles and mandates and would secure its legitimacy and the notion of ownership among the relevant actors. This study provides insights into what such policy needs would entail. One aspect is the establishment of the fisher-centred sub-committees in fishing villages and fishing camps to enhance decision-making by fishers on matters of socioeconomics, enforcement, monitoring, and conflict resolution around the fishery. The decentralised power authority and the suggested responsibilities for all the stakeholders of the fishery are additional aspects. The policy should explain the type of decentralisation to employ for the co-management arrangement and how to address the challenges in the implementation process. The type of decentralisation would either be devolution or delegation, depending on the capacities and capabilities of fishers and other stakeholders for each fishery. However, the delegation approach would be more appropriate, for a start, for the co-management at the Lake Itezhi-Tezhi fishery, considering the capacity of the fishers and the government seeking to achieve true co-management that joins forces with relevant stakeholders.
Author Contributions: S.K.: Conceptualisation, writing original draft preparation, methodology, data curation, and formal data analysis and interpretation; J.G.P.: Supervision, visualising and review and editing; A.J.G.: Supervision, visualising, validation, and review and editing. All authors have read and agreed to the published version of the manuscript.
Funding: The research received financial support towards data collection logistics from the Norwegian Agency for Development Cooperation (NORAD) through the Norwegian Programme for Capacity Development in Higher Education and Research for Development (NORHED) project (Funding number ZAM-13/0009). The project was called "improving the governance and economics of protected areas, ecosystem services and poverty eradication through HEI capacity-building and transdisciplinary research". The Article Processing Charge was funded by the University of Iceland.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Not applicable.
Branching Rules For Splint Root Systems
A root system is splint if it admits a decomposition into a union of two root systems. Examples of such root systems arise naturally in studying embeddings of reductive Lie subalgebras into simple Lie algebras. Given a splint root system, one can try to understand its branching rule. In this paper we discuss methods to understand such branching rules, and give precise formulas for specific cases, including the restriction functor from the exceptional Lie algebra $\mathfrak{g}_2$ to $\mathfrak{sl}_3$.
Background
Branching rules in group representation theory are the mathematical counterpart of the phenomenon of "broken symmetry" in physics. Gelfand-Tsetlin patterns [1] yield a very transparent algorithm to describe the spectrum of the restriction of an irreducible representation of the "big" group G(n), which is either the unitary group U(n) or the orthogonal group O(n), to the "small" group G(n − 1). (In [1], Gelfand and Tsetlin published their formulas without proof, possibly because the paper was intended as a contribution to mathematical physics, and their proof may have been of a computational nature.) The second author has formulated and popularized numerous concrete problems and approaches related to Gelfand-Tsetlin patterns. This resulted in the discovery of an analog of these patterns for symplectic groups Sp(n) [3,9] (but not for exceptional groups) and also provided the foundation for the present collaboration.
In the following, we give some context and motivation for our approach. Experimental data shows that for some H ⊆ G, the multiplicity coefficients m Λ,λ in the restriction formula coincide with the weight multiplicities of some irreducible representation of an auxiliary group K in a natural way. Gelfand-Tsetlin patterns are a special case of this phenomenon; here K is the direct product of several copies of SU (2). This could be expanded as follows: Since the Weyl character formula for a representation Π of G describes the restriction of Π to the maximal torus T ⊂ G, the observation above is reminiscent of the chain rule for the derivative of the composite map F = f • g, where we have DF (x) = Df (g(x))Dg(x).
In our case the role of the composite function is played by the restriction functor which satisfies Res G T = Res H T • Res G H . Moreover, the restriction functor is compatible with natural operations on representations (such as sums, tensor products, and symmetric and exterior powers). This suggests a possible direction for future research: to show that any functor with these properties and some "boundary conditions" must satisfy an analog of the chain rule in the form proposed in this paper.
There are several other ways to prove the formula: from a change of variables in the Weyl formula to using the integral formula for the character and geometry of co-adjoint orbits.
A case study
Consider the following two tables of integers. Figure 1a shows the table of dimensions of irreducible representations of sl_3 indexed by highest weight (α, β), and Figure 1b is the corresponding table for the exceptional Lie algebra g_2 indexed by highest weight (k, l). Let A_{α,β} be the integer at the (α, β)-entry of the left table, and let G_{k,l} be the integer at the (k, l)-entry of the right table. Then the explicit formulas for A_{α,β} and G_{k,l} are as follows:
A_{α,β} = (α + 1)(β + 1)(α + β + 2)/2,
G_{k,l} = (k + 1)(k + l + 2)(2k + 3l + 5)(k + 2l + 3)(k + 3l + 4)(l + 1)/120.
Figure 1. A_{α,β} and G_{k,l} for small values.
By embedding sl_3 into g_2 via the long roots, we can ask how an irreducible representation of g_2 decomposes when restricted to sl_3. We can conjecture the decomposition rule, also called the branching rule, by matching up dimensions, i.e. picking a number d from the right table, and finding a consistent array of numbers from the left table that sums to d.
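The two closed formulas are easy to sanity-check by machine. The short sketch below is my own illustration, not part of the original text; it only compares the formulas against a few familiar dimensions (1, 3, 8 for sl_3 and 1, 7, 14 for g_2) and against the value G_{3,2} = 1547 used in the example that follows.

```python
from fractions import Fraction

def A(alpha, beta):
    """Dimension of the sl3 irreducible with highest weight (alpha, beta)."""
    return Fraction((alpha + 1) * (beta + 1) * (alpha + beta + 2), 2)

def G(k, l):
    """Dimension of the g2 irreducible with highest weight (k, l)."""
    return Fraction((k + 1) * (k + l + 2) * (2 * k + 3 * l + 5)
                    * (k + 2 * l + 3) * (k + 3 * l + 4) * (l + 1), 120)

# Familiar low-dimensional representations.
assert A(0, 0) == 1 and A(1, 0) == 3 and A(1, 1) == 8   # trivial, standard, adjoint of sl3
assert G(0, 0) == 1 and G(1, 0) == 7 and G(0, 1) == 14  # trivial, 7-dimensional, adjoint of g2
assert G(3, 2) == 1547                                  # the nondegenerate example treated below
```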
Note that G_{k,0} is the sum of A_{α,β} over the triangle with vertices A_{(0,0)}, A_{(k,0)}, A_{(0,k)}. Similarly, G_{0,l} is the sum of A_{α,β} over the triangle with vertices A_{(l,l)}, A_{(l,0)}, A_{(0,l)}. If we look at the nondegenerate example G_{3,2} = 1547, it is the sum of the pointwise product of two hexagonal arrays on the (α, β)-plane: a hexagon of multiplicities and the corresponding hexagonal portion of the array of numbers A_{α,β}. In other words, G_{3,2} is the weighted sum of A_{α,β} on the hexagon with vertices (5, 2), (5, 0), (2, 0), (0, 2), (0, 5), (2, 5), where the outer layer is counted with multiplicity one, the middle layer is counted with multiplicity two, and the inner triangular layer is counted with multiplicity three. After some experimentation, we can derive the following rule:
G_{k,l} = Σ_{(α,β)} n_{α,β} A_{α,β},
where (α, β) are integral points on and inside of the hexagon and n_{α,β} are positive integers determined as follows.
• If (α, β) lies on the first layer of H (which are points adjacent to the perimeter), then n α,β = 2.
• Iterating, if (α, β) lies on the j th layer of H, and if this j th layer is still a hexagon, then n α,β = j + 1.
• The hexagon H degenerates at the m-th layer, where m = min{k, l}, to a triangle with vertices (l, k), (k, l), (l, l) (or possibly the single point (l, l) if k = l). Set n_{α,β} = m + 1 for all points (α, β) on this triangle. In Section 5.2 we will show that this decomposition of G_{k,l} into A_{α,β} works on the representation theoretic level as well; a numerical check of the dimension count is sketched below.
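The dimension count behind the rule can be checked by machine. In the sketch below the hexagon H and the layer depth are encoded by explicit linear inequalities; this encoding is an assumption on my part (the figure listing the vertices did not survive extraction), but it reproduces the triangle special cases for G_{k,0} and G_{0,l}, the value G_{3,2} = 1547, and the 'curious identity' Σ n_{α,β} = A_{k,l} mentioned a little further on.

```python
from fractions import Fraction

def A(alpha, beta):
    return Fraction((alpha + 1) * (beta + 1) * (alpha + beta + 2), 2)

def G(k, l):
    return Fraction((k + 1) * (k + l + 2) * (2 * k + 3 * l + 5)
                    * (k + 2 * l + 3) * (k + 3 * l + 4) * (l + 1), 120)

def hexagon(k, l):
    """Lattice points of the (possibly degenerate) hexagon H, assumed here to be
    cut out by 0 <= alpha, beta <= k+l and l <= alpha+beta <= k+2l."""
    return [(a, b) for a in range(k + l + 1) for b in range(k + l + 1)
            if l <= a + b <= k + 2 * l]

def n(a, b, k, l):
    """Multiplicity n_{a,b}: one plus the layer depth, capped at min(k, l)."""
    depth = min(a, b, k + l - a, k + l - b, a + b - l, k + 2 * l - a - b)
    return min(depth, min(k, l)) + 1

for k in range(5):
    for l in range(5):
        H = hexagon(k, l)
        assert G(k, l) == sum(n(a, b, k, l) * A(a, b) for a, b in H)  # dimension count
        assert A(k, l) == sum(n(a, b, k, l) for a, b in H)            # number of factors
```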
We now raise a few questions about the branching rule of the restriction functor on simple Lie algebras.
Question 1. Given an embedding of a simple Lie algebra a into g, can we give an explicit branching rule for Res g a like the one for Res g2 sl3 above?
Question 2. What governs the coefficients of the branching rule? For example, the coefficients for Res g2 sl3 are given by the weighted hexagon illustrated above.
Question 3. How many irreducible factors of a are there in Res g a Π λ , where Π λ is an irreducible representation of g? In particular, what is the sum of the coefficients of the branching rule?
In this paper we will work with splint root systems. Then Question 1 is related to Weyl group symmetric functions and the Littlewood-Richardson rule if viewed combinatorially, and Question 2 is related to the weight diagram of a sub-root system corresponding to the splint root system. A solution to Question 3 falls out from a satisfactory answer to Question 1, and is related to the dimension of a particular irreducible representation of an auxiliary Lie algebra. For example, in the above case with g_2 and sl_3 we have the curious identity Σ_{α,β} n_{α,β} = A_{k,l}.
Splint root systems
Let ∆ be a simple root system. We want to study the root systems for which ∆ is splint, i.e. ∆ = ∆ 1 ⊔ ∆ 2 is a disjoint union of two root systems ∆ 1 and ∆ 2 , each of which is embedded into ∆ as an additive group, with ∆ 1 embedded metrically and ∆ 2 embedded in such a way that the lengths of roots are scaled uniformly. The notion of a splint was introduced by David Richter in [8], and he gave a classification of possible splints of root systems (including cases where ∆ 1 may not be embedded metrically, which we do not consider). The table below lists all possible splint root systems, and we label them Types (I) to (V).
We note that the last four types of splint root system have ∆_2 embedded metrically into ∆. Now write a to be the Lie algebra of ∆_1, corresponding to a Lie subalgebra of g. Letting Π_λ be an irreducible representation of g of highest weight λ, we have a decomposition Res^g_a Π_λ = ⊕_ν b_{λ,ν} π_ν, and we are interested in computing the branching coefficients b_{λ,ν}. The branching coefficients for Types (I) and (II) are well known examples of Gelfand-Tsetlin patterns [1], which we now state. For Type (I), every irreducible representation of sl_{r+1} is indexed by a Young tableau Y with at most r rows, and its restriction to sl_r is the direct sum of irreducible representations of sl_r corresponding to those Young tableaux obtained from Y by removing some boxes, each of multiplicity one. Explicitly, if π^r_{λ_1,...,λ_r} is a highest weight representation of sl_{r+1} with λ_i ≥ λ_{i+1}, then Res^{sl_{r+1}}_{sl_r} π^r_{λ_1,...,λ_r} = ⊕_{λ_i ≥ μ_i ≥ λ_{i+1}} π^{r−1}_{μ_1,...,μ_{r−1}}.
We would like similar explicit branching rules for the other three types of splint root system listed above. A computationally intensive heuristic for the branching coefficients b λ,ν exists in [6]. In this heuristic, the computation of b λ,ν relies on the roots ∆ \ ∆ 1 .
This theorem, together with Freudenthal's Multiplicity Formula [2, Section 22] tells us all the branching coefficients in principle. However, this is not easy to compute in practice. Our goal in this paper is to give a framework to understand the branching coefficients directly using the Weyl character formula and give an explicit formula for the Type (IV) branching rule, as well as conjecture formulas for Types (III) and (V) branching rules.
4.1. The Weyl character and dimension formulas. We recall some computational tools from representation theory. These are very classical results (see [4] for an exposition, for instance), and our main purpose is to fix notation.
Let G be a compact simply connected Lie group, and let T be a maximal torus of G. Then the Lie algebra g of G can be written as g = t ⊕ p, where t = Lie(T ) and p = Lie(G/T ) is the subspace of eigenvectors for the roots.
For any irreducible representation L_λ of G with highest weight λ, we can decompose L_λ into its weight decomposition L_λ = ⊕_μ (L_λ)_μ. Define its character to be the finite sum χ(L_λ) = Σ_μ dim (L_λ)_μ e^μ.
Theorem 2 (Weyl Character Formula). Let W be the Weyl group of G, and let l(w) be the length of an element w ∈ W. Then
χ(L_λ) = ( Σ_{w∈W} (−1)^{l(w)} e^{w(λ+ρ)} ) / ( Σ_{w∈W} (−1)^{l(w)} e^{w(ρ)} ),
where ρ is the half-sum of the positive roots R^+.
The formula below allows us to compute the dimension of any irreducible representation of G.
Theorem 3 (Weyl Dimension Formula). Let L_λ be the irreducible representation of G with highest weight λ. Then
dim L_λ = Π_{α ∈ R^+} ⟨λ + ρ, α⟩ / ⟨ρ, α⟩.
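To make Theorem 3 concrete, the sketch below evaluates the Weyl dimension formula for A_2 and G_2 directly from Cartan data and recovers the closed formulas for A_{α,β} and G_{k,l} of Section 2. The Cartan data (Gram matrices and positive roots, with α_1 taken to be the short simple root of G_2) are standard conventions supplied by me, not taken from this paper; with this labeling the output matches G_{k,l} exactly.

```python
from fractions import Fraction
from itertools import product

def weyl_dim(dynkin, pos_roots, gram):
    """Weyl dimension formula: product over positive roots of
    <lambda+rho, alpha_vee> / <rho, alpha_vee>.  Weights are given by Dynkin labels,
    roots in simple-root coordinates, and gram[i][j] = (alpha_i, alpha_j)."""
    m = len(dynkin)
    def pair(labels, root):
        # <mu, alpha_vee> = 2(mu, alpha)/(alpha, alpha); note 2(mu, alpha) = sum labels*c*|alpha_i|^2
        num = sum(labels[i] * root[i] * gram[i][i] for i in range(m))
        den = sum(root[i] * root[j] * gram[i][j] for i in range(m) for j in range(m))
        return Fraction(num, den)
    lam_rho = [x + 1 for x in dynkin]
    rho = [1] * m
    d = Fraction(1)
    for alpha in pos_roots:
        d *= pair(lam_rho, alpha) / pair(rho, alpha)
    return d

A2 = dict(pos_roots=[(1, 0), (0, 1), (1, 1)], gram=[[2, -1], [-1, 2]])
G2 = dict(pos_roots=[(1, 0), (0, 1), (1, 1), (2, 1), (3, 1), (3, 2)],
          gram=[[2, -3], [-3, 6]])   # alpha1 short, alpha2 long

for a, b in product(range(5), repeat=2):
    assert weyl_dim([a, b], **A2) == Fraction((a + 1) * (b + 1) * (a + b + 2), 2)
for k, l in product(range(5), repeat=2):
    assert weyl_dim([k, l], **G2) == Fraction(
        (k + 1) * (k + l + 2) * (2 * k + 3 * l + 5)
        * (k + 2 * l + 3) * (k + 3 * l + 4) * (l + 1), 120)
```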
A strategy.
Let us return to the notations introduced in Section 1. Our strategy to write down explicit branching coefficients b_{λ,ν} is as follows. We check that our branching coefficients are plausible by first verifying that dim Res^g_a Π_λ and dim ⊕_ν b_{λ,ν} π_ν agree. Then we will use the Weyl character formula to make sure that the weight multiplicities check out. A way to do this is as follows. Write the denominator δ_g of the Weyl character formula for Π_λ as δ_g = δ_a · δ′, where δ′ corresponds to the roots of g inside g \ a. Observe that the function δ′ χ(Π_λ) is a symmetric function under the Weyl group W_a of the root system of a. Hence we can write both δ′ χ(Π_λ) and δ′ as a polynomial in the χ(π_µ), and compute branching coefficients by comparing the two expansions. If W_a is the symmetric group, then the latter product can be understood using the Littlewood-Richardson rule; we will see this when we prove the Type (IV) branching rule in Section 5.2. In general one would need to employ a suitable Littlewood-Richardson rule for W_a. In our computations we are led to the following conjecture.
Conjecture 4. The sum of the branching coefficients satisfies Σ_ν b_{λ,ν} = dim ω_λ, where ω_λ is a highest weight representation (depending on λ) for the root system Ξ of an auxiliary simple Lie algebra.
The Gelfand-Tsetlin patterns for Types (I) and (II) imply that Ξ can be taken to be ∆ 2 . For example, if we index the irreducible representations of B 3 and D 3 by three positive integers after choosing the standard fundamental weights, then the Gelfand-Tsetlin pattern for Res B3 D3 can be written as In this case, the sum of coefficients equals dim ω a,b,c , where ω a,b,c is the highest weight representation of A ⊕3 1 corresponding to the integers a, b, c. The branching rule for Type (IV) proven in the next section will imply that Ξ = ∆ 2 as well, and we conjecture this is also the case for Type (III). However, the discussion in Section 6 tells us this is not the case for Type (V).
Branching rule for Type (IV)
In this section we work out Res G2 A2 explicitly. We first give an explicit formula for the functor Res B2 D2 without using Gelfand-Tsetlin patterns in order to illustrate the ideas used in understanding Res G2 A2 .
5.1. Branching rule for Type (II) with r = 2. As D 2 embeds into B 2 via the long roots, it is natural to ask how their irreducible representations are related. The starting point is to compute their Weyl character formulas. To do this we label roots L 1 , L 2 , and all the positive roots, as below.
This corollary is an immediate consequence of Proposition 5, so we just need to prove the above theorem. For this case we can simply use a telescoping sum argument to compute the Weyl character formula on both sides, but in general we want approaches that will allow us to deduce the decomposition from our computations. To this end we give two approaches to the proof: the first approach is bare-hands computation, and the second approach is an explicit computation using the strategy described in the previous section. ).
Then we observe that ).
We can view the above expressions as sums over the polynomial By comparing this expression with the Weyl character formula for D 2 we get what we want.
Proof 2. Write χ α,β = χ(π α,β ). By factoring A k,l,B2 as above, we observe that where the second term exists only when β > 0, and the last term exists only when α > 0. On the (α, β)-plane, this amounts to taking the weighted sum of the following four vertices, with sign as below.
We can now easily check that as desired.
5.2. Branching rule for Type (IV). Again A 2 embeds into G 2 via the long roots. We need to compute the Weyl character formula for G 2 and A 2 . To do this we label roots L 1 , L 2 , L 3 , and all the positive roots, as below.
We chose the labeling above because the action of the Weyl group W_{A_2} ≅ S_3 on L_1, L_2, L_3 is simply by permuting the indices. The fundamental weights ω_1, ω_2 and Ω_1, Ω_2 for G_2 and A_2 are ω_1 = L_1 + L_2, ω_2 = 2L_1 + L_2 and Ω_1 = L_1 + L_2, Ω_2 = L_1, and the half-sums of the positive roots are ρ_{G_2} = ω_1 + ω_2 and ρ_{A_2} = Ω_1 + Ω_2. Define Π_{k,l} to be the highest weight representation of G_2 with weight kω_1 + lω_2 = (k + 2l)L_1 + (k + l)L_2, and define π_{α,β} to be the highest weight representation of A_2 with weight αΩ_1 + βΩ_2 = (α + β)L_1 + αL_2.
By writing x_i = e^{L_i}, we have explicit formulas for the characters of Π_{k,l} and π_{α,β} as rational functions in x_1, x_2, x_3, and x_1, x_2, x_3 satisfy the relation x_1 x_2 x_3 = 1.
Theorem 8 asserts that Res^{G_2}_{A_2} Π_{k,l} = ⊕ n_{α,β} π_{α,β}, with (α, β) running over the integral points of a hexagon H and with the multiplicities n_{α,β} determined as in Section 2:
• If (α, β) lies on the first layer of H (which are points adjacent to the perimeter), then n α,β = 2.
• Iterating, if (α, β) lies on the j th layer of H, and if this j th layer is still a hexagon, then n α,β = j +1.
• The hexagon H degenerates at the m th = min(k, l) th layer to a triangle with vertices (l, k), (k, l), (l, l) (or possibly the single point (l, l) if k = l). Set n α,β = m + 1 for all points (α, β) on this triangle.
In the remainder of this section we prove Theorem 8. Again, it is enough to show that the characters of the two sides are equal. In the numerator A k,l,G2 of the character formula for G 2 , by separating the terms with positive exponents from those with negative exponents, we may check that One might recognize now in χ(Π k,l ) something resembling the well-known determinant-based definition of Schur functions on three variables: .
Clearly the first summand in the numerator combines with the first three factors in the denominator to make s k+2l+1,k+l+1,0 (x 1 , x 2 , x 3 ). To simplify the other summand, we use the fact that x 1 x 2 x 3 = 1 to write ) and so we can now recognize the full equation as .
Lemma 10. In the case that x 1 x 2 x 3 = 1, whenever α ≥ β ≥ γ are positive integers, we have Now, we expand the right-hand side of the equation by Pieri's Rule [10,Chapter 7.15], which states that, for any partition µ, where the sum is taken over all partitions λ whose Young diagram is formed from the Young diagram of µ by adding k boxes into k distinct rows. We thus see (after using the previous lemma to simplify) that Rather than deal with the casework of sometimes excluding terms, in using both sums we will still use all three summands. However we still interpret s a,b,c = 0 whenever we do not have a ≥ b ≥ c ≥ 0.
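The displayed statement of Lemma 10 did not survive extraction; presumably it is the standard reduction s_{α,β,γ} = (x_1x_2x_3)^γ s_{α−γ,β−γ,0}, which equals s_{α−γ,β−γ,0} once x_1x_2x_3 = 1. The sketch below is my own illustration, using the determinant-based (bialternant) definition of Schur functions referred to above, and checks this identity symbolically.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def schur3(a, b, c):
    """Bialternant formula for s_{a,b,c}(x1, x2, x3): det(x_i^{lam_j + 3 - j}) divided by
    the Vandermonde determinant det(x_i^{3 - j})."""
    num = sp.Matrix([[x**(a + 2), x**(b + 1), x**c] for x in (x1, x2, x3)]).det()
    den = sp.Matrix([[x**2, x, 1] for x in (x1, x2, x3)]).det()
    return sp.cancel(num / den)

# On the locus x1*x2*x3 = 1, s_{a,b,c} reduces to s_{a-c,b-c,0}.
for a, b, c in [(3, 2, 1), (4, 4, 2), (5, 3, 3)]:
    lhs = schur3(a, b, c).subs(x3, 1 / (x1 * x2))
    rhs = schur3(a - c, b - c, 0).subs(x3, 1 / (x1 * x2))
    assert sp.simplify(lhs - rhs) == 0
```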
We have reduced our goal to showing that For the remainder of the proof we will assume k ≥ l for ease of notation; the case for k < l is exactly analogous. It will be helpful to extend the notion of H α,β to collections of points (α, β). Let L i (k, l) denote the i th layer of the hexagon corresponding to k and l as described in the statement of Theorem 8. Note then that L i (k, l) for 0 ≤ i < l is the boundary of the hexagon joining the six vertices and L l (k, l) consists of the boundary and interior of the triangle with vertices (k, l), (l, l), (l, k) (or possibly the single point (l, l) if k = l). We then define To better visualize all of this, let us define f (α, β) = (α + β, α). Then H α,β consists of six Schur function summands whose corresponding points in the (α, β)-plane are orthogonally or diagonally adjacent to f (α, β), with signs given by the following figure. α + β − 1 α + β α + β + 1 In particular, f (L i (k, l)) is the boundary of the hexagon with vertices (k + 2l − i, k + l − i), (k + l, k + l − i), (l + i, l), (l + i, i), (k + l, i), (k + 2l − i, l), and f (L l (k, l)) is the boundary and interior of the triangle with vertices (k + l, k), (2l, l), (k + l, l) (or just the single point (2l, l) if k = l). Note that, for any 0 < i ≤ l, the summands of H Li(k,l) correspond to the vertices of f (L i−1 (k, l)).
Lemma 11. For k ≥ l, Proof. Let k = l + j. We proceed by induction on j. The cases j = 0 and j = 1 are easy to verify for any l. For the inductive step, suppose that for a fixed j = J we have established the lemma. We now wish to show that (α,β)∈L l (l+J+1,l) H α,β = H L l (l+J+1,l) .
Adding these terms to the sum in the inductive hypothesis and cancelling gives We now need to prove an analogous lemma for the hexagonal layers.
(In the case i = l − 1 we define H L l+1 (k,l) = 0.) Proof. The sum on the left-hand side can be viewed as taking the hexagon in our figure and sliding it along the hexagon defined by f (L i (k, l)). For any given i, note that any summand produced by the sum on the left hand side must be on or adjacent to f (L i (k, l)), so the only Schur functions that can occur correspond to points on one of f (L i+1 (k, l)), f (L i (k, l)), or f (L i−1 (k, l)) (where when i = 0, we define L −1 (k, l) to be the hexagon surrounding L 0 (k, l) in the appropriate way). All of these are hexagons except when i = l − 1, in which case f (L l (k, l)) is a triangle.
The proof strategy is to split the points (a, b) in these three layers into cases, and determine how often and with what sign each point occurs in some H α,β in our sum.
Case 1 : The point (a, b) lies on f (L i+1 (k, l)). Subcase 1.1: i = l − 1. In this case, if k = l, then f (L l (l, l)) is the single point (2l, l). Then there are six H α,β terms of our sum that produce s 2l,l,0 , one for every point on the hexagon L l−1 (l, l), and the summand s 2l,l,0 will appear in three of these terms with a positive sign and in three with a negative sign, and thus will get a total coefficient of 0.
If k > l then clearly we need only look at points on the boundary of the triangle f (L l (k, l)). Any point (a, b) on the boundary that is not a vertex will be adjacent to three points of f (L l−1 (k, l)), and its corresponding summand s a,b,0 will appear in two of the corresponding H α,β of the sum with opposite signs, and so will vanish. This is easy to check for each side of the triangle f (L l (k, l)) separately; for example, if we take a point (l + j, l) = f (l, j) for l < j < k, this will be adjacent to the three points (l + j − 1, l − 1), (l + j, l − 1), (l + j + 1, l − 1) of f (L l−1 (k, l)), which correspond to f (l − 1, j), f (l − 1, j + 1), f (l − 1, j + 2). Of the corresponding terms in our sum, H l−1,j will contain +s l+j,l,0 , H l−1,j+1 will contain −s l+j,l,0 , and H l−1,j+2 does not contain s l+j,l,0 at all.
If (a, b) is a vertex of f (L l (k, l)), then it is one of f (k, l), f (l, l), or f (l, k). Again, we can check directly that if we take all adjacent points (c, d) ∈ f (L l−1 (k, l)) and sum the appearances of s a,b,0 in H f −1 (c,d) we will get 0. Thus in the case i = l − 1, our sum produces no Schur functions corresponding to points on f (L l (k, l)), justifying our defining H L l+1 (k,l) = 0. Subcase 1.2: i < l − 1. Then f (L i+1 (k, l)) is a hexagon, and we consider any (a, b) lying on it. If (a, b) is not a vertex of f (L i+1 (k, l)), it is adjacent to three points of f (L i (k, l)), and it is easy to check that its summand will occur in two of the corresponding H α,β with opposite signs and thus will vanish; this can be verified for each side of the hexagon separately like in Subcase 1.1.
If (a, b) is a vertex of f (L i+1 (k, l)), we end up with a different result than in Subcase 1.1. These vertices are It can be checked for each of these points (a, b) separately that if we look at the set of adjacent (c, d) in f (L i (k, l)), and sum over H f −1 (c,d) the coefficient of s a,b,0 , and then sum those results together, we get Thus, considering only terms corresponding to points of f (L i+1 (k, l)), we get that (α,β)∈Li(k,l) H α,β yields H Li+2(k,l) .
Case 2 : The point (a, b) lies on f (L i (k, l)). If (a, b) is not a vertex of f (L i (k, l)), then it will be adjacent to two other points of f (L i (k, l)), and will appear in the two corresponding H α,β of the sum with opposite signs and vanish. This can be checked on each side of the hexagon separately as in the previous case.
If (a, b) is a vertex of f (L i (k, l)), it is one of six possibilities. We can again check the contribution of the H α,β to each point separately to determine that, when restricted to f (L i (k, l)), we get −2H Li+1(k,l)) out of (α,β)∈Li(k,l) H α,β . Case 3 : The point (a, b) lies on f (L i−1 (k, l)). If (a, b) is not a vertex of f (L i−1 (k, l)), then as with the other cases, it will appear in two H α,β of the sum with opposite signs and vanish. This can be checked on each side of the hexagon separately.
If (a, b) is a vertex of f (L i−1 (k, l)), then as with the other cases, we can look at the contribution of the H α,β of the adjacent points of f (L i (k, l)) for each vertex separately, sum them, and get a contribution of H Li(k,l) .
Taking all three cases together, we have examined every point (a, b) such that s a,b,0 occurs as a summand (with either sign) of some H α,β in our sum, and determined the coefficient of s a,b,0 arising from summing over all applicable H α,β . Thus, adding the results together from the three cases, we may conclude that (α,β)∈Li(k,l) as desired. Now we can put this all together to finish proving Theorem 8. Let H(k, l) = l i=0 L i (k, l) be the hexagon of Theorem 8 associated with (k, l). Furthermore, using our new terminology n α,β = i+1 when (α, β) ∈ L i (k, l). We wish to evaluate (α,β)∈H(k,l) Simplifying this using our lemmas gives us (l + 1)H L l (k,l) + l−1 i=0 (i + 1) H Li+2(k,l) − 2H Li+1(k,l) + H Li(k,l) .
Conjectures for other cases
In the final section of the paper we provide some conjectural descriptions for the Type (III) and Type (V) branching rules. These conjectures are combinatorial in nature, in agreement with Conjecture 4, and are derived by comparing dimensions between irreducible representations of ∆ and ∆_1. For (A_1)^3, one has dim π_{a,b,c} = (a + 1)(b + 1)(c + 1). Let Π denote a representation of C_3, and π one of (A_1)^{⊕3}. First and most simply we have the formula If we let T_k be the set of triples (r, s, t) of integers 0 ≤ r, s, t ≤ k with r + s + t = 2k, then We can in fact write down a branching rule for Res^{C_3}_{(A_1)^{⊕3}} Π_{a,b,0} that is very reminiscent of the one from G_2 to A_2. Assume a, b > 0, so that all three coordinates of (a + b, b, 0) are distinct. Then the formula for Res^{C_3}_{(A_1)^{⊕3}} Π_{a,b,0} will be a direct sum of irreducible representations of (A_1)^{⊕3} with highest weights counted as follows:
• Form a hexagon with the six vertices that are permutations of the coordinates of (a + b, b, 0), connected in such a way that the resulting hexagon is convex. (This hexagon will be parallel to the plane x + y + z = 0.) We count with multiplicity 1 every representation on the perimeter of this hexagon, with multiplicity 2 every point that is one coordinate orthogonally from the perimeter, and so on until we reach an inner layer whose points form the perimeter of a triangle; then all points remaining on this perimeter and inside the triangle receive the same multiplicity.
• Now, form a smaller hexagon whose vertices are (a + b − 1, b − 1, 0) and all its other coordinate permutations. (It is possible that some of these values will be the same, so that there are only three permutations; in that case, we just construct the triangle with those vertices.) We now double our multiplicity count by giving those entries on the perimeter multiplicity 2, those on the first layer inside multiplicity 4, and so on until reaching an interior triangle all of whose points get the same multiplicity.
• Iterating, we keep forming smaller hexagons (or possibly triangles) with vertices (a + b − k, b − k, 0) and its coordinate permutations as long as b − k ≥ 0. The perimeter of the level k hexagon gets multiplicity k + 1, the first layer inside the perimeter gets multiplicity 2(k + 1), and so on, until a layer is reached that is a triangle, after which all points receive the same multiplicity.
Furthermore, we claim that Res^{C_3}_{(A_1)^{⊕3}} Π_{0,0,c} = ⊕_{k=0}^{c} ⊕_{(τ_1,τ_2,τ_3) ∈ T_k} π_{c−τ_1, c−τ_2, c−τ_3}.
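This last claim can at least be checked on dimensions. The sketch below is my own check, using the Weyl dimension formula for C_3 = sp(6) in standard orthogonal coordinates; the choice of convention (fundamental weights e_1, e_1+e_2, e_1+e_2+e_3, so that Π_{0,0,c} has orthogonal weight (c, c, c)) is an assumption, but with that reading the claimed decomposition matches, e.g. 14 = 8 + 6 for c = 1.

```python
from fractions import Fraction
from itertools import combinations, product

def dim_C3(a, b, c):
    """Weyl dimension formula for sp(6): highest weight a*w1 + b*w2 + c*w3 written in
    orthogonal coordinates as (a+b+c, b+c, c); positive roots e_i - e_j, e_i + e_j, 2e_i."""
    def prod_roots(v):
        p = Fraction(1)
        for i, j in combinations(range(3), 2):
            p *= (v[i] - v[j]) * (v[i] + v[j])
        for i in range(3):
            p *= v[i]                 # root 2e_i; the overall factors of 2 cancel in the ratio
        return p
    rho = (3, 2, 1)
    lam = (a + b + c, b + c, c)
    shifted = tuple(x + y for x, y in zip(lam, rho))
    return prod_roots(shifted) / prod_roots(rho)

def dim_A1_cubed(a, b, c):
    return (a + 1) * (b + 1) * (c + 1)

# Dimension check of the claimed rule for Res^{C3}_{(A1)^3} Pi_{0,0,c}.
for c in range(1, 5):
    rhs = 0
    for k in range(c + 1):
        for r, s, t in product(range(k + 1), repeat=3):
            if r + s + t == 2 * k:    # the set T_k
                rhs += dim_A1_cubed(c - r, c - s, c - t)
    assert dim_C3(0, 0, c) == rhs
```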
At present we do not see a general formula for Res^{C_3}_{(A_1)^{⊕3}} Π_{a,b,c}, nor can we write down an explicit formula for the branching rules of Type (III) with general r ≥ 3.
Type (V). One can try to understand Res^{F_4}_{D_4} by understanding Res^{F_4}_{B_4}, for there are the embeddings D_4 into B_4, and B_4 into F_4. The highest weight representations of the root systems F_4, B_4, D_4 are parameterized by four integers in terms of fundamental weights, and we denote their highest weight representations by Π, ρ, π respectively. In the book [7] the restrictions from F_4 to B_4 are listed for small highest weights. This suggests the following two branching rules for Res^{F_4}_{B_4} as we vary the weights corresponding to the first and last vertices of the F_4 Dynkin diagram:
Res^{F_4}_{B_4} Π_{k,0,0,0} = ⊕_{s+t=k} ρ_{0,s,0,t},   Res^{F_4}_{B_4} Π_{0,0,0,k} = ⊕_{0 ≤ s+t ≤ k} ρ_{s,0,0,t}.
Example 13. We list here four examples of Res B4 D4 . The last example is not included as a special case of the two formulas presented above.
A skein approach to Bennequin type inequalities
We give a simple unified proof for several disparate bounds on Thurston-Bennequin number for Legendrian knots and self-linking number for transverse knots in R^3, and provide a template for possible future bounds. As an application, we give sufficient conditions for some of these bounds to be sharp.
Introduction
1.1. Main results. The problem of finding upper bounds for the Thurston-Bennequin and self-linking numbers of knots has garnered a fair bit of recent attention. Although this originated as a problem in contact geometry, it now lies more in the realm of knot theory and braid theory, with upper bounds given by the Seifert and slice genus, the Kauffman and HOMFLY-PT polynomials, and, more recently, Khovanov homology, Khovanov-Rozansky homology, and knot Floer homology. These bounds are sometimes collectively called "Bennequin type inequalities".
The original proofs of many Bennequin type inequalities were remarkably diverse and sometimes somewhat ad hoc. In this paper, we provide a template which simultaneously proves a number of the significant Bennequin type inequalities, thus providing a unified approach to many of these bounds. The proof of the template itself is a fairly easy induction argument based on the remarkable work of Rutherford [23]. Our template gives a means to prove future, yet to be discovered Bennequin type bounds, for example using Khovanov and Rozansky's proposed categorification of the Kauffman polynomial [12]. It also sheds some light on why particular bounds may be sharp for a certain Legendrian knot while others are not; see Section 3.
We briefly recall the relevant definitions; see also [5] or any number of other references. A Legendrian knot or link in R 3 with the standard contact structure is a smooth, oriented knot or link along which y = dz/dx everywhere. It is convenient to represent Legendrians by their front projections to the xz plane, which are (a collection of) oriented closed curves with no vertical tangencies, whose only singularities are transverse double points and semicubical cusps. Given a front, one obtains a link diagram by smoothing out cusps and resolving each double point to a crossing where the strand with larger slope lies below the strand with smaller slope; this link diagram, which we will call the smoothed front, is the topological type of the original Legendrian link. Any topological link has a Legendrian representative. (The author is supported by NSF grant DMS-0706777.)
Given a front F, let c(F) denote half of the number of cusps of F, and let c↓(F) denote the number of cusps of F which are oriented downwards. Also let w(F) denote the writhe of the corresponding smoothed front, the number of crossings counted with the standard signs (+1 for a positive crossing, −1 for a negative crossing). Define the Thurston-Bennequin number and self-linking number of F, respectively, by
tb(F) = w(F) − c(F),   sl(F) = w(F) − c↓(F).
The Thurston-Bennequin and self-linking numbers are invariants of Legendrian links and comprise the "classical invariants" for Legendrians in R^3. (In the literature, the role of the self-linking number for Legendrian links is usually played by the rotation number r(F) = tb(F) − sl(F), and the self-linking number is reserved for transverse links; our self-linking number for a Legendrian is the usual self-linking number of its positive transverse pushoff.) Within a topological type, tb and sl for Legendrian representatives are always unbounded below; one can decrease tb and sl by adding zigzags to a front. However, in the early 1980's, Bennequin [2] proved the remarkable fact that tb and sl are bounded above for any link type L, by the negative of the minimal Euler characteristic of a Seifert surface bounding L; for knots K, tb(F), sl(F) ≤ 2g(K) − 1 for any front F representing K, where g(K) is the Seifert genus.
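The displayed definitions were lost in the extraction of this text; the formulas tb(F) = w(F) − c(F) and sl(F) = w(F) − c↓(F) used above are the standard ones, and they are consistent with the 180° rotation remark below. As a small illustration (mine, not from the paper), the sketch computes both invariants from the cusp and crossing data of a front; the two test fronts and their classical invariants are standard examples.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Front:
    """Combinatorial data read off a Legendrian front diagram."""
    crossing_signs: List[int]   # +1 or -1 for each crossing of the smoothed front
    down_cusps: int             # cusps oriented downwards
    up_cusps: int               # cusps oriented upwards

    def writhe(self) -> int:
        return sum(self.crossing_signs)

    def tb(self) -> int:
        # tb(F) = w(F) - c(F), where c(F) is half the total number of cusps
        return self.writhe() - (self.down_cusps + self.up_cusps) // 2

    def sl(self) -> int:
        # sl(F) = w(F) - c_down(F): self-linking number of the positive transverse pushoff
        return self.writhe() - self.down_cusps

# Standard Legendrian unknot: no crossings, one down cusp, one up cusp.
unknot = Front(crossing_signs=[], down_cusps=1, up_cusps=1)
assert unknot.tb() == -1 and unknot.sl() == -1

# The usual maximal right-handed trefoil front: three positive crossings, four cusps.
trefoil = Front(crossing_signs=[1, 1, 1], down_cusps=2, up_cusps=2)
assert trefoil.tb() == 1 and trefoil.sl() == 1
```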
• Most of these inequalities have obvious generalizations to links; in particular, the last four translate unchanged to bounds for links.
• tb(L) ≤ sl(L) always: rotating any front F by 180° produces another front F′ of the same topological type with 2 tb(F) = 2 tb(F′) = sl(F) + sl(F′). It follows that any upper bound for sl is also an upper bound for tb. However, the Kauffman and Khovanov tb bounds above do not extend to bounds on sl.
• Some Bennequin type inequalities imply others. The τ and s bounds (and presumably the HOMFLY-PT homology bound) imply slice-Bennequin, which in turn implies Bennequin; the HOMFLY-PT homology bound also implies the HOMFLY-PT (polynomial) bound. On the other hand, many pairs of the inequalities are incommensurable, notably the Kauffman and Khovanov bounds [14] (see also [6]).
• The above inequalities (in particular, the Kauffman, Khovanov, and HOMFLY-PT bounds) suffice to calculate tb and sl for all but a handful of knots with 11 or fewer crossings [15].
• The Kauffman and HOMFLY-PT bounds have been given many proofs in the literature, involving state models, plane curves, the Jaeger formula, etc. See [3,6,8,25,26] for additional proofs of the Kauffman bound, and [4,8,25] for HOMFLY-PT.
• This is not a complete list of known Bennequin type inequalities. In particular, Wu [27,29] has derived bounds on sl from Khovanov-Rozansky sl_n homology.
Our main results give general criteria for a link invariant to provide an upper bound for tb or sl. These criteria are satisfied for many of the known bounds.
Corollary 1. The HOMFLY-PT and HOMFLY-PT homology bounds on sl hold for oriented links.
Theorem 2. Suppose that ĩ(D) is a Z-valued invariant of unoriented link diagrams such that: (a) ĩ(D) is invariant under Reidemeister moves II and III;
Then
Corollary 2. The Kauffman and Khovanov bounds on tb hold for oriented links.
We speculate that when τ and s are extended to oriented links, i = 1 − 2τ and i = 1 − s should each satisfy the conditions of Theorem 1. This would demonstrate that the τ and s bounds can be proven using our template as well.
It also seems likely that Khovanov-Rozansky's proposed categorification of the Kauffman polynomial [12] would satisfy a skein relation which would allow one to apply Theorem 2. This would give an upper bound on tb from Kauffman homology strengthening the Kauffman (polynomial) bound, just as the HOMFLY-PT homology bound on sl strengthens the HOMFLY-PT (polynomial) bound.
As mentioned earlier, one benefit of our results is a better understanding of when particular Bennequin type inequalities are sharp. Rutherford [23] has demonstrated a necessary and sufficient condition for the Kauffman bound to be sharp, in terms of certain decompositions of fronts known as rulings. It would be nice to have similar characterizations for sharpness for, say, the HOMFLY-PT bound and the Khovanov bound. It seems that such characterizations should now be within reach, but for now we present some sufficient conditions for these bounds to be sharp; see Section 3.
Here is a rundown of the rest of the paper. In Section 1.2, we summarize the notation used in our presentation of the Bennequin type inequalities. The proofs of Theorems 1 and 2 and their rather easy consequences, Corollaries 1 and 2, are given in Section 2. In Section 3, we use the inductive proofs of our main results to construct trees which decompose any Legendrian link into simpler links, and use these trees to study sharpness of some Bennequin type inequalities.
1.2. Notation. Here we collect the definitions used in the Bennequin type inequalities mentioned above, including the particular conventions we use.
(These conventions coincide with those from KnotTheory [1] wherever applicable.)
• max-deg_a is the maximum degree in a; max-supp_a is the maximum a degree in which the homology is supported; min-supp_{q−t} is the minimum value for q − t over all bidegrees (q, t) in which the homology is supported.
• g_4(K) is the slice genus of K.
• τ(K) is the concordance invariant from knot Floer homology [16], normalized so that τ = 1 for the right-handed trefoil.
• s(K) is Rasmussen's concordance invariant from Khovanov homology [19].
• P(K)(a, z) is the HOMFLY-PT polynomial of K, normalized so that P = 1 for the unknot and
• P_{a,q,t}(K) is the (reduced) Khovanov-Rozansky HOMFLY-PT homology [11] categorifying P(K), normalized so that [20]. (For links, we need to use the "totally reduced" version H of HOMFLY-PT homology in place of reduced HOMFLY-PT homology H; see [20].)
• F(K)(a, z) is the Kauffman polynomial of K, normalized so that for a diagram D representing K, F(K)(a, z) = a^{−w(D)} F̃(D)(a, z), where F̃ is the framed Kauffman polynomial, the regular-isotopy invariant of unoriented link diagrams defined by F̃( ) = 1, F̃( where V_K is the Jones polynomial.
Proofs
Theorems 1 and 2 have essentially the same proof. We establish Theorem 2 first, and then prove Theorem 1 and Corollaries 1 and 2.
Proof of Theorem 2. View ĩ as a map on fronts by applying ĩ to the smoothed version of any front. Let L be an oriented link and let F be a Legendrian front of type L. We wish to show that tb(F) ≤ −i(L), or equivalently, that c(F) − ĩ(F) ≥ 0. The idea, which is essentially due to Rutherford [23], is to use skein moves to replace F by simpler fronts in such a way that c − ĩ does not decrease, and then to induct. The four fronts , , , and are topologically , , , and , respectively, and are thus related by the four-term unoriented skein relation.
Suppose that F contains a Legendrian tangle . Successively replace in F , to obtain three new fronts, and suppose that c −ĩ ≥ 0 for each of those fronts. Since c is the same for all four fronts, assumption (d) in the statement of Theorem 2 then implies that c −ĩ ≥ 0 for F as well. Similarly, if F contains , and the three fronts obtained from F by all satisfy c −ĩ ≥ 0, then c −ĩ ≥ 0 for F as well.
To prove that c(F ) −ĩ(F ) ≥ 0 for all F , we induct on the singularity number s(F ) of F , defined as the total number of singularities (crossings and cusps) of F . If s(F ) = 2, then F is the standard Legendrian unknot and c(F ) = 1,ĩ(F ) ≤ 1. Now consider a general front F . Suppose that F contains a tangle of the form or . If we replace this tangle successively by three tangles according to (1) or (2), then the last two of the resulting fronts have lower s than F and are covered by the induction assumption, while the first has the same s as F .
The strategy is now to apply "skein crossing changes" ↔ to obtain a simpler front. To do this, we perform a second induction, this time on a modified singularity number s ′ (F ), defined as the number of singularities to the right of the rightmost left cusp of F . Since Legendrian isotopy, Legendrian destabilization (the removal of a zigzag), and the removal of trivial unknots do not increase c −ĩ, the Theorem follows by induction from the following result.
Lemma 1 (Rutherford [23], Lemma 3.3). Via skein crossing changes, Legendrian isotopy, Legendrian destabilization, and the removal of trivial unknots, we can turn F into a front which either has lower s, or the same s and lower s ′ .
For completeness, we sketch here the proof of the lemma. Consider the portion of F immediately to the right of the rightmost left cusp of F . By using Legendrian Reidemeister moves II and III if necessary, we can assume that this portion of F has one of the forms shown on the left hand side of For (oriented) Legendrian fronts F , we wish to show that sl(F ) ≤ −i(F ), or equivalently, that c ↓ (F ) −ĩ(F ) ≥ 0. As before, we use skein moves to induct on the singularity number of F . Note that c ↓ (F ) −ĩ(F ) is invariant under Legendrian isotopy and nonincreasing under Legendrian destabilization. If F contains a tangle (respectively ), then we can successively replace it by (respectively ) and whichever of and inherits an orientation from F , to obtain two new fronts. If c ↓ −ĩ ≥ 0 for these two fronts, then c ↓ −ĩ ≥ 0 for F as well. We now apply Lemma 1 as before.
Proof of Corollary 1. Define i(L) = max-deg a P (L)(a, z) + 1. By the skein relation and normalization for the HOMFLY-PT polynomial, the conditions in Theorem 1 hold, and Theorem 1 then gives the HOMFLY-PT bound. For the HOMFLY-PT homology bound, define i(L) = max-supp a P a,q,t (L)+ 1. The skein relation for the HOMFLY-PT polynomial (see [20]) categorifies to an exact triangle relating P( ), P( ), and P( ), and this exact triangle yields condition (b) in the statement of Theorem 1. The normalization condition (a) is easy to check, and thus Theorem 1 yields the HOMFLY-PT homology bound. (D) if D is of link type L; indeed, the complex for Khovanov homology is first defined this way. The exact triangle in Khovanov homology (in this context, see, e.g., [14]) is given by If we now defineĩ(D) = − min-supp * HKh * (D), then the exact triangle implies thatĩ In particular, condition (d) in Theorem 2 holds, and the Khovanov bound follows.
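With i(L) = max-deg_a P(L)(a, z) + 1 as in the proof of Corollary 1 above, the HOMFLY-PT bound reads sl(L) ≤ −max-deg_a P(L)(a, z) − 1. The sketch below is my own illustration of the mechanical degree extraction with sympy; the only input used is the normalization P = 1 for the unknot from Section 1.2, for which the resulting bound sl ≤ −1 is attained by the standard transverse unknot.

```python
import sympy as sp

a, z = sp.symbols('a z')

def max_deg(P, var):
    """Maximum exponent of var in a Laurent polynomial P."""
    num, den = sp.fraction(sp.cancel(sp.together(P)))
    return sp.degree(num, var) - sp.degree(den, var)

def sl_bound_from_homfly(P):
    """Upper bound on sl from Theorem 1 with i(L) = max-deg_a P(L)(a, z) + 1."""
    return -max_deg(P, a) - 1

# Unknot: P = 1 gives sl <= -1, which the standard transverse unknot attains.
assert sl_bound_from_homfly(sp.Integer(1)) == -1
```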
Skein trees
The nature of the proofs of Theorems 1 and 2 allows us to give necessary conditions and sufficient conditions for various Bennequin type inequalities to be sharp, and to compare these inequalities with each other. We can decompose any Legendrian knot via a skein tree, much as one would do to calculate knot polynomials using skein relations, and the skein tree can often tell us whether one bound or another is sharp.
Starting with an unoriented Legendrian front, construct the unoriented skein tree by following Rutherford's strategy described in the proof of Theorem 2:
• at each step, do a tangle replacement to obtain three new fronts;
• simplify the results by Legendrian isotopy, and repeat;
• stop when the result is either a stabilization (isotopic to a front with a zigzag) or a standard Legendrian unlink (the disjoint union of tb = −1 Legendrian unknots).
An example is given in Figure 2.
One can easily use an unoriented skein tree for F to calculate the coefficient of a^{−tb(F)−1} in the Kauffman polynomial for F with any orientation (this coefficient is nonzero if and only if the Kauffman bound is sharp): for terminal leaves in the tree, the coefficient is 1 at a standard Legendrian unlink and 0 at a stabilized front; use the skein relation for the framed Kauffman polynomial to backwards-construct the coefficient along the tree. This is simply a restatement of a result of Rutherford [23].
We now see a heuristic reason for why the Kauffman bound sometimes fails even for Legendrian knots which maximize tb. Consider for example the Legendrian (3, −4) torus knot F shown in Figure 2 (In each case, the two replacements are the 0-and 1-resolution, respectively.) Since this only counts 0-and 1-resolutions and not crossing changes, this is the same tree used to calculate the Jones polynomial for a knot. At each stage in the Jones skein tree, a front F is connected to its 0resolution F 0 and its 1-resolution F 1 . The skein exact sequence for Khovanov homology implies the following. We now have the following sufficient condition for the Khovanov bound to be sharp. We remark that Theorem 3 implies, but is generally much stronger than, the sufficient condition for Khovanov sharpness given in [14]. Recall from [14] that the 0-resolution of a front, obtained by replacing each double point by its 0-resolution, is admissible if each component of the 0-resolution is a standard Legendrian unknot, and no component contains both pieces of any resolved double point. Proof. Suppose that F is a front with admissible 0-resolution; apply the procedure from Theorem 3. It is easy to check that in the Jones skein tree for F , F and all of its iterated 0-resolutions are circled.
We do not know how the condition of Theorem 3 compares to the sufficient condition for Khovanov sharpness given by Wu [28].
One can similarly construct an oriented skein tree for any oriented front, at each step replacing a front by the two fronts related to it by the oriented skein relation. Rather than stopping at all stabilized fronts, as for the unoriented skein tree, we stop only at fronts which are positive stabilizations (i.e., isotopic to a front with a downward zigzag ). If we encounter a negative stabilization (i.e., a front isotopic to one with an upward zigzag ), we eliminate the upward zigzag (this does not change sl) and proceed. All terminal leaves of the oriented skein tree are either positive stabilizations or standard Legendrian unlinks. See Figure 4.
Figure 4. Oriented skein tree for a Legendrian trefoil. Solid arrows represent the skein relation; the dashed arrow is a negative destabilization.
As for unoriented skein trees and the Kauffman polynomial, we can use an oriented skein tree for a front F to calculate the coefficient of a^{−sl(F)−1} in the HOMFLY-PT polynomial for F (this coefficient is nonzero if and only if the HOMFLY-PT bound is sharp). Standard Legendrian unlinks have coefficient 1; positive stabilizations have coefficient 0; the coefficient is preserved under negative stabilizations; we can backwards construct the coefficient along the tree using the skein relation, e.g.,
This again is very similar to a result of Rutherford [23]. It is sometimes easy to tell by inspection of an oriented skein tree whether the HOMFLY-PT bound is sharp. The resulting sufficient condition is admittedly rather weak, but it does, for instance, imply that the Legendrian trefoil in Figure 4 maximizes sl.
Skein trees can also show the limitations of Theorems 1 and 2. Consider the unoriented skein tree for the m(10_132) knot F shown in Figure 5. Each of the terminal leaves of the tree is a stabilization. Now suppose that ĩ(D) is any invariant satisfying the conditions of Theorem 2. Then by Theorem 2, c − ĩ ≥ 0 for all fronts; since each of the fronts on the right hand side of Figure 5 is a stabilization, c − ĩ ≥ 1 for these. Condition (d) from Theorem 2 implies that c(F) − ĩ(F) ≥ 1 as well, and so ĩ(m(10_132)) ≤ −tb(F) − 1 = 0. It follows that the best possible bound given by Theorem 2 is tb(m(10_132)) ≤ 0. We however know from [15] that tb(m(10_132)) = −1; hence the template of Theorem 2 can never give a sharp bound for tb(m(10_132)). A similar argument shows that the template of Theorem 1 can never give a sharp bound for sl(m(10_132)) (= −1 by [15]).
Figure 5. Unoriented skein tree for a Legendrian m(10_132) knot.
Predictive model containing PI-RADS v2 score for postoperative seminal vesicle invasion among prostate cancer patients
Background Seminal vesicle invasion (SVI) is considered to be one of the most adverse prognostic findings in prostate cancer, affecting the biochemical progression-free survival and disease-specific survival. Multiparametric magnetic resonance imaging (mpMRI) has shown excellent specificity in diagnosis of SVI, but with poor sensitivity. The aim of this study is to create a model that includes the Prostate Imaging Reporting and Data System version 2 (PI-RADS v2) score to predict postoperative SVI in patients without SVI on preoperative mpMRI. Methods A total of 262 prostate cancer patients without SVI on preoperative mpMRI who underwent radical prostatectomy (RP) at our institution from January 2012 to July 2019 were enrolled retrospectively. The prostate-specific antigen levels in all patients were <10 ng/mL. Univariate and multivariate logistic regression analyses were used to assess factors associated with SVI, including the PI-RADS v2 score. A regression coefficient-based model was built for predicting SVI. The receiver operating characteristic curve was used to assess the performance of the model. Results SVI was reported on the RP specimens in 30 patients (11.5%). The univariate and multivariate analyses revealed that biopsy Gleason grade group (GGG) and the PI-RADS v2 score were significant independent predictors of SVI (all P<0.05). The area under the curve of the model was 0.746 (P<0.001). The PI-RADS v2 score <4 and Gleason grade <8 yielded only a 1.8% incidence of SVI with a high negative predictive value of 98.2% (95% CI, 93.0–99.6%). Conclusions The PI-RADS v2 score <4 in prostate cancer patients with prostate-specific antigen level <10 ng/mL is associated with a very low risk of SVI. A model based on biopsy Gleason grade and PI-RADS v2 score may help to predict SVI and serve as a tool for the urologists to make surgical plans.
Introduction
The incidence of seminal vesicle invasion (SVI) has decreased over time because of the application of serum prostate-specific antigen (PSA) for prostate cancer screening, and the prevalence of SVI in contemporary surgical series was 3-17.6% (1)(2)(3)(4)(5). SVI is a well-established indicator of adverse prognosis. Compared to organ-confined disease or extraprostatic extension (EPE), SVI is associated with a worse outcome and higher rates of recurrence and mortality (6). In patients with suspected SVI, multimodal therapy should be considered, such as radical prostatectomy (RP) combined with radiotherapy. Because of this poor prognosis, treatment and surgical decisions must be made carefully for patients with SVI. Therefore, accurate presurgical diagnosis of SVI is critical for urologists to make a treatment decision.
Several models that rely on clinical information and biopsy data have been developed for the prediction of prostate cancer staging, such as the Partin tables (7) and the nomogram developed by Gallina et al. (8). These models show high sensitivity (90%) but low specificity (30-60%) for the detection of SVI (9). In contrast, the prediction accuracy of multiparametric magnetic resonance imaging (mpMRI) for SVI reveals high specificity (>90%) but poor and heterogeneous sensitivity (30-70%) overall (10). Recent models that add mpMRI information to predictive models yield better accuracy, such as the combination of mpMRI and the Partin tables (11) and the novel nomogram developed by Martini et al. (12). However, these models rely on the SVI findings of mpMRI, and none of the models currently integrate the Prostate Imaging Reporting and Data System version 2 (PI-RADS v2) score to predict postoperative SVI in patients without SVI on preoperative mpMRI.
Therefore, we aimed to construct a new model based on the PI-RADS v2 for the prediction of SVI in patients without SVI on preoperative mpMRI. This predictive model may also serve as a tool for the urologists to make surgical plans. We present the following article in accordance with the TRIPOD reporting checklist (available at http://dx.doi.org/10.21037/tau-20-989).
Patient population
We retrospectively reviewed a total of 787 patients who underwent RP for prostate cancer at our institution between January 2012 and July 2019. The inclusion criteria were as follows: patients with a preoperative serum PSA level <10 ng/mL who underwent preoperative mpMRI before prostate biopsies. Patients who underwent prior hormonal therapy or radiotherapy as well as patients with incomplete data were excluded from the cohort. Seven patients were excluded because of suspicious SVI on preoperative mpMRI. Finally, a total of 262 patients were enrolled in the study.
Clinicopathological characteristics
All patient data, including age, body mass index (BMI), preoperative PSA level, free/total PSA ratio (f/t PSA), prostate volume (PV) measured by trans rectal ultrasound (TRUS), the percentage of positive systematic biopsies, maximum cancer percentage per core, clinical stage, biopsy Gleason score (GS), and pathological characteristics of specimens following RP, were collected. PSA density (PSAD) was calculated by dividing total PSA by the PV. Cancer of the prostate risk assessment (CAPRA) score was calculated according to the UCSF-CAPRA scoring system.
Biopsy procedure and histopathology
All patients underwent TRUS-guided systematic biopsies. All biopsy specimens were evaluated by two dedicated genitourinary pathologists to determine the cancer diagnosis and the GS in positive cases. The patients were classified into the following five groups using the new GS grading system: grade group 1, GS 6; grade group 2, GS 3+4=7; grade group 3, GS 4+3=7; grade group 4, GS 8; and grade group 5, GS 9 and 10.
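The five-tier grouping just described is a deterministic mapping from the biopsy Gleason score. As a small illustration (our own sketch, not the authors' code), it can be written as a simple lookup:

```python
def gleason_grade_group(primary: int, secondary: int) -> int:
    """Map a biopsy Gleason score (primary + secondary pattern) to the
    five-tier grade group (GGG) used in this study."""
    total = primary + secondary
    if total <= 6:
        return 1                        # GS <= 6 -> GGG 1
    if total == 7:
        return 2 if primary == 3 else 3  # GS 3+4=7 -> GGG 2, GS 4+3=7 -> GGG 3
    if total == 8:
        return 4                        # GS 8 -> GGG 4
    return 5                            # GS 9-10 -> GGG 5

print(gleason_grade_group(3, 4))  # -> 2
print(gleason_grade_group(4, 5))  # -> 5
```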
MpMRI
MRI was performed using a 3.0T Discovery MR750 HDx (GE Healthcare, Waukesha, WI, USA) without the use of an endorectal coil. The imaging protocol included axial T1-weighted images of the pelvis, axial T2-weighted fast spin-echo images centered on the prostate, and dynamic contrast-enhanced images. In addition, axial diffusion-weighted imaging was performed with b-values of 0, 800, and 1,400 s/mm².
MpMRI interpretation.
MRI images were retrospectively interpreted by one of two experienced radiologists, each with >5 years' experience in reading prostate MRI. Any disagreement in interpretation was resolved by the senior adjudicating radiologist. According to the PI-RADS v2 assessment categories, clinically significant cancer is highly unlikely or unlikely to be present in lesions of PI-RADS 1 or 2 (13,14). Lesions with PI-RADS >2 were defined as MRI-visible lesions, which could be considered for targeted biopsy. For patients with PI-RADS 3, it may be beneficial to perform follow-up rather than immediate biopsy, as most lesions can be reclassified after a manageable period of time (15). PI-RADS 4 or 5 indicates that clinically significant cancer is highly or very highly likely, and biopsy should be considered (14). Targeted MR biopsy should be considered for PI-RADS assessment category 4 or 5 lesions but not for PI-RADS 1 or 2 (16). This three-point "Likert" grouping maps directly onto clinical decisions. Therefore, the probability of cancer was evaluated and scored on a three-point scale based on the PI-RADS v2 score, where group "negative" (PI-RADS 1-2) = low probability, group "suspicious" (PI-RADS 3) = equivocal, and group "positive" (PI-RADS 4-5) = high or very high probability.
Statistical analysis
The endpoint of the study was the identification of the presence of SVI on the RP specimens. Univariate analysis was performed to investigate the associations between clinical and pathological risk factors and the presence of SVI in patients with negative SVI on mpMRI. The factors evaluated for the prediction of SVI were age, BMI, preoperative PSA level, f/t PSA, PV, PSAD, the percentage of positive systematic biopsies, maximum cancer percentage per core, CAPRA score, Gleason grade group (GGG), and PI-RADS v2 score. Continuous variables were compared using Student's t-test and the Mann-Whitney U test, and categorical variables were compared using the Chi-square test or Fisher's exact test, as appropriate. Univariate and multivariate binary logistic regression analyses were conducted to identify independent predictors of SVI.
We constructed the logistic regression model for prediction of the diagnosis of SVI, by utilizing selected variables based on the results of multivariate logistic regression analysis. Discrimination was measured using the area under the curve (AUC) derived from the receiver operating characteristic (ROC) curves. With the optimal cutoff value according to Youden's index, the performance of the model was assessed through analysis of sensitivity, specificity, positive predictive value, and negative predictive value.
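The analysis pipeline described above (logistic model, ROC curve, Youden-index cutoff) can be reproduced in outline as follows. The original analysis was run in SPSS (see below), so this Python sketch with illustrative column names (`svi`, `ggg`, `pirads`) is only an assumed equivalent, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_curve, roc_auc_score

# df is assumed to hold one row per patient with columns:
#   svi (0/1 outcome), ggg (grade group 1-5), pirads (numeric coding of the PI-RADS grouping)
def fit_and_evaluate(df: pd.DataFrame):
    model = smf.logit("svi ~ ggg + pirads", data=df).fit(disp=False)
    prob = model.predict(df)

    auc = roc_auc_score(df["svi"], prob)
    fpr, tpr, thresholds = roc_curve(df["svi"], prob)

    # Youden's index J = sensitivity + specificity - 1 = tpr - fpr
    best = np.argmax(tpr - fpr)
    cutoff = thresholds[best]
    sensitivity, specificity = tpr[best], 1 - fpr[best]
    return model, auc, cutoff, sensitivity, specificity
```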
Statistical analyses were performed using SPSS (version 24.0; SPSS Inc., Chicago, IL, USA). All analyses were two-sided, with statistical significance set at P<0.05. All procedures performed in this study were in accordance with the Declaration of Helsinki (as revised in 2013). Because the study had no influence on therapeutic strategy and did not require patient follow-up, the Institutional Review Board stated that ethical approval was not needed, and because of the retrospective nature of the research, the requirement for informed consent was waived.
Baseline clinicopathological characteristics of the study cohort
The baseline clinical and pathological characteristics of the total 262 patients are summarized in Table 1. The median age was 66 years, the interquartile range (IQR) was 62-71 years, the median preoperative PSA level was 7.51 ng/mL (IQR, 6.07-8.63 ng/mL), and 30 patients (11.5%) presented with SVI on RP specimens.
The CAPRA score, calculated from five variables including the GS and the maximum cancer percentage per core, was not significantly associated with SVI (P=0.179) in the multivariate analysis containing three variables: CAPRA score, the percentage of positive systematic biopsies, and PI-RADS (Table S1).
In the multivariate analysis excluding the CAPRA score, GGG and PI-RADS remained significantly associated with SVI, suggesting that these variables were independent risk predictors for the diagnosis of SVI.
With the application of the coefficients of the logistic function, a predictive model for postoperative SVI was constructed using selected risk factors, including GGG and PI-RADS scores, as follows: logit(P) = ln[P/(1−P)] = −4.661 + 0.428×GGG + 1.212×PI-RADS.
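Using the published coefficients, a predicted probability of SVI for an individual patient can be obtained by inverting the logit. Note that the numeric coding of the PI-RADS term (here assumed to be the three-point grouping 1 = negative, 2 = suspicious, 3 = positive) is our assumption and should be checked against the original model specification.

```python
import math

def predicted_svi_probability(ggg: int, pirads_group: int) -> float:
    """Invert the reported logistic model:
    logit(P) = -4.661 + 0.428*GGG + 1.212*PI-RADS.
    The pirads_group coding (1/2/3) is an assumption, not stated explicitly here."""
    logit_p = -4.661 + 0.428 * ggg + 1.212 * pirads_group
    return 1.0 / (1.0 + math.exp(-logit_p))

# Example: a GGG 2 tumour in the "suspicious" PI-RADS group
print(round(predicted_svi_probability(ggg=2, pirads_group=2), 3))
```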
ROC analysis was performed to assess the accuracy of this model, as shown in Figure 1.
Discussion
In our cohort, SVI was reported on the RP specimens in 30 patients (11.5%). Using the selected risk factors, namely biopsy GGG and the PI-RADS v2 score, a predictive model for postoperative SVI was constructed, which revealed a high negative predictive value of 96.5% (95% CI, 91.5-98.7%) at the optimal cutoff value.
SVI is defined as pathological invasion of the muscular wall of the extraprostatic seminal vesicle. The reported incidence of SVI is heterogeneous, ranging from 3% to 17.6% in recent studies (1)(2)(3)(4)(5). In our cohort, the rate of SVI was 11.5%, which is consistent with the rates reported in recent studies in which the median PSA level of the patients (5.9-7.8 ng/mL) was similar to ours (7.51 ng/mL) (3)(4)(5).
SVI is considered to be one of the most adverse prognostic findings in prostate cancer, affecting biochemical progression-free survival and disease-specific survival. The 5- and 10-year biochemical failure rates for SVI were reported as 60% and 72%, respectively, significantly higher than those for pT2 patients (17). In a study of 31,415 patients, Kristiansen et al. showed that, compared to EPE alone, patients with SVI had a higher risk of clinical progression and death after RP (6). According to the National Comprehensive Cancer Network (NCCN) guideline version 2.2020, patients with SVI are defined as a very high-risk group, for whom treatment must be chosen carefully. Asymptomatic patients with <5 years' life expectancy are only considered for androgen deprivation therapy (ADT), external beam radiotherapy or observation; only selected patients are recommended to undergo surgery and more effective treatments. In addition, the presence of SVI influences the choice of nerve-sparing RP, which can improve urinary continence and erectile function and is recommended in men with localized prostate cancer. According to the EAU-ESTRO-SIOG Guidelines on Prostate Cancer, a high risk of extracapsular disease is a contraindication for nerve-sparing RP (18). However, EPE and SVI do not always occur together (19,20). In our cohort, EPE and SVI coexisted in 46.7% (14/30) of patients, and the other 16 patients with SVI but without EPE would likewise have been unsuitable for nerve-sparing RP. It is of great importance to exclude these unsuitable patients before performing nerve-sparing RP. Conversely, in the case of a high likelihood of SVI, additional therapies should be discussed, for example neoadjuvant chemotherapy (21). In terms of both prognosis and therapy strategy, preoperative prediction of SVI is of great significance.
MpMRI examination plays an important role in clinical staging and is recommended for all patients with suspected prostate cancer at our institution. According to a contemporary critical meta-analysis that included a total of 5,677 patients from 34 studies, MRI shows high specificity (0.96, 95% CI, 0.95-0.97) but poor and heterogeneous sensitivity (0.58, 95% CI, 0.47-0.68) (10), which means that in the worst case more than half of the patients with SVI are missed. This is possibly because radiologists have focused on high-specificity reading to minimize unnecessary exclusion of men from curative treatment.
Several models have been constructed in an effort to predict SVI in prostate cancer. Lughezzani et al. compared three different models that did not consider mpMRI results, including the Partin tables, the European Society of Urological Oncology (ESUO) criteria, and the Gallina nomogram, for the prediction of SVI (9). This study showed that all three tools had high sensitivity (92.7%, 89%, and 90.8%, respectively) but poor specificity (33.1%, 56.3%, and 47.6%, respectively), which confirms that these models overestimate the probability of SVI in application. MpMRI has been considered an important part of such models because it provides detailed anatomical information. Grivas et al. found that the AUC value of the Partin-with-MRI predictive model was higher than those of the Partin tables and MRI alone (0.929, 0.837, and 0.884, respectively), which showed that adding mpMRI findings to the Partin tables could improve their predictive accuracy (11). Martini et al. developed an mpMRI and clinical data-based nomogram for the prediction of SVI, and this nomogram showed a relatively high AUC (0.847) (12). However, these results are limited by the relatively small number of SVI cases with respect to the variables included in the model. Moreover, these models containing mpMRI findings rely heavily on the negative or positive SVI results of mpMRI. To our knowledge, none of the current models integrate the PI-RADS v2 score to predict postoperative SVI in SVI (-) patients on preoperative mpMRI.
The PI-RADS scoring system has shown great value in predicting biopsy outcome (22), biochemical recurrence (23), infiltration of the neurovascular bundles (24), and EPE (25). Our previous studies have shown similar results, confirming the value of PI-RADS in predicting prostate cancer and clinically significant prostate cancer in men undergoing repeat prostate biopsy and in predicting pelvic lymph node metastasis at RP (26,27). After adjusting for the other confounding factors, the GS and PI-RADS score were found to be independent risk factors for the prediction of SVI. On that basis, a model including PI-RADS to predict postoperative SVI in patients without SVI on mpMRI was constructed in this study. The AUC value of this model was 0.746 (P<0.001), which is better than that of mpMRI alone. This model also showed high sensitivity and negative predictive value, which indicates that it can precisely distinguish truly SVI (-) patients and help urologists formulate treatment plans. Koh et al. found that among 275 patients with PSA ≤10 ng/mL and no cancer at the base in systematic biopsy results, none had SVI (28). By creating probability plot graphs, Zlotta et al. pointed out that patients with PSA <10 ng/mL have a risk of SVI <5% when the GS on biopsy is <7 or when the percentage of biopsies affected by cancer is <50% (29). In our series, we also observed that among the 111 patients with a PI-RADS v2 score <4 and Gleason grade <8, extremely few patients (n=2) had SVI, which yielded only a 1.8% incidence of SVI with a high negative predictive value of 98.2% (95% CI, 93.0-99.6%).
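The negative-predictive-value figure quoted above can be checked directly from the counts (109 of 111 low-risk patients without SVI). The confidence-interval method used in the paper is not stated, so the Wilson interval in this sketch is only one plausible choice and gives a similar, not identical, range.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# 111 patients with PI-RADS v2 < 4 and Gleason grade < 8; 2 had SVI on the RP specimen
n, with_svi = 111, 2
npv = (n - with_svi) / n                 # ~0.982
low, high = wilson_ci(n - with_svi, n)   # roughly 0.93-0.99
print(f"NPV = {npv:.1%}, 95% CI ~ ({low:.1%}, {high:.1%})")
```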
PSA, a widely used serum marker for prostate cancer screening, plays an important role in other predictive models for SVI. In our study, the PSA level in SVI (+) patients was higher than that in SVI (-) patients, but without a significant difference (median: 7.75 vs. 7.44 ng/mL, P=0.114). This is most probably because we selected patients with PSA <10 ng/mL, which narrowed the difference between the two groups. PSAD and f/t PSA did not show significant differences either.
The CAPRA score, which is calculated from the PSA level, the GS, the clinical T stage, the percentage of positive prostate biopsies and the patient age at diagnosis, has great capacity for predicting prostate cancer outcomes, such as biochemical recurrence (30), metastatic potential (31), and prostate cancer-specific death (32). In our study, the CAPRA score was not an independent risk predictor for the diagnosis of SVI (P=0.070) after adjusting for other confounding factors, and further research is required on the value of the CAPRA score for predicting SVI.
Emerging technologies and prostate cancer biomarkers are playing a vital role in prostate cancer diagnosis and treatment (33,34). Micro-ultrasound is a novel high-resolution imaging technology for diagnosing prostate cancer that is complementary to mpMRI (35)(36)(37)(38). Compared to mpMRI, micro-ultrasound, which allows real-time visualization of suspicious lesions and targeting of biopsies, has shown the same or superior sensitivity (37). For detecting clinically significant prostate cancer, micro-ultrasound biopsy has shown a higher detection rate with fewer biopsied cores (39) and can find prostate cancers missed by all other techniques (34). However, additional studies are needed to explore the application of micro-ultrasound for prostate cancer staging and the prediction of SVI.
Several limitations should be considered. First, these results were obtained from a retrospective cohort, thus carrying a certain risk of selection bias. Second, the predictive model was constructed on the basis of a small sample size from a single institution, and the accuracy of this model requires internal and external validation in a multicenter study to assess its wider applicability. Third, due to improvements in data quality over time and different MRI protocols performed in patients, there might be some variation in the MRI results. Finally, the MRI previously performed at our institution could not meet the PI-RADS v2.1 technical standard (40), which may limit transferability to clinical usage, and the role of the PI-RADS v2.1 score for predicting SVI needs further investigation. However, PI-RADS v2 has shown satisfactory inter-reader variability in a previous study (41).
Conclusions
In the present study, the PI-RADS assessment proved to be a valuable predictor of SVI in SVI (-) patients on mpMRI. A PI-RADS v2 score <4 in prostate cancer patients with PSA <10 ng/mL is associated with a very low risk of SVI. A model based on biopsy Gleason grade and PI-RADS v2 score may help to predict SVI and serve as a tool for urologists to make surgical plans.
Spectral Properties of Abdominal Tissues on Dual-energy Computed Tomography and the Effects of Contrast Agent
Background/Aim: Multiparametric dual energy computed tomography (CT) imaging allows for multidimensional tissue characterization beyond the measurement of Hounsfield units. The purpose of this study was to evaluate multiple imaging parameters for different abdominal organs in dual energy CT (DECT), analyze the effects of the contrast agent on these different parameters, and provide normal values for characterization of parenchymatous organs. Patients and Methods: This retrospective analysis included a total of 484 standardized DECT scans of the abdomen. Hounsfield Units (HU), rho (electron density relative to water), Zeff (effective atomic number) and FF (fat fraction) were evaluated for liver, spleen, kidney, muscle, and fat tissue. Independent generalized estimation equation models were fitted. Results: In DECT imaging there is only little difference in mean HUmixed for parenchymatous abdominal organs. Analysis including Zeff, rho and FF allows for better discrimination, although a large overlap remains for liver, spleen and muscle. Including multidimensional analysis and the effects of contrast medium further enhances tissue characterization. Small differences remain for liver and spleen. Conclusion: Organ characterization using multiparametric dual energy CT analysis is possible. An increased number of parameters obtained from DECT improves organ characterization. To our knowledge this is the first attempt to provide normal values for characterization of parenchymatous organs.
The differentiation of tissues and organs on single-energy computed tomography (CT) is performed according to differences in X-ray attenuation, which in the case of CT is measured as Hounsfield units (HU) relative to the attenuation of pure water (1,2). The CT number is influenced by the effective atomic number (Z eff ) and the electron density of each material (ρ e ). As the contribution of the effective atomic number depends on the spectral properties, a change in X-ray energy accordingly leads to a change in the resulting Hounsfield units for a tissue (3)(4)(5)(6).
The basic idea of dual-energy CT (DECT) is to apply two different X-ray energies and hence take advantage of the different spectral properties. This method has become established over recent years and offers the possibility of new and better tissue characterizations (7)(8)(9)(10)(11)(12)(13)(14)(15). By acquiring two CT datasets either simultaneously or one immediately after another, DECT enables the concurrent acquisition of two attenuation maps at low- and high-energy spectra (10,(15)(16)(17). Although different tissues may show quite similar attenuation (in terms of CT numbers) at a certain energy level (7,8), they may show large differences in attenuation at other energy levels because of their individual electron binding energies (9,10,12). The two main reasons for this effect are Compton scattering and photoelectric absorption (5, 6, 11-13, 15, 18). Compton scatter is nearly independent of the photon energy, depending primarily on the electron density of the material, and it predominates at high energies. Photoelectric absorption, however, is highly dependent on the electron binding energy, which in turn is highly dependent on the atomic number of the examined element (3-6, 13, 18).
The probability of photoelectric-absorption increases substantially when the energy of a photon approaches the individual binding energy of a material. On the basis of these effects, different tissues may show large differences in their probabilities for photoelectric interactions, and the individual spectral characteristics may assist in the characterization of different tissues on DECT (9)(10)(11)(12)(14)(15)(16)(17)(18).
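As a compact way of summarizing the two interaction mechanisms described above, the energy dependence of the linear attenuation coefficient is often written in the two-component (Alvarez-Macovski type) form shown below. The exponent n and the basis functions are the commonly cited approximation and are not values taken from this study.

```latex
% Two-component (photoelectric + Compton) parameterization of the linear
% attenuation coefficient; f_KN is the Klein-Nishina function describing
% Compton scatter, n is approximately 3-4, and alpha, beta are constants.
\[
  \mu(E) \;\approx\; \rho_e \left( \alpha \,\frac{Z_{\mathrm{eff}}^{\,n}}{E^{3}} \;+\; \beta \, f_{\mathrm{KN}}(E) \right)
\]
```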
The purpose of this study was to evaluate multiparametric analysis of different tissues on DECT-scans implemented in a standardized clinical routine setting over the course of a year. The tissues evaluated were abdominal organs (liver, spleen, kidney) and muscle, as all these materials have quite similar elemental compositions and may be challenging for automated segmentation methods using single-energy CT images. Additionally, abdominal fat tissue was measured, which should be easier to separate from soft tissue because of its lower density (1). By using dedicated software (syngo.via ® Dual Energy, Siemens Healthineers, Forchheim, Germany), CT images acquired at two different energy levels can be decomposed into two new material-specific images by the application of spectral processing techniques. The results can be presented as either Z eff and ρ e images, or as one image representing high atomic number material (e.g., iodine) and the other image representing soft tissue, which also permits analysis of the fat fraction. A future idea is to create automated segmentation techniques for abdominal imaging, using dedicated trained software for the automatic recognition of abdominal organs.
We, furthermore, analyzed the effect of a contrast agent on the different DECT parameters of each organ, as this information could contribute to future studies researching pathological changes in organs. It is likely that angiopathic-ischemic lesions with reduced blood circulation or blood clots in the vascular system of an organ will lead to low contrast agent uptake in the respective organ.
Patients and Methods
This retrospective study was performed using routinely acquired clinical DECT-scans of human participants. The requirement for informed patient consent for this retrospective study was waived by the local ethics committee (EKNZ 2018-01641). In addition, written informed consent was obtained for the research use of the clinical data of each participant.
Study population. This study was performed at a public hospital using data acquired from patients between February 2018 and February 2019. The institution has subspecialty-trained radiologists dedicated to the analysis of CT and DECT images. The inclusion criteria for subject selection were patients above 18 years who provided written informed consent and underwent one of two defined standardized DECT-protocols. All patients were referred for DECT due to suspected kidney/ureter stones or work-up of hematuria. Sixteen DECT scans from 16 patients were excluded because the patients received two non-contrast and two contrast-enhanced DECT scans during the study period. In total, imaging data from 331 different patients were included in the study.
Imaging modalities. Dual-energy CT was performed with a SOMATOM Definition Flash second-generation dual-source CT scanner (Siemens Healthineers). Two standardized DECT protocols for the abdomen (non-contrast and contrast-enhanced) were included in the study. The standard-of-care imaging was a noncontrast DECT scan of the abdomen in patients with suspected kidney/ureter stones and a combined non-contrast and venous-phase contrast DECT scan of the abdomen for work-up of hematuria. Axial spiral datasets of the abdomen were acquired during a single breath-hold.
Image evaluation. All dual-energy images were evaluated using syngo.via ® software (syngo Dual Energy, Siemens Healthcare GmbH 2009-2018, Version 05.01.0000.0030, Erlangen, Germany). Standardized segments of each organ were measured by one reader using standardized regions of interest (ROIs). The measured DECT parameters included HU values of the mixed 100 kV and 140 Sn kV images (HU mixed ; representing a 120 kV image), electron density relative to water (ρ e ), effective atomic number (Z eff ), and fat fraction (FF). Image analysis and measurement processing were performed under the supervision of a board-certified radiologist.
The readers were free to choose optimal window settings for the image analysis. After visual assessment, the parameters were measured using ROIs set in the liver, spleen, kidney, muscle, and fat tissue. All parameters were evaluated on DECT scans with and without contrast agent. For each tissue, at least two ROIs of between 0.5 and 4 cm 2 were placed on axial image views. Contact with macroscopic vessels or organ capsules was avoided. In the liver, three ROIs were placed in the periphery of the right hepatic lobe (hepatic segments VIII, VI, V) and one ROI was placed in the left hepatic lobe (III). In the spleen, one ROI was set in the upper half and one in the lower half. In each kidney, three ROIs were set in the cortex between the upper and lower pole, avoiding contact with the renal pelvis. For muscle measurements, the psoas, obturatorius, sartorius, or gluteus muscles were measured on both sides, avoiding muscles showing signs of fatty degeneration or atrophy in the image reporting. Fat tissue was measured in subcutaneous adipose tissue, preferably in the lumbar region. As several ROIs were placed in each organ, the measurements were averaged for each participant and organ. Figure 1 shows the application profiles used in syngo.via ® software. Measurements of attenuation were taken on default view images. Afterwards, the specific DECT parameters (ρ e , Z eff , FF) of each tissue were extracted using the application profiles "Rho/Z" and "fat map" (subcategories of the application profile "Liver VNC"). Both profiles create material decomposition images and display all structures in a color-coded overlay. The application class "Liver VNC" is based on an iodine subtraction using a material decomposition algorithm encoding the iodine of the contrast agent (19,20). Consequently, every voxel is decomposed into iodine, fat, and soft tissue maps. The Liver VNC application offers the possibility of quantifying the fat fraction of each material in a certain ROI using the sub-algorithm "fat map". The application class "Rho/Z" provides maps of the electron density relative to water (ρ e ) and the effective atomic number (Z eff ).
Statistical analysis. Statistical analysis was performed using R software version 4.0.0 (2020-04-24). Summary statistics for each organ were created separately, with and without contrast agent, and included the number of observations on which the calculations were based, minimum, 1 st and 3 rd quartile, median, mean, maximum, standard deviation, and interquartile range. Potential interactions between organs and contrast agent were taken into account by performing analyses separately with and without contrast agent. Independent generalized estimation equation models (GEEs) were fitted to quantify the differences between the organs with respect to the parameters HU mixed , FF, ρ e , and Z eff , and to assess the effects of contrast agent on the parameters. The reported (unadjusted) 95% Wald-type confidence intervals are based on the sandwich variance estimator, thus taking intraindividual correlation into account. As there were at most two measurements, the choice of correlation structure was irrelevant.
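The GEE fitting step can be sketched as follows. The study used R, so this Python/statsmodels version with illustrative column names (`value`, `contrast`, `patient_id`) is only an assumed equivalent of the described analysis, not the authors' code.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_gee(df, outcome: str = "value"):
    """Fit an independence-structure GEE of one DECT parameter on contrast status,
    clustering repeated measurements within patients. GEE standard errors use the
    robust (sandwich) covariance estimator, as in the analysis described above."""
    model = smf.gee(
        f"{outcome} ~ contrast",          # contrast: 0 = non-enhanced, 1 = enhanced
        groups="patient_id",
        data=df,
        cov_struct=sm.cov_struct.Independence(),
        family=sm.families.Gaussian(),
    )
    result = model.fit()
    return result.params, result.conf_int()   # estimates and 95% Wald-type CIs
```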
Results
Study participants. The full data set consisted of a total of 2,420 measurements from 331 patients. One hundred and seventy-eight patients underwent DECT with contrast agent (n=40) or without contrast agent (n=138), whereas 153 patients underwent DECT scans with and without contrast agent during the same session. In total, we report on 291 non-contrast DECT scans and 193 contrast-enhanced DECT scans. Measurements of the spleen were not possible in some cases because of splenectomy. Figure 2 presents a flow chart of the patient selection process.
Data analysis. All parameters were successfully measured in all regions of interest in each organ or tissue. The measured HU Rho values of the "Rho/Z" application profile were successfully transformed into the electron density relative to water (ρ e ) using the formula described by Saito et al. (21). The color-coded fat overlay images of the application profile "Liver VNC -fat map" showed visually clear differences between the different included tissues, with different levels of fat content. Abdominal organs were displayed in blue (no or low fat) and fat tissue in yellow to black (fat).
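The conversion from dual-energy HU values to relative electron density referenced above follows, to our understanding, the single linear relationship proposed by Saito. The weighting factor alpha and the calibration constants a and b in this sketch are scanner-specific placeholders, not the values used in the study.

```python
def relative_electron_density(hu_high: float, hu_low: float,
                              alpha: float, a: float = 1.0, b: float = 1.0) -> float:
    """Saito-type conversion: a weighted dual-energy subtraction
    delta_HU = (1 + alpha) * HU_high - alpha * HU_low
    is mapped linearly to the electron density relative to water,
    rho_e = a * delta_HU / 1000 + b.
    alpha, a and b must be calibrated for the scanner and kV pair used
    (placeholder values here)."""
    delta_hu = (1.0 + alpha) * hu_high - alpha * hu_low
    return a * delta_hu / 1000.0 + b
```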
Statistical analysis. The summary statistics for the DECT scans without contrast agent, representing the number of observations on which the calculations are based, minimum, 1 st and 3 rd quartiles, median, mean, maximum, standard deviation, and interquartile range, are shown in Table I. Summary statistics for DECT scans with contrast agent are shown in Table II.
The effect of contrast agent on the variables was separately assessed for each organ, and the results are shown in Table III. We report the estimated difference between non-contrast and contrast-enhanced DECT scans for each parameter and organ, as well as the 95% confidence intervals. Taking every variable into account, muscle and fat tissue showed the lowest differences, whereas kidney and spleen showed the highest differences when non-contrast and contrast-enhanced results were compared. Between-organ differences in the parameters were quantified separately for DECT scans performed with and without contrast agent. Results of the analyses of scans without contrast agent are shown in Table IV, whereas those with contrast agent are shown in Table V. Again, we report the estimated difference as well as the 95% confidence intervals. Comparisons of the different organs within non-contrast DECT acquisitions revealed only low differences in HU mixed between liver and muscle, and in Z eff between liver, muscle, and spleen. On contrast-enhanced DECT scans, low differences were shown in HU mixed between liver and spleen, in Z eff between liver and spleen, and in FF between liver, spleen, and muscle. The value ranges given above showed large overlaps for the different tissues within the same acquisition. Overlaps are visualized in graphs in Figure 3. We also plotted the measured values of HU mixed , Z eff and FF for liver, spleen, kidney and muscle in a 3D graph (Figure 4); the overlaps of liver, spleen and muscle are especially apparent here. In Figure 3 and Figure 4, fat tissue was not included in the visualization since all values of fat were highly significantly different from those of all other tissues.
Comparing the mean values in non-contrast DECT scans, HU mixed showed no significant difference only between spleen and muscle (p=0.99). Mean values of ρ e showed no significant difference between liver and muscle (p=0.24), and Z eff showed no significant differences between liver and spleen (p=0.32) or between liver and muscle (p=0.16). For FF and all other tissue comparisons, the results showed statistically significant differences (p<0.05).
Comparison of the mean HU mixed values in contrast-enhanced scans showed no significant differences between liver and spleen (Table V).
Discussion
This retrospective analysis of clinical dual-energy CT scans (DECT) reports a statistical summary of CT values for different abdominal tissues with and without the use of contrast agent in a 12-month cohort. Furthermore, we evaluated the estimated differences between abdominal organs, as well the effects of a contrast agent on different CT parameters. A current focus of interest in radiology is the idea of automated segmentation techniques that can automatically recognize different organs and tissues. Therefore, it is of great importance to gather data from many clinical study populations and determine the generally observed distributions to derive normal values. Furthermore, we were able to document the effects of contrast agent on the DECT values of abdominal organs. We are convinced that angiopathic-ischemic lesions with reduced blood circulation, or blood clots in the vascular system of an organ, lead to low uptake of contrast agent by the respective organ. Thus, estimating the effects of contrast agent on different DECT variables is of major interest. By applying the DECT method with low-and high-energy spectra, we obtained additional parameters for tissue characterization, such as the CT number for mixed images at different tube voltages (HU mixed ), the electron density relative to water (ρ e ), the effective atomic number (Z eff ), and the fat fraction (FF). In contrast, only HU values are generated on single-energy CT, which is a disadvantage, because tissues are generally likely to show more differences if more variables are involved. Thus, we examined DECT scans to obtain more knowledge on organic structures and improve organ characterization.
We could determine the effects of the contrast agent on each DECT parameter in each organ. Whereas muscle and fat tissue showed the lowest changes in contrast-enhanced scans for each parameter, owing to their low uptake of contrast agent, the kidneys and spleen showed the highest changes of the included organs. This is most likely a result of both organs having a high blood circulation, but also because the CT protocol used was dedicated to work-up of the urinary tract, and contrast agent was therefore administered using a split-bolus technique. Consequently, our results support the assumption that pathologically ischemic organic changes are detectable using our DECT protocol. Relative to the units of the measured variables, Z eff and HU mixed were the most affected by contrast agent because both parameters are affected by the atomic number of elements. Thus, the effects of the iodine-based (atomic number of iodine, Z=53) contrast agent were measurable in the collected DECT variables. We generated the 95% confidence intervals for each organ and DECT variable before and after contrast agent application, which allows values obtained from patients to be checked; patients with values below or above the confidence interval should be observed more closely for pathological changes in the vascular system feeding the organ or within the organ. We also measured an effect of contrast agent on the FF. Although this parameter is derived from the material decomposition algorithm of the syngo.via ® application profile "Liver VNC", which extracts the iodine and creates virtual non-contrast images, we still noticed differences in the FF in response to contrast agent administration. We assume that these differences are an error due to contrast agent application, with the software not fully extracting the iodine attenuation from the virtual non-contrast images.
Our study also revealed that the variable ρ e is not an ideal parameter to work with alone for tissue characterization. Application of the formula described in Saito et al. (21) results in very small parameter values; thus, differences can to some extent only be recognized at the third decimal place. Furthermore, because ρ e is directly derived from HU, it contributes no further information about the organic structure of tissue.
Former studies have already shown that DECT facilitates the differentiation of different materials (7, 8, 10, 15-17, 22, 23). However, these earlier studies referred predominantly to the differentiation of materials with different compositions, for example kidney stones composed of uric acid or calcium (17,24,25). The differentiation of soft tissues is an ambitious idea and may not yet be possible (10,22), as soft tissues show no major differences in attenuation of X-rays at different energy levels (16,22). However, even when materials show similar attenuation, they can still show different electron densities and elemental compositions, which may lead to their separation based on material qualities (7,8). Therefore, we tested whether soft tissues can be separated from each other using the aforementioned DECT parameters or the averaged Hounsfield units obtained from mixing of the two different energy spectra. By measuring values in a patient cohort collected over the course of a year, we were able to estimate differences and confidence intervals within which the true differences between organs lie. After comparing all collected variable values for the spleen, liver, and muscle, we found only low differences on non-contrast images. Nevertheless, the contrast-enhanced DECT scans showed greater differences, with the variable values extracted from muscle differing substantially from those of liver and spleen because of the low contrast uptake of muscle. Thus, automated organ detection may be possible when contrast agent is applied. Nevertheless, when comparing the liver to spleen, we only detected small differences in FF and Z eff in non-contrast images, and also in HU mixed , FF, and Z eff in contrast-enhanced images. The estimated differences in HU mixed between liver and spleen were higher in non-contrast images than in contrast-enhanced images. Nevertheless, we are convinced that there is potential to increase the differences between these tissues in contrast-enhanced images by refining the CT protocol and the contrast agent application. For example, the spleen and liver should show greater differences in images acquired during arterial-phase CT.
A recent study by Hunemohr et al. also measured HU, ρ e , and Z eff on DECT scans (16). They performed scans at 120 kV and measured the parameters three times in kidney, liver, muscle, and three different fat tissues. The HU values ranged from 43 to 63 HU for liver, 41 to 46 HU for kidney, 40 to 44 HU for muscle, and −98 to −55 HU for fat tissue. The values for ρ e were 1.05-1.07 for liver, 1.05 for kidney, 1.05 for muscle, and 0.93-0.97 for fat tissue. Z eff ranged from 7.26 to 7.30 for liver, 7.27 to 7.36 for kidney, 7.21 to 7.31 for muscle, and 6.10 to 6.48 for fat tissue. Their values for ρ e and Z eff are similar to those in our study because the tube voltage of 120 kV corresponds with our mixed 100 kV and 140 Sn kV images. Nevertheless, the study sample of Hunemohr et al. was smaller than ours and did not aim to generate normal distributions. By taking measurements from a large sample, as in our study, it can be assumed that patients with values around the median are likely to correspond to healthy adults. Additionally, we calculated the first and third quartiles, as pathological organic changes could possibly be present in patients with values below or above these limits.
Our study has several limitations. First, even though the tube voltages were fixed, tube current modulation was enabled on the scanner, since we used clinical protocols for our retrospective analyses. The aim of this algorithm is to maintain a global image noise level, but we could not take into consideration dose variations and their effect on the different parameters evaluated. Second, our standardized scan protocol included a split-bolus administration of contrast medium, i.e., two consecutive contrast peaks, which might have influenced organ attenuation due to overlap of early and late contrast phases. Third, our normal-value dataset was generated using vendor-specific software tools, which may be difficult to transfer to other technical equipment.
To our knowledge our work is the first attempt to generate normal values for parenchymatous abdominal organs using a standardized dual energy CT imaging protocol. Our data was aimed at multiparametric evaluation that enables multidimensional analysis. Such data may be crucial for the development of deep learning powered algorithms that allow for automatic tissue characterization. In future studies, we aim to examine whether measurements of the variables HU mixed , Z eff , and FF are suitable for diagnosing disease. Studies involving the collaboration of different institutes of pathology should be pursued. As a histological examination is the gold standard for diagnosis in many diseases, we should perform correlation analyses between the histological results and radiological examinations. It is conceivable that the microscopic changes a disease can cause may also affect the measured parameters. Diseases leading to increased organ density, such as fibrosis or cirrhosis of the liver, should consequently lead to higher CT numbers and ρ e , whereas diseases with lower organ density such as liver steatosis, should lead to lower CT numbers and ρ e , but a higher FF.
Conclusion
Organ characterization using multiparametric dual energy CT analysis is possible. The measurable CT parameters showed differences before and after contrast agent application and may, therefore, be useful for detecting pathological vascular changes. The increased number of parameters obtained from DECT in comparison to single-energy CT led to improved organ characterization and shows promise for future automated segmentation techniques. To our knowledge this is the first attempt to provide normal values for characterization of parenchymatous organs.
Conflicts of Interest
The Authors declare no conflicts of interest.
Authors' Contributions
Diana Kreul (first author): Study design, literature research, data analysis, editing and writing of the article. Tilo Niemann (corresponding author): Literature research, editing, writing and proofreading of the article. Rahel Kubik: Data analysis and proofreading of the article. John Froehlich and Michael Thali: Coediting, proofreading of the article. All Authors made pertinent contributions to the article, and proofread and approved the final article before submission.
ACTA1 is inhibited by PAX3-FOXO1 through RhoA-MKL1-SRF signaling pathway and impairs cell proliferation, migration and tumor growth in Alveolar Rhabdomyosarcoma
Alveolar Rhabdomyosarcoma (ARMS) is a pediatric malignant soft tissue tumor with skeletal muscle phenotype. Little work on skeletal muscle proteins in ARMS has been reported. PAX3-FOXO1 is a specific fusion gene generated from the chromosomal translocation t (2;13) (q35; q14) in most ARMS. ACTA1 is the skeletal muscle alpha actin gene whose transcript was detected in ARMS. However, ACTA1 expression and regulation in ARMS have not been well investigated. This work aims to explore the expression, regulation and potential role of ACTA1 in ARMS. ACTA1 protein was detected in the studied RH30, RH4 and RH41 ARMS cells. ACTA1 was found to be inhibited by PAX3-FOXO1 at the transcription and protein levels, as shown by western blot, luciferase reporter, qRT-PCR and immunofluorescence assays. The activities of the ACTA1 gene reporter induced by RhoA, MKL1, SRF, STARS or the Cytochalasin D molecule were reduced in the presence of overexpressed PAX3-FOXO1 protein. CCG-1423 is an inhibitor of RhoA-MKL1-SRF signaling, and we observed a synergistic effect between this inhibitor and PAX3-FOXO1 in suppressing ACTA1 reporter activity. Furthermore, PAX3-FOXO1 overexpression decreased the ACTA1 protein level, and knockdown of PAX3-FOXO1 by siRNA enhanced ACTA1 expression. In addition, both MKL1 and SRF, but not RhoA, were also found to be inhibited by PAX3-FOXO1 at the protein level and increased upon knockdown of PAX3-FOXO1 expression. The association between MKL1 and SRF in cells was decreased accordingly with ectopic expression of PAX3-FOXO1. However, the distribution of MKL1 and SRF in the nuclear or cytoplasmic fraction was not changed by PAX3-FOXO1 expression. Finally, we showed that ACTA1 overexpression in RH30 cells could inhibit cell proliferation and migration in vitro and impair tumor growth in vivo compared with the control groups. ACTA1 is inhibited by PAX3-FOXO1 at the transcription and protein levels through the RhoA-MKL1-SRF signaling pathway, and this inhibition may partially contribute to the tumorigenesis and development of ARMS. Our findings improve the understanding of PAX3-FOXO1 in ARMS and provide a potential strategy for the treatment of ARMS in the future.
Introduction
Rhabdomyosarcoma (RMS) is the most common soft tissue tumor in children and young adults, with an incidence of about six cases per 1,000,000 population per year [1,2]. Embryonal Rhabdomyosarcoma (ERMS) and Alveolar Rhabdomyosarcoma (ARMS) are the two major morphologic subtypes of RMS, characterized on the basis of their clinical and histopathological features [3,4]. ERMS is more common and responds more favorably to treatment than ARMS. In contrast, ARMS is more aggressive and has a worse outcome than ERMS. Specifically, most ARMS are characterized by chromosomal translocation of either t (2; 13) (q35; q14) or t (1; 13) (q36; q14), mainly generating PAX3-FOXO1 and PAX7-FOXO1 fusion genes, respectively. These fusion genes encode the chimeric proteins PAX3/7-FOXO1, which consist of the N-terminal DNA-binding domain of PAX3/7 and the C-terminal transactivation domain of the FOXO1 protein [5,6]. Both PAX3-FOXO1 and PAX7-FOXO1 are expressed at higher levels and have more potent transcriptional activities than the wild-type PAX3/7 proteins in ARMS tumors. However, PAX3-FOXO1 is more common, accounting for about 55% of ARMS cases compared with 22% for PAX7-FOXO1, and is associated with a worse prognosis and lower overall survival rate in this disease [7][8][9].
Numerous studies have shown that PAX3-FOXO1 is oncogenic and involved in ARMS tumorigenesis [10][11][12]. Exogenous expression of PAX3-FOXO1 could cause the transformation of chicken embryo fibroblast cells to become enlarged and grow in multiple layers [10]. In the study using immortalized human myoblast, cells expressing PAX3-FOXO1 protein developed tumor in immunocompromised mice [11]. Knockdown of PAX3-FOXO1 expression by siRNA oligonucleotide in ARMS cells reduced the cell motility, inhibited the rate of cellular proliferation and induced the muscle differentiation [12]. However, the detailed mechanism of PAX3-FOXO1 implicated in ARMS tumorigenesis is still not fully understood.
Skeletal muscle alpha-actin protein (ACTA1), encoded by ACTA1 gene, belongs to the actin protein family consisting of six isoforms in human [13]. ACTA1 isoform is the major component in skeletal muscle thin filament of sarcomere and is essential for force production, muscle contraction and movement [13,14]. ACTA1 expression is developmentally and transcriptionally regulated in vivo.
In chicken skeletal muscle, vascular actin (ACTA2) is the first muscle actin to be expressed in the myotome, then ACTA2 is downregulated and cardiac actin (ACTC) expression increases. At the time of birth, cardiac actin expression is downregulated and ACTA1 expression is increased and remains the major isoform in adult skeletal muscle [15,16]. A similar developmental process occurs for ACTA1 in human skeletal muscle [17,18]. At the transcriptional level, ACTA1 expression is mainly modulated by serum response factor (SRF) [19,20]. SRF is a MADSbox transcription factor that is highly conserved and ubiquitously expressed and can regulate muscle-specific gene expression by binding to the CC(A/T) 6 GG consensus sequence (also called CArG box) within the promoter region of target genes [21]. SRF controls ACTA1 transcription and expression by binding to CArG box and associating with the coactivator myocardin-related transcription factor A (MRTF-A/MKL1/Mal/BSAC). MKL1 is one member of the MRTF family which consists of myocardin, MKL1 and MKL2. MKL1 acts as a cofactor to associate with SRF and stimulate SRF-dependent target gene transcription. MKL1 activity is modulated by actin dynamics. MKL1 is localized to the cytoplasm by directly binding to monomeric globular-actin (G-actin) through the N-terminal RPEL domains, but once actin polymerization to form filamentous actin (F-actin) occurs in response to Rho signaling, MKL1 translocalizes into and accumulates in nucleus, where it activates the transcription of SRF target genes such as ACTA1 [22][23][24].
Being a structural component of skeletal muscle, ACTA1 is also implicated in a variety of muscle diseases. ACTA1 knockout in mice causes muscle weakness and death in the early neonatal period [15,25]. Amino acid mutations in ACTA1 protein are responsible for the congenital myopathies with muscle weakness such as nemaline myopathy (NM), intranuclear rod myopathy (IRM) and actin myopathy (AM) [13]. However, few studies have been reported about the behavior of ACTA1 in cancer disease, especially in ARMS.
Here, we firstly examined ACTA1 expression and found that ACTA1 was inhibited by PAX3-FOXO1 in ARMS cells. We later analyzed the detailed mechanism and showed that RhoA-MKL1-SRF signaling was involved in this ACTA1 inhibition by PAX3-FOXO1 in ARMS cells. Finally, we determined the potential role or function of ACTA1 in ARMS and the in vitro and xenograft assays improved the understanding of PAX3-FOXO1 in ARMS and provided a potential strategy for the treatment of ARMS in future.
In vitro and in vivo assays showed that ACTA1 overexpression could suppress cell proliferation and tumor growth. Therefore, our data provide new insight into the tumorigenesis and progression of ARMS and a potential strategy for ARMS treatment or prognosis in future.

Keywords: ACTA1, PAX3-FOXO1, Alveolar rhabdomyosarcoma, RhoA-MKL1-SRF signaling pathway, Cell proliferation, Tumor growth
All the constructs were verified by sequencing. Antibodies used were: anti-ACTA1 (No.17521
Luciferase reporter assay
Human alveolar rhabdomyosarcoma cells RH30, RH4 or RH41 were seeded in 24-well plates at a density of 0.5–1.0 × 10⁵ cells in 0.5 ml DMEM antibiotic-free growth medium and transiently transfected using Lipofectamine 2000 reagent according to the manufacturer's instructions. A total of 0.35–0.40 µg of plasmid DNA per well, comprising 50 ng of reporter, 2 ng of Renilla luciferase plasmid (pRL-TK internal control, Promega) and 300–350 ng of PAX3-FOXO1 (PAX3-FKHR) or other indicated expression plasmids, was used for each transfection. The empty vector pcDNA3.1 or pCMV-Tag-2B was used to keep the total amount of plasmid equal across transfections. Cells were placed in growth medium overnight after transfection and then serum-starved (DMEM with 0.3% FBS) for 24–36 h before luciferase activity was analyzed. Firefly luciferase activity was measured using the Dual-Luciferase assay kit (Promega) with a luminometer (Lumat LB 9507, Berthold Technologies) and normalized to Renilla luciferase activity. The activity difference was expressed as the fold change relative to the activity obtained from the empty vector control, which was set to 1. All assays were performed in duplicate and repeated independently at least three times. Error bars indicate the standard error of the mean (SEM) of the duplicate samples assayed.
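For readers reproducing the normalization above, the following is a minimal Python sketch of how firefly counts can be normalized to the Renilla internal control and expressed as fold change over the empty-vector control; the function name and the numerical values are illustrative and are not taken from the study.

```python
import numpy as np

def fold_change(firefly, renilla, control_firefly, control_renilla):
    """Normalize firefly counts to the Renilla internal control and express
    activity as fold change relative to the empty-vector control (set to 1)."""
    normalized = np.asarray(firefly, dtype=float) / np.asarray(renilla, dtype=float)
    control = control_firefly / control_renilla
    return normalized / control

# Illustrative duplicate wells for one PAX3-FOXO1 dose point
fc = fold_change(firefly=[5200, 4800], renilla=[210, 200],
                 control_firefly=10500, control_renilla=205)
print(fc.mean(), fc.std(ddof=1) / np.sqrt(fc.size))   # mean fold change and SEM
```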
Immunofluorescence assay
Cells grown on glass coverslips in 24-well plates were transiently transfected with empty vector, MKL1 and/or PAX3-FOXO1 using Lipofectamine reagent. Cells were maintained in DMEM with 0.3% FBS for 24 h after transfection. For staining, cells were washed twice with PBS and fixed in 4% paraformaldehyde/PBS for 20 min at room temperature, then blocked with 3% donkey serum/0.3% Triton X-100/0.05% Tween-20/PBS for 1 h, followed by incubation with primary antibodies at 4 °C overnight. Coverslips were subsequently incubated with Alexa Fluor 488- and/or Alexa Fluor 633-conjugated donkey anti-mouse or donkey anti-rabbit secondary antibodies (1:1000 dilution, Biotium) for 1 h at room temperature. Nuclear DNA was stained with DAPI in PBS for 10 min. Cells were observed and imaged under a Leica TCS SP5 confocal scanning laser microscope (Leica Laser Technik GmbH, Germany).
Western blot analysis
Cells transfected with increasing amounts of PAX3-FOXO1 expression plasmid were harvested and lysed in RIPA buffer (Beyotime, Jiangsu, PRC) containing protease inhibitor cocktail (Thermo Scientific) and 1 mM phenylmethylsulfonyl fluoride (PMSF). The cell lysates were centrifuged at 12,000 rpm for 20 min at 4 ℃ and the protein concentrations were quantified with a BCA assay kit (Beyotime, Jiangsu, PRC). Nuclear and cytoplasmic proteins from the transfected cells were obtained with a nuclear and cytoplasmic protein extraction kit according to the manufacturer's instructions (Beyotime, Jiangsu, PRC). A total of 50 to 100 µg of protein per lane was separated by 10% SDS-PAGE and electroblotted onto nitrocellulose or polyvinylidene difluoride (PVDF) membranes. The membrane was blocked with 5% non-fat milk in TBST, then incubated sequentially with primary antibody overnight at 4 ℃ and with horseradish peroxidase-conjugated secondary antibody for 1 h at room temperature. Protein bands were detected using an enhanced chemiluminescent substrate (Thermo Scientific). Digital chemiluminescent (blot) images were captured with a GE LAS 4000 chemiluminescence imager. The densities of protein bands were quantified using ImageJ software.
In vitro wound healing assay
The stable cell line RH30/ACTA1 or the control RH30/vector was seeded in 6-well plates at about 3 × 10⁵ cells per well in DMEM growth medium. When the cells reached 90% confluence and formed a monolayer, a scratch was made across the center of each well with a 200 µl sterile pipette tip. The cells were gently washed twice with PBS to remove floating cells and incubated in serum-free DMEM for 24–48 h. Cell movement into the scratch was photographed under a microscope at 0, 24 and 48 h. The average migration distance was measured using ImageJ software.
Transwell migration assay
The in vitro transwell migration assay was performed by seeding the stable cell line RH30/ACTA1 or the RH30/vector control into the upper insert chamber of a 24-well plate (filter with 8-µm pores, Corning Costar, MA, USA) at 4 × 10⁴ cells in 200 µl serum-free DMEM. 600 µl of DMEM with 20% FBS was added to the lower chamber. After 36–48 h of incubation at 37 ℃, cells remaining in the upper chamber were removed with a cotton swab, and cells that had migrated onto the lower surface of the filter were fixed with 4% paraformaldehyde (PFA), stained with 0.3% crystal violet for 30 min and counted in 3–5 different fields under an inverted microscope (100× magnification).
In vivo tumor growth assay
The tumor growth assay was performed using 5–8-week-old male athymic BALB/c mice purchased from Shanghai Slac Laboratory Animal Co. Ltd. (Shanghai, PRC). The mice were maintained in a specific pathogen-free animal care facility at the Tongji University Animal Experimental Center. Briefly, RH30/vector control and RH30/ACTA1 stable cells were trypsinized, counted and resuspended in PBS. Then 2 × 10⁶ cells in 200 µl PBS were subcutaneously injected into the hind limbs of the mice. A total of ten mice were used; each was injected with RH30/vector control cells in the left flank and RH30/ACTA1 cells in the right flank of the hind legs. Tumor growth was monitored every other day and tumor dimensions were measured with a digital caliper twice a week. The tumor volume was calculated according to the formula V = 0.52 × L × W², where L and W represent the length and width of the tumor, respectively. At the end of the experiment, the mice were sacrificed and the tumors were harvested, frozen and stored at −80 ℃ for further analysis. All mouse work was carried out according to protocols approved by the Committee on the Ethics of Animal Experiments of Tongji University.
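The tumor volume formula above is straightforward to apply programmatically; a minimal sketch follows, with illustrative caliper measurements rather than data from this study.

```python
def tumor_volume(length_mm, width_mm):
    """Ellipsoid approximation used above: V = 0.52 * L * W^2, in mm^3."""
    return 0.52 * length_mm * width_mm ** 2

print(tumor_volume(16.4, 11.0))   # ≈ 1032 mm^3 for a 16.4 × 11.0 mm tumor
```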
Tumor immunohistochemistry analysis
Frozen sections from xenograft tumor samples derived from RH30/vector and RH30/ACTA1 cells were cut at 7–15 µm thickness with a cryostat microtome and analysed by immunohistochemistry following standard IHC staining procedures. The antibodies used for staining were Ki67 (ER1706-46, 1:400, Hubio, PRC) and Flag (0912-3, 1:50, Hubio, PRC). H&E staining was used to assess cell morphology. The stained slides were imaged and evaluated by experienced pathologists at Shanghai East Hospital and at the Huabio service center, respectively.
Statistical analysis
Experiments were repeated at least three times and data are expressed as mean ± SEM. Differences between two groups were analyzed by two-tailed Student's t test. Differences with P < 0.05 (*) or P < 0.01 (**) were considered statistically significant.
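A minimal sketch of the comparison described above, using SciPy's independent two-sample Student's t test; the group values are illustrative, not measured data.

```python
import numpy as np
from scipy import stats

vector = np.array([1.00, 1.08, 0.95])   # e.g., normalized values for RH30/vector
acta1 = np.array([0.71, 0.78, 0.66])    # e.g., normalized values for RH30/ACTA1

t_stat, p_value = stats.ttest_ind(vector, acta1)   # two-tailed by default
label = "**" if p_value < 0.01 else "*" if p_value < 0.05 else "n.s."
print(f"t = {t_stat:.2f}, p = {p_value:.4f} ({label})")
```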
Protein expression of ACTA1 in Alveolar Rhabdomyosarcoma (ARMS) cells
ACTA1 is a highly conserved gene encoding a protein whose amino acid sequence is nearly identical from rice to human [13]. Although the ACTA1 transcript had been described in some RMS tumor samples [28], little attention has been paid to its protein expression and regulation in ARMS cells. By western blot analysis, we determined ACTA1 protein expression in ARMS cell lines and found that it differed among them to a certain extent (Fig. 1a, left and middle). ACTA1 protein was expressed at relatively higher levels in RH4 and RH41 cells than in RH30 cells. mRNA levels determined by quantitative RT-PCR exhibited an expression pattern similar to that of ACTA1 protein in these ARMS cell lines (Fig. 1a, right). Because of the lack of information about ACTA1 in rhabdomyosarcoma in public databases, we analyzed ACTA1 expression in the related sarcoma dataset (which includes rhabdomyosarcoma, liposarcoma and other subtypes) from TCGA using the online UALCAN program (http://ualcan.path.uab.edu) [27]. As indicated in Fig. 1b, the ACTA1 transcript is significantly decreased in primary tumors compared to normal tissue (p = 1.25E-02), despite the low number of normal samples available. Patients with higher ACTA1 expression display a slight increase in the 3- or 5-year survival probability compared to those with lower expression, although the difference over the 10-year time course is not significant (p = 0.72; Fig. 1c). These analyses nevertheless suggest a possible link between ARMS and lower ACTA1 expression.
ACTA1 is inhibited by PAX3-FOXO1 at transcriptional and translational levels in ARMS cells
PAX3-FOXO1 is an ARMS-specific fusion gene that encodes a transcription factor. To assess the possible regulation of ACTA1 by PAX3-FOXO1 in ARMS cells, we cloned the promoter region of ACTA1 into the pLuc-MCS vector to obtain a human ACTA1 reporter (546ACTA1) and cotransfected RH30 cells with this reporter, together with the Renilla luciferase control and increasing amounts of PAX3-FOXO1 expression plasmid. Following 24–36 h of serum-free starvation, the luciferase activity of the ACTA1 reporter was measured using the dual-luciferase reporter system. The results indicated that PAX3-FOXO1 decreased ACTA1 promoter activity in a dose-dependent manner compared with the empty vector expression plasmid (Fig. 2a, left). In parallel transfections of RH30 cells in which the 546ACTA1 reporter was replaced by the pLuc-MCS control reporter plasmid, luciferase activity was not significantly changed by PAX3-FOXO1 in comparison to the empty vector control (Fig. 2a, left). Western blotting confirmed that total PAX3-FOXO1 protein levels increased gradually in RH30 cells transfected with increasing amounts of PAX3-FOXO1 plasmid (Fig. 2a, right). This inhibition of ACTA1 reporter activity was also observed in other alveolar rhabdomyosarcoma cell lines, such as RH41 and RH4 cells (Fig. 2b). Consistent with the reporter activity, qRT-PCR analysis showed that the ACTA1 mRNA level in RH30 cells overexpressing PAX3-FOXO1 was also decreased (Fig. 2c).
To further characterize this inhibition of ACTA1 by the fusion gene PAX3-FOXO1, we transiently transfected RH4 cells with various amounts of PAX3-FOXO1 expression plasmid. The cells were serum-starved for 48 h after transfection and harvested to detect ACTA1 protein levels by western blot. As shown in Fig. 2d, ACTA1 protein expression in RH4 cells decreased gradually with increasing amounts of PAX3-FOXO1 expression plasmid, suggesting that PAX3-FOXO1 negatively controls ACTA1 protein expression in RH4 cells. Meanwhile, immunofluorescence analysis of RH4 cells showed the PAX3-FOXO1-dependent reduction of ACTA1 expression more directly (Fig. 2e). In addition, we transfected RH4 cells with an siRNA duplex specifically targeting PAX3-FOXO1 and measured ACTA1 expression. The data in Fig. 2f show that knockdown of PAX3-FOXO1 markedly enhanced ACTA1 protein expression. Together, these data strongly demonstrate that ACTA1 is inhibited by PAX3-FOXO1 in ARMS cells.
PAX3-FOXO1 downregulates the RhoA-MKL1-SRF signaling pathway to inhibit ACTA1 expression
ACTA1 is a major actin isoform in skeletal muscle and has been reported to be a target of serum response factor (SRF), a key protein in the RhoA-MKL1-SRF signaling pathway [19,24]. We therefore investigated whether the RhoA-MKL1-SRF signaling pathway is involved in the inhibition of ACTA1 by PAX3-FOXO1. To this end, we cotransfected RH30 cells with the 546ACTA1 reporter together with MKL1, SRF or constitutively active mutant RhoA (Q63L) expression plasmids, in the presence of PAX3-FOXO1 expression, and determined the luciferase activity under the same conditions as in Fig. 2. In these transfected RH30 cells, MKL1, SRF and RhoA (Q63L) all stimulated ACTA1 reporter activity, but these activities were reduced (at least twofold) by PAX3-FOXO1 expression (Fig. 3a). qRT-PCR analysis showed that the strong MKL1-driven increase in ACTA1 mRNA was likewise decreased by PAX3-FOXO1, and immunofluorescence showed a corresponding decrease in ACTA1 protein (Fig. 3b-c). STARS is an actin-binding protein specifically expressed in striated muscle that promotes MKL1 nuclear accumulation and stimulates the transcriptional activity of SRF [22,29]; we therefore evaluated whether PAX3-FOXO1 affects the role of STARS in RH30 cells. As illustrated in Fig. 3d, STARS enhanced the activity driven by MKL1, but this activity was suppressed by PAX3-FOXO1. Cytochalasin D (Cyto D) is a fungal metabolite that modulates actin dynamics and induces CTGF/CCN2 expression in tubular epithelial cells [30,31]; we therefore examined the effect of Cyto D on ACTA1 regulation in ARMS cells. As shown in Fig. 3e, Cyto D induced the transcriptional activity of the ACTA1 reporter, but this induction was repressed by PAX3-FOXO1 expression. ACTA1 protein was modulated by Cyto D in a similar manner in the presence of PAX3-FOXO1 (Fig. 3f). CCG-1423 is a widely used inhibitor that blocks MKL1 binding to importin α/β1 and inhibits the RhoA-MKL1-SRF signaling pathway [32,33]; we next tested whether PAX3-FOXO1 influences the action of CCG-1423. In RH30 cells co-expressing PAX3-FOXO1 and MKL1 and then treated with CCG-1423, the luciferase activity showed a significant synergistic effect between PAX3-FOXO1 and CCG-1423 (Fig. 3g). A similar synergistic effect between PAX3-FOXO1 and CCG-1423 was also observed in RH41 cells (Additional file 1: Fig. S1). Finally, analysis of a series of deletion mutants of the ACTA1 promoter showed that the regulation of ACTA1 activity by PAX3-FOXO1 in the presence of MKL1 was also CArG box dependent, implying an important association between PAX3-FOXO1 and the RhoA-MKL1-SRF pathway (Fig. 3h). Taken together, these results demonstrate that the RhoA-MKL1-SRF signaling pathway is involved in the inhibition of ACTA1 expression by PAX3-FOXO1.
PAX3-FOXO1 represses the total expression of MKL1 and SRF and thus their interaction, but does not affect their subcellular localization
It is well known that the association of MKL1 and SRF is required for the regulation of muscle-specific target genes controlled by SRF [34,35]. To further understand the inhibition of ACTA1 by PAX3-FOXO1, we investigated by immunofluorescence staining whether PAX3-FOXO1 plays a role in the expression and association of MKL1 and SRF. We overexpressed PAX3-FOXO1 in RH30 cells and examined the expression and colocalization of MKL1 and SRF proteins under laser confocal microscopy. As shown in Fig. 4a, MKL1 (green staining) and SRF (red staining) were expressed in both the nucleus and the cytoplasm, and the colocalization or association (yellow staining) of MKL1 and SRF mainly took place in the cytoplasm or around the nucleus in cells transfected with PAX3-FOXO1 or control plasmid. Importantly, the colocalization (yellow staining) between MKL1 and SRF was significantly reduced in cells overexpressing PAX3-FOXO1 compared to control cells, indicating that the protein levels of MKL1 or SRF might be affected by PAX3-FOXO1. We therefore transfected PAX3-FOXO1 into RH4 cells and determined the expression of MKL1 and SRF (Fig. 4b). The results indicated that PAX3-FOXO1 can indeed repress the protein expression of MKL1 and SRF. Consistent with these data, MKL1 and SRF protein levels were strongly increased in RH4 cells after PAX3-FOXO1 knockdown with siRNAs (Fig. 4c). Meanwhile, we determined the subcellular localization of MKL1 and SRF proteins in cells with increasing amounts of PAX3-FOXO1 overexpression. The data in Fig. 4d, e demonstrate that PAX3-FOXO1 had no significant effect on the subcellular distribution of MKL1 and SRF proteins: the relative quantities of MKL1 and SRF in the nuclear and cytoplasmic fractions were not significantly changed, and both proteins remained primarily localized in the nucleus with increasing PAX3-FOXO1 overexpression. Additionally, RhoA protein levels were measured upon overexpression or knockdown of PAX3-FOXO1, and no obvious alteration of RhoA was detected (Fig. 4f, g). These results strongly suggest that the inhibition of ACTA1 by PAX3-FOXO1 occurs through repression of MKL1 and SRF, but not RhoA, expression within the RhoA-MKL1-SRF signaling pathway.
Ectopic overexpression of ACTA1 inhibits cell proliferation and cell migration
As a major constituent of skeletal muscle, ACTA1 plays important roles in cell contraction, motility, structure and morphological change [14,15,18]. To explore the potential role of ACTA1 in ARMS, we established the stable cell lines RH30/ACTA1 and RH30/vector, with overexpressed or normal levels of ACTA1 protein, respectively. A CCK-8 assay was employed to determine the effect of ACTA1 on cell proliferation, and the results showed that ACTA1 overexpression in RH30 cells (RH30/ACTA1) significantly inhibited cell proliferation after 60 h compared with the control cells (RH30/vector) (Fig. 5a). Given that ACTA1 impaired the cell proliferation rate, we next examined its effect on cell migration using the classic scratch wound healing and transwell methods. In the wound healing assay, we found that ACTA1 expression in RH30 cells moderately decreased migration ability in comparison to the control cells (Fig. 5b). In the transwell assay shown in Fig. 5c, fewer cells reached the lower surface of the filter for ACTA1-overexpressing RH30 cells than for the vector control cells. These assays suggested that ACTA1 overexpression could inhibit cell growth and reduce cell migration ability. Altogether, these results led us to further investigate the potential role of ACTA1 in ARMS tumorigenesis.
Overexpression of ACTA1 suppresses tumor growth in nude mice
To evaluate the role of ACTA1 in ARMS tumorigenesis, we established a xenograft model by subcutaneously inoculating the empty-vector and ACTA1-overexpressing stable cell lines into the left and right hind limbs of male nude mice, respectively. 2 × 10⁶ cells in 200 µl PBS were used for each injection site. About 12 days after injection, palpable solid tumors were visible in the flanks of the mice. Tumor sizes were measured twice a week, and the tumors were harvested about 5 weeks after injection, when the mice were sacrificed. The average size of the tumors resulting from the RH30/ACTA1 cell line was clearly smaller than that from the vector control cells (Fig. 6a). The mean ± SEM tumor volume from the RH30/vector control cells was 1338.21 ± 267.1 mm³ at the end of the experiment. In contrast, the tumor volume from RH30/ACTA1 cells was 969.41 ± 214.4 mm³, suggesting that tumor growth may be suppressed by ACTA1 overexpression (Fig. 6b). Meanwhile, H&E staining of tissue sections showed that these tumors were composed of a large number of small round cells, a characteristic of RMS tumors (Fig. 6d). In addition, overexpression of ACTA1 (Flag-ACTA1) protein was detected in tumors from RH30/ACTA1 cells by western blot and immunohistochemistry analyses (Fig. 6c, e). We also detected differences in Ki67 expression in the tumor sections, with higher levels of Ki67 in tumors derived from RH30/vector control cells than in those from RH30/ACTA1 cells (Fig. 6f), suggesting an inhibitory effect of ACTA1 on tumor growth. Together, the data from the xenograft assay demonstrate that ACTA1 overexpression can suppress tumor growth in nude mice and thus may play a role in the tumorigenesis or progression of ARMS in vivo.
Discussion
RMS is thought to be associated with a skeletal muscle tissue origin [2,36]. However, the roles or functions of skeletal muscle proteins are less well characterized in RMS. ACTA1 is an important member of the skeletal muscle proteins and is coexpressed with cardiac alpha-actin in adult skeletal muscle tissue [18]. Although an early study showed the existence of the alpha skeletal muscle actin gene (ACTA1) transcript in ARMS tumor samples [28], no more detailed work on ACTA1 in ARMS has been reported. In the present work, we first examined ACTA1 protein expression in ARMS cells by immunoblot and showed a possible association between ACTA1 expression and ARMS (Fig. 1). We then investigated the regulation of ACTA1 expression in ARMS cells. To our surprise, we found that ACTA1 could be a novel candidate target gene of PAX3-FOXO1, based on its decreased expression at the transcript and protein levels when PAX3-FOXO1 was overexpressed. It is well known that ACTA1 is activated by SRF and its coactivator MKL1. We therefore tested whether this inhibition of ACTA1 expression by PAX3-FOXO1 is associated with SRF or MKL1. To this end, we co-transfected RH30 cells with the ACTA1 gene reporter (546ACTA1), SRF, MKL1 and/or PAX3-FOXO1 expression plasmids and measured the luciferase activity. As expected, these activities were repressed by PAX3-FOXO1 in comparison to those induced by SRF or MKL1 alone. We also tested the effects of RhoA and STARS on ACTA1 promoter activity and obtained results similar to those for MKL1 and SRF in the presence of PAX3-FOXO1 overexpression. In addition, analysis of the functional site within the promoter region revealed that this inhibition of ACTA1 transcriptional activity was CArG box dependent (Fig. 3h), demonstrating that the inhibitory action of the fusion gene PAX3-FOXO1 is closely related to the RhoA-MKL1-SRF signaling pathway. CCG-1423 is an inhibitor of the RhoA-MKL1-SRF signaling pathway [37], and the strong synergistic effect between PAX3-FOXO1 and CCG-1423 further established the specific function of PAX3-FOXO1 in ACTA1 regulation in ARMS cells. Furthermore, the ACTA1 activity induced by Cytochalasin D could also be blocked by PAX3-FOXO1 in these cells. Finally, measurement of ACTA1 mRNA and protein levels directly showed the inhibitory role of the PAX3-FOXO1 fusion gene in ARMS cells. From these results, we conclude that ACTA1 expression is inhibited or downregulated by PAX3-FOXO1 through the RhoA-MKL1-SRF signaling pathway in ARMS cells.
Many target genes regulated by the PAX3-FOXO1 fusion transcription factor have been identified so far [38][39][40][41], but genes that are inhibited by PAX3-FOXO1 have been relatively less reported or not fully explored in RMS cell systems. To address whether the skeletal muscle alpha-actin gene ACTA1, the newly identified candidate target of PAX3-FOXO1, plays a role in ARMS, we created a stable cell line (RH30/ACTA1) with ACTA1 overexpression and evaluated the effect of ACTA1 on ARMS cells. We observed that ACTA1 overexpression clearly impaired the proliferation rate of RH30 cells compared to the control cells (RH30/vector). The scratch wound healing assay showed that RH30/ACTA1 cells migrated more slowly than the control cells. Finally, the xenograft assay demonstrated that ACTA1 overexpression could suppress tumor growth in the mouse model system used. These results are largely consistent with the observations from PAX3-FOXO1 knockdown by Kikuchi et al. [12], suggesting that ACTA1 might play an important role in ARMS tumorigenesis or development. However, the colony formation assay in our experiments did not show an inhibitory effect of ACTA1 on cell proliferation (Additional file 1: Fig. S2). The presence of other actin isoforms may obscure the behavior of ACTA1 in ARMS cells over the longer incubation period. Whether this implies that ACTA1 may also be associated with apoptosis is unknown. Therefore, it will be interesting to repeat these assays in future using other ARMS cells with ACTA1 expression knocked down or knocked out.
ACTA1 expression in cells is complex and regulated in multiple ways. In cardiomyocytes, ACTA1 transcription is upregulated by serum- and glucocorticoid-inducible kinase 1 (SGK1) and small CTD phosphatase 1 (SCP1) [42,43]. miRNA-26b and Myolinc play negative and positive roles, respectively, in ACTA1 expression in cardiomyocytes and myogenesis [43,44]. All these studies demonstrate the complexity and importance of ACTA1 expression and regulation in cells. The mechanistic analysis in our work further revealed a similar and marked inhibition of MKL1 and SRF expression by PAX3-FOXO1 in RH4 cells, but without alteration of the nuclear accumulation of MKL1 and SRF proteins in RH30 ARMS cells. Meanwhile, RhoA expression in the same signaling pathway was largely unaffected by PAX3-FOXO1. We therefore propose that ACTA1 inhibition by PAX3-FOXO1 could result from the repression of MKL1 and/or SRF expression in ARMS cells. A postulated model describing the inhibition of ACTA1 by PAX3-FOXO1, eventually leading to cell and tumor growth, is shown in Fig. 7. To our knowledge, this is the first report on the expression and regulation of ACTA1 by PAX3-FOXO1 in ARMS cells. These findings help to further understand the involvement of PAX3-FOXO1 in ARMS tumorigenesis. This study also suggests that the RhoA-MKL1-SRF signaling pathway may play an important role in ARMS disease. In future work, the detailed process by which PAX3-FOXO1 represses MKL1 or SRF will be explored, and other effects of ACTA1 expression or of the RhoA-MKL1-SRF signaling pathway on ARMS tumorigenesis will be studied further.
Conclusions
We investigated ACTA1 expression and identified that ACTA1 is inhibited by the PAX3-FOXO1 fusion gene at the mRNA and protein levels in ARMS cells. The mechanism underlying this inhibition involves the RhoA-MKL1-SRF signaling pathway, with repression of MKL1 and SRF, but not RhoA, expression by PAX3-FOXO1. In addition, the nuclear and cytoplasmic distribution of MKL1 and SRF was demonstrated not to be significantly changed by PAX3-FOXO1 expression. The potential role of ACTA1 in impairing cell and tumor growth in ARMS was explored by in vitro and in vivo experiments. Therefore, ACTA1 inhibition by PAX3-FOXO1 may play an important role in the development of ARMS, and appropriate control of ACTA1 expression and/or the RhoA-MKL1-SRF signaling pathway might be a potential strategy for ARMS treatment in future. In the proposed model (Fig. 7), PAX3-FOXO1 cooperates with these molecules in ACTA1 regulation: SRF activates ACTA1 expression by binding to the promoter region of the ACTA1 gene, and overexpressed ACTA1 protein can inhibit ARMS cell proliferation, migration and tumor growth. Therefore, decreased ACTA1 expression caused by PAX3-FOXO1 may help to promote cell proliferation, migration and ultimately tumor growth.
Implications on star-formation-rate indicators from HII regions and diffuse ionised gas in the M101 Group
We examine the connection between diffuse ionised gas (DIG), HII regions, and field O and B stars in the nearby spiral M101 and its dwarf companion NGC 5474 using ultra-deep H$\alpha$ narrow-band imaging and archival GALEX UV imaging. We find a strong correlation between DIG H$\alpha$ surface brightness and the incident ionising flux leaked from the nearby HII regions, which we reproduce well using simple Cloudy simulations. While we also find a strong correlation between H$\alpha$ and co-spatial FUV surface brightness in DIG, the extinction-corrected integrated UV colours in these regions imply stellar populations too old to produce the necessary ionising photon flux. Combined, this suggests that HII region leakage, not field OB stars, is the primary source of DIG in the M101 Group. Corroborating this interpretation, we find systematic disagreement between the H$\alpha$- and FUV-derived star formation rates (SFRs) in the DIG, with SFR$_{{\rm H}\alpha}<$SFR$_{\rm FUV}$ everywhere. Within HII regions, we find a constant SFR ratio of 0.44 to a limit of $\sim10^{-5}$ M$_{\odot}$~yr$^{-1}$. This result is in tension with other studies of star formation in spiral galaxies, which typically show a declining SFR$_{{\rm H}\alpha}/$SFR$_{\rm FUV}$ ratio at low SFR. We reproduce such trends only when considering spatially averaged photometry that mixes HII regions, DIG, and regions lacking H$\alpha$ entirely, suggesting that the declining trends found in other galaxies may result purely from the relative fraction of diffuse flux, leaky compact HII regions, and non-ionising FUV-emitting stellar populations in different regions within the galaxy.
INTRODUCTION
The interstellar medium (ISM) comprises the fuel behind star formation in galaxies. While the stars themselves form from the cold ISM, primarily molecular hydrogen, feedback from this star formation in the form of supernovae, stellar winds, and high-energy photons ensures that much of the ISM exists in a high-temperature, ionised state (McKee & Ostriker 1977; Madsen et al. 2006; Haffner et al. 2009). The ionised ISM thus contains a ledger of the ionising potential of galaxies, a record of which is critical for understanding both the impact of baryonic feedback on galaxy evolution, a necessary constraint on structural cosmological parameters (e.g., Jing et al. 2006; van Daalen et al. 2011; Chisari et al. 2018), and the early evolution of galaxies during the epoch of reionisation, an era now increasingly accessible with the advent of JWST (e.g., Windhorst et al. 2023).
Of that ionised gas, an important component is morphologically diffuse, and so at a glance appears unassociated with any specific ionisation source. Hoyle & Ellis (1963) first proposed the existence of this diffuse ionised gas layer in the Milky Way (MW) based on the detection of a free-free absorption signature in the Galactic synchrotron background by Reber & Ellis (1956) and Ellis et al. (1962). Reynolds et al. (1973) eventually detected this layer directly in Hα and Hβ emission, and later observations with the Wisconsin Hα Mapper (WHAM; Haffner et al. 2003) found that faint Hα emission is ubiquitous in the northern sky to a surface brightness of 0.1 R (∼ 5.7 × 10⁻¹⁹ erg s⁻¹ cm⁻² arcsec⁻²). Gaustad et al. (2001) found similar results in the Southern hemisphere at slightly lower sensitivity (0.5 R). Dettmar (1990) and Rand et al. (1990) first identified extragalactic DIG in the edge-on disk galaxy NGC 891, above and below the disk plane, and subsequently it was found in interarm regions in lower inclination disk and irregular galaxies (Hunter & Gallagher 1990; Walterbos & Braun 1992; Ferguson et al. 1996). It became clear that this diffuse ionised gas (DIG, dubbed the warm ionised medium by McKee & Ostriker 1977) is ubiquitous in star-forming galaxies. It occasionally even appears far outside its putative host (e.g., Devine & Bally 1999; Lehnert et al. 1999; Keel et al. 2012; Watkins et al. 2018).
DIG properties are deeply connected to the structure of the ISM (e.g., Wood et al. 2005; Seon 2009), hence constraining its ionisation source is critical for understanding how such radiation propagates through gaseous media. From the earliest investigations, it was clear that photoionisation must be an important such source. For example, the power necessary to ionise DIG is comparable to the total power injected by luminous young stars and supernovae, both in the MW (e.g., Reynolds 1990) and elsewhere (e.g., Ferguson et al. 1996).
In most DIG, little extra heating beyond photoionisation is required to model its observed spectra (Domgörgen & Mathis 1994; Mathis 2000).
The path those ionising photons take, however, is less straightforward. DIG spectra differ from those of the more compact H II regions, being often relatively elevated in [N II] λλ6549,6584 Å (hereafter [N II]), [S II] λλ6716,6731 Å (hereafter [S II]), [O I] λ6300 Å, and other low-ionisation emission lines. Typically, these line strengths increase with decreasing Hα surface brightness (e.g., Madsen et al. 2006; Haffner et al. 2009; Hill et al. 2014) and with height above the midplane (e.g., Rand 1998; Haffner et al. 1999; Otte et al. 2001; Miller & Veilleux 2003; Levy et al. 2019), albeit with wide variability. Photoionisation simulations demonstrate that this can be achieved via leakage of ionising photons from H II regions: because photons with energies near the ionisation potential of hydrogen (13.6 eV) are preferentially absorbed in a neutral medium, the ionising spectrum of the Lyman continuum (LyC) photons which propagate into the diffuse ISM tends to be harder than that found within H II regions (e.g., Wood & Mathis 2004), increasing the kinetic energy of electrons and thus the gas temperature. Lines such as [N II] and [S II] are predominantly collisional (Osterbrock & Ferland 2006), thus elevated [N II]/Hα and [S II]/Hα line ratios imply higher gas temperatures.
Yet [O III] λ5007 Å, which has a much higher ionising potential (∼ 35 eV), sometimes also increases with height beyond what is expected from a hardening LyC spectrum alone (e.g., Rand 1998; Collins & Rand 2001), suggesting some additional ionising component is necessary to fully explain DIG spectra. Such additional proposals range widely, from post-AGB stars in the stellar halo or thick disk (also known as hot low-mass evolved stars, or HOLMES; e.g., Wood & Mathis 2004; Flores-Fajardo et al. 2011; Rautio et al. 2022), to shock ionisation from supernova or AGN feedback (e.g., Dopita & Sutherland 1995; Simpson et al. 2007; Ho et al. 2014), to magnetic reconnection (Raymond 1992; Lazarian et al. 2020). Most likely, many such mechanisms contribute to DIG ionisation in different amounts depending on local phenomena, such as the creation of superbubbles (Madsen et al. 2006; Rautio et al. 2022), so the exact fractional contribution of each is still a matter of debate.
Even the photoionisation budget is not completely clear, however, as O and B stars outside of H II regions likely contribute to DIG ionisation to some extent (possibly nearly 40%; Hoopes & Walterbos 2000; Hoopes et al. 2001). Such field O and B stars have been identified in the MW and its satellites (e.g., Gies 1987; Oey et al. 2004; Lamb et al. 2013), and many star-forming galaxies also host substantial extended diffuse FUV components (Gil de Paz et al. 2005; Thilker et al. 2005, 2007). Some orphan O and B stars host their own spherical, ghostly H II regions (e.g., Oey et al. 2013), implying that they do ionise their local ISM and thus can contribute to DIG.
However, it remains unclear where these stars originate, and therefore how their impact might vary as a function of environment. While some may form in the field directly, most others likely formed within clusters and later drifted ("walk-away" stars; de Mink et al. 2012; Renzo et al. 2019) or were jettisoned ("run-away" stars; Blaauw 1961) to their current locations (e.g., Oey et al. 2004; de Wit et al. 2005; Lamb et al. 2010; Vargas-Salazar et al. 2020). Depending on their velocities, these stars may not stray far from their birth clusters before dying.
If H II region leakage is the primary source of DIG photoionisation, one might expect a weak correlation between Hα and FUV flux in DIG regions, but a strong correlation between the estimated incident ionising flux from a galaxy's H II regions and DIG surface brightness (e.g., Zurita et al. 2002; Seon 2009; Belfiore et al. 2022). DIG would also be predominantly found surrounding H II region complexes, as geometric dilution and neutral ISM absorption would prevent gas ionisation elsewhere (save extraplanar DIG, where the plane-parallel approximation is more appropriate and most of the ISM is ionised; e.g., Berkhuijsen et al. 2006; Flores-Fajardo et al. 2011). If, on the other hand, in-situ field O and B stars are the primary source, we would see the inverse behavior in the correlations, and the spatial distribution of DIG would depend on the origins, lifespans, and velocities of the ionising stars. Contribution from shock ionisation or AGN would likely be localized to supernova remnants and galaxy cores, respectively, but would be difficult to isolate without additional diagnostic lines, while the HOLMES contribution would be found primarily where old FUV-weak stellar populations dominate, such as the bulge or stellar halo (e.g., Lacerda et al. 2018).
The connection between diffuse Hα and diffuse FUV is also complicated somewhat by a well-known discrepancy between Hα- and FUV-derived star formation rates (SFRs) in low surface brightness (LSB) regions. Both in the FUV-emitting outer disks of massive galaxies (Goddard et al. 2010; Byun et al. 2021) and in dwarf and LSB galaxies (e.g., Lee et al. 2009; Meurer et al. 2009; Lee et al. 2016), the ratio between Hα-derived and FUV-derived SFRs is depressed, often to below 50% (but see Bell & Kennicutt 2001). Proposed mechanisms behind this observation in the low-density regime include changes in the stellar initial mass function (e.g., Meurer et al. 2009; Pflamm-Altenburg et al. 2009), higher LyC escape fraction in low-mass systems (e.g., Relaño et al. 2012), and less efficient star formation resulting in more sporadic star formation histories (SFHs; e.g., Sullivan et al. 2004; Weisz et al. 2012; Emami et al. 2019). Thus, understanding the origins of DIG is key to the proper utilisation of Hα emission as an SFR indicator on large spatial scales, and may help illuminate the fundamental physics behind star formation as a whole.
To help provide more constraints on the ionisation sources of DIG, we explore the diffuse Hα and FUV emission in the M101 Group, a local (D = 6.9 Mpc; Matheson et al. 2012) loose association of galaxies. The group is rather sparse, containing only the massive (log(M⋆) = 10.6; Muñoz-Mateos et al. 2015) face-on spiral M101 (NGC 5457), its lower mass (log(M⋆) = 9.1; Muñoz-Mateos et al. 2015) companion NGC 5474, the irregular star-forming dwarf NGC 5477, and a handful of much fainter satellite candidates, most only recently identified (Müller et al. 2017). Indeed, a recent survey using the Hubble Space Telescope found that M101's satellite population is very sparse, with roughly half the number of low-mass companions as the MW to a limit of M_V = −7.7 (Bennet et al. 2020). However, this low group mass, and consequent low velocity dispersion, should allow for more impactful tidal interactions between the group members (Negroponte & White 1983). Integrated light (Mihos et al. 2013) and resolved stellar (Mihos et al. 2018) photometry of M101's outer disk has illustrated the impact of this on the star formation history (SFH): a burst of star formation which peaked 300-400 Myr ago in M101's outer disk. Follow-up simulations demonstrate that the most massive companion, NGC 5474, is the most likely culprit (Linden & Mihos 2022).
The group's well-characterized SFH thus makes it a useful target for studying the origins of the DIG, as it allows one to marginalize SFH as a possible parameter when interpreting Hα/FUV SFR ratios. We thus explore the relationship between H II region emission and the DIG surface brightness, as well as that between FUV and Hα emission in the DIG, within the M101 Group's most massive members, using archival GaLaxy Evolution EXplorer (GALEX; Martin et al. 2005) ultraviolet imaging and our own ultra-deep Hα narrow-band imaging done with the Burrell Schmidt Telescope. We give a brief summary of our observations and archival data in Sec. 2 and Table 1. We describe our methodology behind photometry of DIG and H II regions in Sec. 3, focusing on measurement and corrections to systematics such as extinction. We present scaling relations derived from our systematics-corrected Hα and FUV measurements in Sec. 4. We discuss these results in Sec. 5, and finally provide a full summary in Sec. 6.
OBSERVATIONS SUMMARY
We use imaging data from two different observatories for our study. First, we use broadband and narrow-band images of M101 and NGC 5474 taken with the Burrell Schmidt Telescope (BST) at Kitt Peak National Observatory, a 0.6/0.9 m telescope optimized for LSB imaging. Broadband observations were taken in April of 2009 and April of 2010, in a modified Johnson B-band filter (∼ 200 Å bluer than standard) and in Washington M, respectively (Mihos et al. 2013). This broadband imaging is calibrated directly to Johnson B and V magnitudes using stars in the field surrounding M101; the details of the photometric calibration can be found in Mihos et al. (2013). Narrow-band observations were taken in April through June of 2014 (Hα; Watkins et al. 2017) and March through May of 2018 (Hβ; Garner et al. 2021), using custom ∼ 100 Å-wide filters for both on- and off-band observations. In addition, we used archival GALEX far-ultraviolet (FUV) and near-ultraviolet (NUV) imaging. For M101, we used the deep imaging from the guest investigator program published in Leroy et al. (2008), and for NGC 5474, we used the imaging from the Nearby Galaxies Atlas (NGA; Gil de Paz et al. 2007) program.
We summarize the observations in Table 1, including survey, photometric band, resolution of the image coadds, total integration times on-target, and pixel-to-pixel root-mean-square (RMS) uncertainty in the background (in physical flux units). Details of the observation and data reduction strategies used for each set of observations can be found in the associated references (column 9). The BST and GALEX pixel scales are 1.45 arcsec px⁻¹ and 1.5 arcsec px⁻¹, respectively.
The RMS uncertainty for the Hα difference image is 7.97 × 10⁻¹⁸ erg s⁻¹ cm⁻², slightly lower than in either narrow-band image individually. This is because much of the background uncertainty in low-resolution imaging comes from unresolved sources, many of which subtract out when creating the difference image.
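The point that shared, unresolved-source structure subtracts out of the on-off difference can be illustrated with a toy numerical simulation; the noise amplitudes below are purely schematic and are not those of the real coadds.

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 1_000_000

# Shared background structure (unresolved sources) plus independent per-band noise
common = rng.normal(0.0, 1.0, npix)
on_band = common + rng.normal(0.0, 0.7, npix)
off_band = common + rng.normal(0.0, 0.7, npix)

print(np.std(on_band), np.std(off_band))   # ~1.22 each
print(np.std(on_band - off_band))          # ~0.99: below either individual band
```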
METHODS
We present here our procedure for identifying, measuring the fluxes of, and applying photometric corrections to both H II regions and DIG. We required photometry of both the diffuse emission and the more compact H II regions, first to separate the two components, and second to assess the impacts of both H II region leakage and field O and B stars on DIG properties.
Our initial catalogue of H II region candidate sources is also likely a mix of true H II regions and interlopers. Thus, we first apply physical corrections to all H II region candidate source photometry, then make interloper cuts based on these physically-corrected parameters.
Detection
We identified both candidate H II regions and DIG regions with the software Sourcerer (formerly MTObjects; Teeninga et al. 2013, 2016; Haigh et al. 2021). Briefly, Sourcerer performs object detection and segmentation using a max-tree algorithm (Salembier et al. 1998), which identifies local maxima in an image's flux distribution as leaves of a tree, and nodes as increasingly large connected regions of the image (with the root represented by the entire image). Only those local maxima and connected nodes determined to be statistically significant against the local background are designated as detections, assuming the background has normally distributed noise. Being based on local flux hierarchy, the segmentation Sourcerer performs makes it ideal for identifying embedded point sources, while being simultaneously sensitive to extremely LSB contiguous emission. We ran the software on images cropped around each galaxy separately to ensure the background noise characteristics were not influenced by the variation in exposure counts in different parts of the coadd.
Without access to spectra of most of M101, particularly in its faint outskirts, disentangling DIG and H II region flux can be an ambiguous task. To simplify the process, in both galaxies we chose to label the point-like sources visible in our BST Hα difference image as H II region candidates and all other significant Hα emission as DIG. This is, to some extent, justified by our images' low resolution. At M101's 6.9 Mpc distance, one arcsecond corresponds to ∼ 33 pc, while the distribution of extragalactic H II region radii published by Congiu et al. (2023) shows a peak at roughly ∼ 90 pc. Our Hα narrow-band imaging has a FWHM ∼ 2″ (Table 1), or 67 pc. Therefore, typical H II regions are barely resolved in our BST imaging, and are unresolved in FUV imaging given that instrument's much larger PSF (FWHM ∼ 4″).
Even so, our decision to isolate point-like sources defines H II regions as only the most compact and brightest parts of the ionised clouds, likely centred on young star clusters. We recognize this definition differs from that used in many other extragalactic studies (e.g., Thilker et al. 2000; Erroz-Ferrer et al. 2019; Garner et al. 2021). We consider its impact throughout our analysis.
While the software does segment images using a top-down approach, segmented regions containing point-like sources still typically contain flux from neighbouring diffuse pixels, and so are typically asymmetric. To identify the centres of the point-like sources in each segmented region, we used either the coordinate of each region's brightest pixel, for regions with average surface brightness three times the RMS of the background local to each galaxy, or the flux-weighted centroid of the whole region, for sources fainter than this limit, to avoid noise peaks influencing the choice of central coordinate in LSB regions. We show the coordinates of the point-like sources this method identified in the central regions of M101 in Fig. 1, overlaid on our Hα difference image.
To separate DIG from H II regions, we applied adaptive masks to each point-like source within the Sourcerer segmentation map. For each source, we scaled the FUV PSF curve of growth to the source's total FUV flux to determine the radius at which the PSF surface brightness dropped a factor of five below our measured FUV limiting surface brightness (for M101 and NGC 5474: ∼ 8.5 × 10⁻²⁰ and 1.2 × 10⁻¹⁹ erg s⁻¹ cm⁻² Å⁻¹, respectively, 1σ on 100″ × 100″ scales). We limited the mask radii to 2 px ≤ r ≤ 10 px to avoid over- or under-masking; 10 px contains > 95% of the total FUV PSF flux, hence even for extremely bright sources this aperture limit implies only a maximum ∼ 5% flux contamination for DIG pixels directly adjacent.
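The adaptive-mask logic described above can be sketched as follows; the curve-of-growth sampling, the factor-of-five threshold, and the 2-10 px clipping follow the text, while the function name and the annular surface-brightness approximation are our own illustrative choices.

```python
import numpy as np

def mask_radius(source_flux, r_px, enclosed_frac, sb_limit, r_min=2.0, r_max=10.0):
    """Adaptive mask radius: where a PSF scaled to `source_flux` drops a factor
    of five below the limiting surface brightness `sb_limit` (flux per px^2).

    r_px, enclosed_frac : PSF curve of growth sampled at radii r_px (pixels);
    r_px is assumed strictly increasing and to start above zero.  The scaled
    PSF surface-brightness profile is approximated from annular differences
    of the curve of growth (a sketch, not the exact pipeline implementation).
    """
    annulus_flux = source_flux * np.diff(enclosed_frac, prepend=0.0)
    annulus_area = np.pi * np.diff(r_px ** 2, prepend=0.0)
    sb = annulus_flux / annulus_area
    faint = np.nonzero(sb < sb_limit / 5.0)[0]
    r = r_px[faint[0]] if faint.size else r_max
    return float(np.clip(r, r_min, r_max))
```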
We initially assigned every unmasked pixel detected by Sourcerer as DIG. However, even in the native resolution difference image, Sourcerer often identified correlated noise in the background as significant detections, leading to prominent background contamination in our DIG map. Given our small galaxy sample, we opted simply to erase by hand all Sourcerer detections at large radius with small size. Comparing scaling relations measured using the initial and cleaned DIG maps shows that our by-hand cleaning removed only long, LSB tails mostly below each band's noise limit. We show cleaned DIG maps for both galaxies in Fig. 2. Any pixel in these maps with a non-zero value we consider a DIG detection.
Defined in this way, 90% of our DIG pixels have surface brightnesses log(Σ_Hα) < 38.3 erg s⁻¹ kpc⁻². This is lower than the DIG-H II region separation threshold proposed by Zhang et al. (2017) (log(Σ_Hα) < 39 erg s⁻¹ kpc⁻²), though many of our DIG pixels have surface brightnesses exceeding their threshold. Our choice to define embedded point sources as H II regions is more comparable to the method employed by Thilker et al. (2000), who define DIG-H II region boundaries using the local gradient of the Hα surface brightness, albeit ours is more stringent.
Photometry
To estimate the impact of H II region flux leakage on surrounding DIG, we required H II region flux estimates free of DIG contamination, and of contamination from neighbouring point-like sources. We thus measured fluxes of each point-like source by masking all neighbouring sources, performing 2 px radius aperture photometry of each target (to avoid source crowding; see Fig. 1), then applying background and aperture corrections to estimate total H II region fluxes. We find that, with aperture corrections applied, 3 px aperture fluxes are consistent with 2 px aperture fluxes to within ±0.1 dex in all photometric bands; hence, our choice of aperture has no impact on the correlations we examine throughout this paper.
We measured local backgrounds as the sigma-clipped median fluxes of unmasked pixels within ring apertures centred on each source, with inner radii of 10 px and widths of 2 px (chosen to lie beyond the PSF 95% flux radius in all photometric bands). Subtracting these local backgrounds corrects both for DIG contamination and for line absorption from any underlying stellar population (see Garner et al. 2022). We used the aperture corrections published by Morrissey et al. (2007) for the GALEX bands, and for the BST, we derived our own from our coadds by stacking and normalizing point sources with signal-to-noise ratios > 100 external to all resolved galaxies in the field. We applied aperture corrections only to background-corrected fluxes to derive total fluxes for each source.
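A minimal, numpy-only sketch of the photometry just described (2 px aperture, sigma-clipped median background in a 2 px-wide ring starting at 10 px, then an aperture correction); pixel-centre membership is used instead of sub-pixel weighting, and all names are illustrative rather than the actual pipeline code.

```python
import numpy as np
from astropy.stats import sigma_clipped_stats

def local_background_flux(image, bad_mask, x, y, r_ap=2.0, r_in=10.0, width=2.0, ap_corr=1.0):
    """Sum pixels within r_ap of (x, y), subtract a sigma-clipped median
    background measured in the ring [r_in, r_in + width), ignoring pixels
    flagged in `bad_mask` (neighbouring sources), then apply an aperture
    correction."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x, yy - y)

    in_ap = (r <= r_ap) & ~bad_mask
    in_ring = (r >= r_in) & (r < r_in + width) & ~bad_mask

    _, bkg_med, _ = sigma_clipped_stats(image[in_ring])     # background per pixel
    flux = image[in_ap].sum() - bkg_med * in_ap.sum()       # background-corrected sum
    return ap_corr * flux
```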
DIG photometry required measurements of both the diffuse Hα and diffuse FUV emission, to compare the DIG and field O and B star populations. This comparison required the Hα and FUV images to have the same pixel scale and resolution. Thus, to ensure our DIG segmentation maps matched between photometric bands, we first reprojected our Hα on- and off-band images to the GALEX pixel scale (a tiny change, from 1.45″ px⁻¹ to 1.5″ px⁻¹) using the Astropy-affiliated package reproject (v0.8; specifically, reproject_adaptive, which we found best preserved both flux and surface brightness per pixel; Astropy Collaboration et al. 2022). We then convolved each narrow-band image with a normalised Gaussian kernel with σ² = σ²_FUV − σ²_Hα, where σ_Hα refers to the standard deviation of our Hα on- and off-band averaged coadd PSFs. Differencing these convolved images resulted in reprojected difference images for each galaxy.
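The PSF-matching step can be sketched with Astropy's convolution tools, assuming both PSFs are well approximated by Gaussians; the function below operates on an already-reprojected image and is an illustrative sketch rather than the exact procedure used here.

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve_fft

def match_to_fuv_psf(halpha_img, fwhm_halpha_px, fwhm_fuv_px):
    """Convolve an Halpha image with a Gaussian whose variance is the
    difference of the FUV and Halpha PSF variances,
    sigma_kernel^2 = sigma_FUV^2 - sigma_Halpha^2.
    Requires fwhm_fuv_px > fwhm_halpha_px."""
    to_sigma = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    sigma_kernel = np.sqrt((fwhm_fuv_px * to_sigma) ** 2 -
                           (fwhm_halpha_px * to_sigma) ** 2)
    kernel = Gaussian2DKernel(x_stddev=sigma_kernel)
    return convolve_fft(halpha_img, kernel, normalize_kernel=True)
```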
We generated final-generation DIG segmentation maps (Fig. 2) from these reprojected difference images (converted roughly to analog-to-digital units using the on- and off-band photometric zeropoints). We measured DIG fluxes pixel-to-pixel using these maps.
Photometric corrections
Converting from raw to intrinsic fluxes required correcting for both extinction and line contamination ([N II] and [S II]) in our narrow-band filters. We describe how we estimated these corrections for both H II regions and DIG in this section.
H II region corrections
We first corrected all measured H II region fluxes in all photometric bands for foreground MW extinction. We derived extinction values in all of our photometric bands using the Astropy-affiliated code dust_extinction (v1.2; Astropy Collaboration et al. 2022), selecting values of A_λ/A_V from the average MW extinction curve of Gordon et al. (2009) and assuming E(B−V) = 0.023 (Schlafly & Finkbeiner 2011).
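As a sketch of this foreground correction, the dust_extinction package exposes the Gordon et al. (2009) average Milky Way curve as GCC09_MWAvg in the versions we are aware of; converting the quoted E(B−V) to A_V with R_V ≈ 3.1 is our own illustrative assumption, not necessarily the exact choice made in this work.

```python
import astropy.units as u
from dust_extinction.averages import GCC09_MWAvg

def mw_extinction_correction(flux, wavelength, ebv=0.023, rv=3.1):
    """Scale a measured flux by the foreground MW extinction at `wavelength`,
    using the Gordon et al. (2009) average curve.  The R_V ~ 3.1 conversion
    from E(B-V) to A_V is an assumption for this sketch."""
    curve = GCC09_MWAvg()
    alam_av = curve(wavelength)          # A(lambda) / A(V) from the average curve
    a_lam = alam_av * rv * ebv           # A(lambda) in magnitudes
    return flux * 10.0 ** (0.4 * a_lam)

# e.g., correcting an Halpha flux (illustrative value and units)
corrected = mw_extinction_correction(1.0e-16, 6563 * u.AA)
```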
To derive extinction internal to H II regions in both galaxies, we used the Balmer decrement measured from our Hα and Hβ narrow-band images, assuming a theoretical value of Hα/Hβ = 2.86 (Congiu et al. (2023) recommend a theoretical value of Hα/Hβ = 3.03 for DIG-corrected H II region Balmer decrements in star-forming galaxies, but the intrinsic dispersion among measured Balmer decrements in our regions is high enough that the use of this value produced no noticeable change to the extinction gradients we derived in either galaxy). We first corrected all Hβ fluxes for internal stellar absorption following Garner et al. (2022), by subtracting from each value 5 Å EW of absorption. This is the average absorption EW for H II regions based on observed and model literature values (González Delgado et al. 1999; Gavazzi et al. 2004; Moustakas & Kennicutt 2006). Absorption in Hα within H II regions is typically negligible, and our local background subtraction removed any absorption from underlying older stellar populations within the galaxies. Fig. 3 shows the resulting A_FUV gradient for H II regions in both galaxies. The region-to-region dispersion in measured Balmer decrements was quite high (> 0.6 mag at any given radius within M101). Some of this likely arises from intrinsic variability in dust content and geometry among H II regions; our lack of image resolution thus makes extinction measurements of any individual H II region highly uncertain, even for bright regions with low photometric uncertainty. Hence, for simplicity, we chose to apply global corrections as a function of radius, using linear fits between A_V and radius for both M101 and NGC 5474, ignoring all regions with Balmer decrements outside the range 1 < Hα/Hβ < 8 (see Garner et al. 2021). In M101, we found that extinction values flattened to A_V ∼ 0.4 ± 0.9 (roughly A_FUV ∼ 1; Fig. 3) beyond ∼ 300″ (∼ 10.8 kpc), hence we applied a constant extinction correction to all regions found beyond that radius by extrapolating from the last best-fit point interior to it. NGC 5474 shows no clear radial gradient. Red dashed lines in Fig. 3 show our best-fit extinction curves for both galaxies.
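The Balmer-decrement step can be sketched as below; the extinction-curve coefficients k(Hα) ≈ 2.53 and k(Hβ) ≈ 3.61 are illustrative values for a standard R_V = 3.1 Milky Way-like curve and are not necessarily those adopted in this work.

```python
import numpy as np

def balmer_extinction(f_halpha, f_hbeta, intrinsic_ratio=2.86,
                      k_halpha=2.53, k_hbeta=3.61):
    """Colour excess from the observed Halpha/Hbeta ratio relative to the
    assumed intrinsic value, and the corresponding Halpha extinction.
    The k coefficients are assumptions for this sketch."""
    ratio = np.asarray(f_halpha, dtype=float) / np.asarray(f_hbeta, dtype=float)
    ebv = 2.5 / (k_hbeta - k_halpha) * np.log10(ratio / intrinsic_ratio)
    ebv = np.clip(ebv, 0.0, None)          # negative decrements -> zero extinction
    a_halpha = k_halpha * ebv
    return ebv, a_halpha

# Example: an observed decrement of 3.5 gives E(B-V) ~ 0.2 and A_Halpha ~ 0.5 mag
print(balmer_extinction(3.5, 1.0))
```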
Our filter placement for the Hα narrow-band imaging also included stellar continuum, Hα, and [N II] emission in the on-band, and stellar continuum and [S II] emission in the off-band. We thus needed to estimate and correct for both of these emission lines in all derived Hα fluxes.
We made these corrections for H II regions in the manner described by Garner et al. (2022).
DIG corrections
DIG represents a different physical system than H II regions. Hence, while we applied the same kinds of corrections to DIG, the forms of some of those corrections necessarily differed. The exception is the MW extinction correction, which we applied in the same manner as for the H II regions.
Internal extinction corrections here were less straightforward. In DIG-dominated regions, we found significant stellar absorption in Hβ (and mild absorption in Hα) in both target galaxies, which is difficult to correct for without spectra. Thus, we used publicly available data from the MUSE Atlas of Disks (MAD; Erroz-Ferrer et al. 2019), in combination with the H II region metallicities provided by Croxall et al. (2016), to derive an empirical extinction correction for the DIG based on our measured H II region extinction gradients. The MAD data include, for 38 disk galaxies, line fluxes (including [N II], [S II], and Hα), measures of gas-phase metallicity, extinction derived from Balmer decrements, and maps separating DIG spaxels from H II region spaxels using the method developed by Blanc et al. (2009).
In all MAD galaxies, the median extinction among both H II regions and DIG is roughly constant with radius (when normalized by the effective radius, though the scatter increases sharply toward the galaxy centres). This results from the complex interplay between extinction, stellar mass surface density, and metallicity (Erroz-Ferrer et al. 2019). The DIG extinction shows similar trends to the H II regions, but on average is lower by ∼ 0.1 mag. Therefore, we assume the same applies for M101 and NGC 5474, and we use the same radial extinction corrections derived from the H II region Balmer decrements to derive internal extinction in the DIG, but offset by −0.1 mag.
DIG and diffuse FUV emission also arise from sources close to the disk plane and so are attenuated primarily by line-of-sight extraplanar dust. Thus, for DIG and diffuse FUV, we use the stellar attenuation curve for low-inclination star-forming galaxies provided by Battisti et al. (2017; their Eq. 11, with the 0.77 < b/a < 1 coefficients from their Table 1, assuming R_V = 3.64, their extrapolated value from Table 2) to derive A_λ, rather than that of Gordon et al. (2009). Some stellar Hα absorption was visible in our Hα difference image, albeit mild. Making use of Sourcerer's inability to detect negative-valued pixels, we estimated this by masking all Sourcerer detections in both galaxies, then using radial flux profiles of the unmasked flux to estimate corrections as a function of radius. In M101, the correction in the inner 4 kpc was ∼ 4 × 10⁻¹⁸ erg s⁻¹ cm⁻² arcsec⁻² (∼ 13% of the median DIG Hα surface brightness in the same region), decreasing linearly to zero by ∼ 540″ radius (∼ 18 kpc). In NGC 5474, the correction in the inner 0.5 kpc was ∼ 1.6 × 10⁻¹⁷ erg s⁻¹ cm⁻² arcsec⁻² (∼ 40% of the median DIG Hα surface brightness in that region), decreasing exponentially to zero by ∼ 110″ radius (∼ 3.6 kpc). Hence, corrections in both galaxies were small. However, these are lower limits on the true absorption, as some DIG emission may be present in the regions with negative measured flux.
DIG typically shows enhanced [N II]/Hα and [S II]/Hα compared to H II regions, and hence requires different correction factors for these emission lines. We derived these corrections for DIG by investigating the behavior of these lines within the MAD sample DIG spaxels. Fig. 4 shows how the [N II]/[S II] ratio and the [S II]/Hα ratio vary with gas-phase metallicity (using the [O III] and [N II] empirical calibration, O3N2, from Marino et al. 2013) in the MAD galaxies; the red dashed lines show the running mean relations, while the red dotted lines show the running standard deviations about those means. The cleanest correlation lies between the [N II]/Hα ratio and gas-phase metallicity, as nitrogen is produced alongside oxygen in the CNO cycle. The behavior of [S II] is more complex. Sulfur is produced via α-capture, hence is created mostly in massive stars (> 25 M⊙; Weaver & Woosley 1978; French 1980), giving its abundance a weaker correlation with metallicity. The [S II] emission luminosity specifically is a function of sulfur abundance and ionising radiation hardness (primarily through the latter's connection to gas temperature, as [S II] is predominantly collisionally excited; Osterbrock & Ferland 2006). The ionisation potential of S II is, however, very close to that of the much more abundant He I (23.3 eV vs. 24.6 eV), so hard ionising radiation preferentially ionises He I, while slightly softer radiation might preferentially ionise S II into S III, decreasing [S II] emission. M101 shows a strong radial metallicity gradient (Croxall et al. 2016; Garner et al. 2022), but most H II regions within M101 have direct-method metallicities between 8.1 < 12 + log(O/H) < 8.6. Hence, we derived a radial [N II]/[S II] correction in the DIG using the direct-method radial metallicity gradient provided by Eq. 10 of Croxall et al. (2016), and we assume a constant value of [S II]/Hα = 0.47, the average value among all DIG spaxels in the MAD galaxy sample with metallicities between 8.1 < 12 + log(O/H) < 8.6. Using a variable [S II]/Hα fraction derived from the median curve in the right panel of Fig. 4 yields a negligible change in our corrected fluxes, hence we opt for the simplicity provided by the constant value.
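The arithmetic of this line-contamination correction can be illustrated with a short sketch. Assuming, as a simplification that ignores filter-throughput terms, that the difference image contains Hα + [N II] − [S II], the multiplicative correction follows directly from the two line ratios; plugging in the NGC 5474 values quoted in the next paragraph recovers the ∼1.6 factor.

import numpy as np

def halpha_correction_factor(nii_over_sii, sii_over_ha=0.47):
    """
    Factor that recovers the pure Halpha flux from the on-minus-off
    difference image, assuming the on-band admits Halpha + [N II] and the
    off-band admits [S II] (filter throughputs ignored for clarity):
        difference = Halpha * (1 + [N II]/Ha - [S II]/Ha)
    """
    nii_over_ha = nii_over_sii * sii_over_ha
    return 1.0 / (1.0 + nii_over_ha - sii_over_ha)

print(halpha_correction_factor(0.192))   # ~1.6, the NGC 5474 value quoted below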
The median correction we applied for [N II] and [S II] emission in the DIG in M101 is ∼ 1.3. In NGC 5474, we assume a constant [N II]/[S II] = 0.192, derived from M101's low-metallicity outer disk, and the same value of [S II]/Hα = 0.47, for a correction factor of ∼ 1.6.
H II region interlopers
To finalize our H II region photometry catalogue, we needed to identify and remove interloping non-H II region sources, typically foreground MW stars and background galaxies. We did this using photometric cuts, demonstrated in Fig. 5.
The left panel shows the colour–magnitude diagram (CMD) of all point-like sources we detected within and surrounding M101, colour-coded by EW(Hα) in Å. In this panel, we corrected the broad-band and Hα fluxes only for MW extinction. The CMD is primarily composed of three types of objects: real H II regions, with predominantly blue colours and high EW(Hα); interloping MW stars, with a range of colours and predominantly low EW(Hα); and interloping background galaxies, with a range of colours and a range of EWs. As noted by Watkins et al. (2017) and Garner et al. (2021), our Hα filters are placed such that absorption features in low-temperature MW stars act to depress flux in our off-band filter relative to the continuum in our on-band filter, causing such stars to appear as detections with significant EW(Hα) in our difference image. Most such stars are easily identifiable, however, by their red colours; they appear as the column of points with EW(Hα) > 10 Å and colour > 0.65. We thus removed anything with uncorrected colour > 0.65 from our H II region catalogue.
The CMD of sources beyond 2 × R25 in either galaxy lacked the distribution of points with blue colours (≲ 0.5) and high EW(Hα) (≳ 40 Å) seen in the left and central panels of Fig. 5. We thus applied a colour cut to isolate H II regions from stars, shown by the dotted cyan line in the central panel of Fig. 5, which we make after applying extinction, [N II], and [S II] corrections to all sources. To balance star removal with preservation of faint H II regions, we also excluded all such sources with corrected EW(Hα) < 6 Å. The final H II region catalogue CMD, corrected for all of the factors described in the preceding sections, is shown in the rightmost panel of Fig. 5. Our final catalogue contains 1954 H II regions in M101 and 161 H II regions in NGC 5474.
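A schematic version of these catalogue cuts, assuming a pre-computed flag for the cyan colour-EW criterion of Fig. 5 (the exact form of that line is not reproduced here), might look like the following; the function and variable names are ours.

import numpy as np

def select_hii_regions(colour_raw, ew_ha_corr, below_cyan_line):
    """
    Boolean mask of accepted H II region candidates: reject red MW stars by
    their uncorrected colour, then keep sources below the empirical colour-EW
    line, or above it if they retain corrected EW(Halpha) >= 6 A.
    """
    not_mw_star = colour_raw <= 0.65
    return not_mw_star & (below_cyan_line | (ew_ha_corr >= 6.0))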
To assess the impact of our choice to limit H II region photometry to only point-like emission, we estimated the high-luminosity power-law index of M101's H II region luminosity function. The best-fit slope above log(L_Hα/erg s⁻¹) = 37.5 is −1.98, within the ±0.2 uncertainty published by Kennicutt et al. (1989) for the same luminosity range. This suggests that isolating flux measurements to only the brightest point-like parts of H II regions is not unreasonable, and produces results consistent with other methods. The largest differences likely arise for low-flux objects, below the luminosity function knee.
RESULTS
In the event that most DIG emission arises from leaked LyC photons from H II regions, we would expect a strong correlation between DIG Hα surface brightness (hereafter Σ_Hα) and the luminosities of nearby H II regions. Likewise, if most DIG emission arises from in-situ field O and B stars, we would expect a strong correlation between DIG Σ_Hα and the cospatial FUV surface brightness (Σ_FUV). Hence, in this section, we showcase two correlations: that between DIG Σ_Hα and the incident ionising flux of each DIG pixel's nearest ten H II regions, estimated from their Hα luminosities; and that between DIG Σ_Hα and Σ_FUV of the same regions. For comparison, we also show the relationship between Hα and FUV flux among the point-like H II regions. Where appropriate, we also convert the Hα and FUV surface brightnesses and fluxes into SFRs using the same calibrations as Lee et al. (2009) and Byun et al. (2021), derived from Kennicutt (1998) and Kennicutt & Evans (2012). We concentrate on these two scenarios (leakage and field stars) because both M101 and NGC 5474 show prominent, wide-spread star formation. We thus expect the contribution from HOLMES to be relatively small, and we lack the detailed spectroscopy necessary to identify shocked gas emission.
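For orientation, the sketch below applies standard Kennicutt (1998)-style linear SFR calibrations; the exact coefficients adopted in the paper are not quoted here, so the values used are assumptions for illustration only.

# Standard Kennicutt (1998)-style calibrations (Salpeter IMF); illustrative
# values, not necessarily the exact coefficients used in this work.
SFR_PER_LHA = 7.9e-42        # M_sun/yr per (erg/s) of H-alpha
SFR_PER_LNU_FUV = 1.4e-28    # M_sun/yr per (erg/s/Hz) of FUV

def sfr_ratio(l_ha, l_nu_fuv):
    """SFR(Halpha)/SFR(FUV) for luminosities in erg/s and erg/s/Hz."""
    return (SFR_PER_LHA * l_ha) / (SFR_PER_LNU_FUV * l_nu_fuv)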
For each fit, we employed the scipy.odr (v1.10.0) implementation of orthogonal distance regression (ODR; Boggs & Donaldson 1989), weighting the fits on each axis by the combined photometric and calibration uncertainty where applicable. We adopted calibration uncertainties of 10% for both Hα and FUV (Morrissey et al. 2007; Garner et al. 2022). For Hα, this value derives from the combined uncertainty between the on- and off-band zeropoints, with a small amount added to account for uncertainty from the photometric corrections (Sec. 3.3). This simply sets a ceiling on the weights; we find that the fits are insensitive to the exact values adopted, within reason.
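For reference, a minimal sketch of such a weighted ODR fit with scipy.odr is given below; the linear model, starting guesses, and the simple quadrature combination of uncertainties are illustrative rather than the exact pipeline used.

import numpy as np
from scipy import odr

def fit_odr(x, y, sx, sy):
    """Weighted orthogonal distance regression of y = m*x + b."""
    model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
    data = odr.RealData(x, y, sx=sx, sy=sy)       # per-point uncertainties on both axes
    out = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
    (m, b), (m_err, b_err) = out.beta, out.sd_beta
    return m, b, m_err, b_err

# e.g. combine photometric and 10 per cent calibration errors in quadrature:
# sx = np.hypot(sx_phot, 0.1 * x); sy = np.hypot(sy_phot, 0.1 * y)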
We provide the best-fit slopes and intercepts for each relation we discuss in this section in Table 2, including both standard errors derived from the covariance matrix of the regression residuals and systematic uncertainties from our extinction, [S II], and [N II] contamination corrections. Where the uncertainties are negligible, as in the case of the standard errors when the number of points used in the fit is large (e.g., the Σ_Hα–Σ_FUV relations, which use tens to hundreds of thousands of points), we do not report them.
We derived the systematic uncertainties using a Monte Carlo approach, with 100 iterations per fit. First, we perturbed the best-fit parameters of the extinction, [N II], and [S II] relations used to derive the corrections (radial for the former, metallicity for the latter) using normal distributions with zero mean and standard deviations equal to the standard errors on the slopes and intercepts, respectively, of each systematic relation. Additionally, we perturbed the mean [S II]/Hα ratio we applied for that correction by the standard deviation of that ratio in each case (H II region and DIG). We then re-derived all relevant fluxes using these perturbed relations, and re-derived each correlation using the same ODR approach as before. We adopted the standard deviation of the 100 perturbed fit parameters as the systematic uncertainty in each case. We provide total uncertainties on the fit parameters in the last two columns of Table 2, which are the quadrature sum of all uncertainties. In Table 2, the fitted quantities are log10 of the parameters given in column 1, and uncertainties on the fit parameters are given in the remaining columns; the subscripts "ext" and "line" refer to systematic uncertainties induced by the extinction corrections and by the corrections for [N II] and [S II] flux contamination, respectively, while uncertainties with no subscript refer to those derived from the covariance matrix of the fit residuals. We denote negligible uncertainties using "-". The final two columns provide the quadrature sum of all uncertainties for each fit parameter.
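Schematically, this Monte Carlo estimate can be written as follows, where apply_corrections and fit stand in for the pipeline steps described above (their internals are not specified here):

import numpy as np

rng = np.random.default_rng(42)
N_ITER = 100

def systematic_uncertainty(x_raw, y_raw, corr_params, corr_errs, apply_corrections, fit):
    """
    Perturb the slope/intercept of each correction relation by its standard
    error, re-apply the corrections, re-fit, and return the scatter of the
    resulting fit parameters as the systematic uncertainty.
    """
    slopes, intercepts = [], []
    for _ in range(N_ITER):
        perturbed = corr_params + rng.normal(0.0, corr_errs)
        x, y = apply_corrections(x_raw, y_raw, perturbed)
        m, b = fit(x, y)
        slopes.append(m)
        intercepts.append(b)
    return np.std(slopes), np.std(intercepts)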
As one additional but important point, we measure the total Hα fluxes within M101 and NGC 5474 to be log(f_Hα/erg s⁻¹ cm⁻²) = −10.00 and −11.54, respectively (as this is merely a side-note, we eschew a formal estimate of the uncertainties on these values, but at minimum they are ±0.05, or ∼ 10%, from the flux calibration). These are very similar, within the uncertainties, to the values published by Kennicutt et al. (2008) of −10.23 ± 0.13 and −11.55 ± 0.05, respectively (adjusted for our adopted distance of 6.9 Mpc), in agreement with the conclusions of Lee et al. (2016). Interestingly, the largest discrepancy is with the more massive galaxy, M101, where it is not expected (Lee et al. 2009).
DIG Hα surface brightness vs. incident flux from H II regions
To consider the impact of H II region LyC leakage on DIG, Fig. 6 shows the correlation between the Σ_Hα of the DIG and the incident ionising flux from the nearest ten H II regions at each DIG pixel, for both M101 (left) and NGC 5474 (right). We show the best-fit correlation for each galaxy as solid blue lines. We chose the nearest ten regions because we found the contribution from any regions beyond this to be negligible, and changing the analysis to the nearest nine or eight regions made no substantive change to our results. We estimated the incident ionising flux by summing the geometrically diluted Hα flux of the nearest ten H II regions incident on each DIG pixel, estimated from the regions' Hα luminosities. We converted this total Hα flux to an ionising radiation flux using the relation L(Hα) = 1.37 × 10⁻¹² Q(H) (Osterbrock & Ferland 2006), where Q(H) is the ionising photon rate in units of photons s⁻¹. We denote this incident ionising flux as Φ_10, which has units of photons s⁻¹ cm⁻². We do not correct the H II region luminosities for internal extinction in this step, as we assume that the incident flux at any given DIG pixel is that which escapes from the H II region, not its geometrically diluted intrinsic brightness. As such, systematic uncertainties on these values do not include any contribution from an extinction correction, although the relations change very little if we do employ extinction-corrected H II region luminosities.
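A minimal sketch of the Φ_10 calculation for a single DIG pixel, using the L(Hα) = 1.37 × 10⁻¹² Q(H) relation quoted above and assuming purely in-plane, projected separations, is given below; the distance value and array layout are illustrative.

import numpy as np

D_M101_CM = 6.9e6 * 3.086e18   # adopted distance of 6.9 Mpc, in cm
LHA_PER_Q = 1.37e-12           # L(Halpha) = 1.37e-12 Q(H)  [erg per ionising photon]

def incident_ionising_flux(pixel_xy_pc, region_xy_pc, region_f_ha, n_nearest=10):
    """
    Phi_10 (photons s^-1 cm^-2) at one DIG pixel: sum the geometrically
    diluted ionising output of the nearest n_nearest H II regions, with
    projected separations supplied in parsecs and observed (uncorrected)
    H-alpha fluxes in erg/s/cm^2.
    """
    l_ha = region_f_ha * 4.0 * np.pi * D_M101_CM**2   # H-alpha luminosities
    q = l_ha / LHA_PER_Q                               # ionising photon rates
    sep_cm = np.hypot(*(region_xy_pc - pixel_xy_pc).T) * 3.086e18
    nearest = np.argsort(sep_cm)[:n_nearest]
    return np.sum(q[nearest] / (4.0 * np.pi * sep_cm[nearest] ** 2))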
Both galaxies show a strong positive correlation between incident ionising flux and DIG Σ_Hα, with slopes between ∼ 0.7 and 1 (Table 2), implying that leakage from H II regions is an important contributor to the DIG in both galaxies. We are, of course, making some physical assumptions by estimating the incident ionising flux in this manner. If LyC photons leak from the H II regions, the value of Q(H) we estimate for each H II region from Hα corresponds to the total value of Q(H) minus the fraction which escapes into the ISM. Thus, by assuming that the value of Q(H) we estimate for each H II region is the same as that leaking out to ionise the DIG, we implicitly assume f_esc = 0.5. We explore this assumption's validity in Sec. 5.2, in which we investigate how well our measured relation agrees with estimates from models, and what this implies about the fraction of DIG ionisation contributed by this leakage compared to the other potential source we investigate, field O and B stars.
Hα vs. FUV
A strong correlation between diffuse Hα and FUV emission in the DIG regions may suggest that field O and B stars contribute heavily to the DIG emission, while a break in such a relation would suggest a transition from one DIG regime (perhaps dominated by H II region leakage) to another. To assess this potential contribution in the M101 Group, we show the correlations between Hα and FUV surface brightness and flux for the DIG and H II regions in Fig. 7 and Fig. 8. As before, we show the best-fit correlations as solid blue lines. Dotted black lines show the relation where the SFR ratio is unity. Dotted red lines in Fig. 7 designate the RMS in the background, which serves as the limiting surface brightness in each band for the pixel-to-pixel photometry.
In the DIG (Fig. 7), we see a correlation with an unbroken slope close to one (∼ 1.3 ± 0.2), with the SFR_Hα/SFR_FUV ratio declining as a function of surface brightness in both bands. This suggests that, while there may be a connection between the diffuse FUV and Hα components in these galaxies, it is likely not as straightforward as direct ionisation by field O and B stars. We discuss such stars' possible contribution in Sec. 5.2.
In the H II regions, the SFR ratio is constant across the full range of flux values. In M101, this ratio is below unity (in linear units, SFR_Hα/SFR_FUV ∼ 0.44). In NGC 5474 it is consistent with unity, although the fit uncertainty is much higher than for M101. Regardless, this near-constant SFR ratio among H II regions provides a seeming contrast to results from past studies (e.g., Lee et al. 2009, 2016; Byun et al. 2021). We discuss the implications of this in Sec. 5.1.
Separation of H II regions and DIG
We showed in Sec. 4 that the average trend among what we defined as DIG pixels showed universally depressed SFR_Hα/SFR_FUV, declining with Σ_Hα. We also found that Hα SFRs measured from the point-like objects which we identified as H II regions are about half those predicted by the FUV fluxes at all luminosities. So while the DIG trend seemingly reflects what was discovered in past investigations of this ratio (e.g., Meurer et al. 2009; Byun et al. 2021), the H II region trend does not, even though we use the same SFR calibrations as those studies. Lee et al. (2009) found that the two SFR indicators diverge below SFR_Hα ∼ 0.003 M⊙ yr⁻¹ (log(SFR_Hα) ∼ −2.5) using integrated SFRs of dwarf galaxies. Byun et al. (2021) later corroborated this finding locally within two spiral galaxies, measuring SFRs within 6′′ circular apertures (∼ 300 pc at their targets' distances) positioned on a hexagonal grid. Goddard et al. (2010) also found that Hα truncates much more rapidly than FUV in the azimuthally averaged surface brightness profiles of many disk galaxies. Each of these studies thus used photometry of regions with larger spatial scales than our study, with apertures that likely contained both DIG and H II regions.
Part of the discrepancy between our results and these others may thus be methodological, as we measure our fluxes through separation of point-like star-forming regions and diffuse regions. We demonstrate this in Fig. 9, which shows log(SFR_Hα/SFR_FUV) as a function of log(SFR_Hα) for three different cases. In the top panel, we measured this ratio using a grid of box apertures across M101, with sizes of 18′′ × 18′′ (∼ 600 pc at M101's distance, to mimic Byun et al. 2021), summing all flux (H II region, DIG, and FUV with no Hα counterpart) within each box. The bottom two panels show this same trend for our point-like H II region and pixel-to-pixel DIG region samples, measured as described in Sec. 3. The horizontal black dashed lines show equal SFRs, while the vertical blue dashed lines show SFR_Hα ∼ 0.003 M⊙ yr⁻¹. The red vertical dashed lines are our limiting Hα surface brightness converted to a per-pixel SFR.
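The box-aperture measurement can be sketched as a simple grid sum over background-subtracted, registered images; the aperture size in pixels and the array handling below are illustrative only.

import numpy as np

def box_aperture_sums(img_ha, img_fuv, box_pix):
    """
    Sum H-alpha and FUV flux in a grid of box apertures (all flux, not only
    H-alpha-selected pixels), mimicking the ~18"x18" measurement described
    above. Images are assumed background-subtracted and registered.
    """
    ny, nx = img_ha.shape
    sums = []
    for y0 in range(0, ny - box_pix + 1, box_pix):
        for x0 in range(0, nx - box_pix + 1, box_pix):
            f_ha = img_ha[y0:y0 + box_pix, x0:x0 + box_pix].sum()
            f_fuv = img_fuv[y0:y0 + box_pix, x0:x0 + box_pix].sum()
            sums.append((f_ha, f_fuv))
    return np.array(sums)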
Using these larger box apertures, we do reproduce the trend found by Byun et al. (2021), where only regions with the highest SFRs (near SFR_Hα ∼ 0.003 M⊙ yr⁻¹) show SFR ratios approaching unity. However, the other two panels provide additional context: when summing both DIG and point-like H II region flux, the trend appears as a kind of convolution of the flat H II region trend and the declining DIG trend. Fig. 10 also demonstrates this using azimuthally averaged profiles of log(SFR_Hα/SFR_FUV), shown as a function of radius scaled by the disk scale length. Each curve shows the average flux within concentric circular annular apertures for three different cases: DIG pixels only (purple), H II regions only (green), and all flux (gold; again, including FUV flux with no Hα counterpart). While the ratio remains fairly constant in both the DIG and H II region curves (at ∼ 0.3 and ∼ 0.5 in linear units, respectively), the azimuthally averaged profile using all flux shows a strong decline in the ratio in M101 and a subtle decline in NGC 5474. Beyond a few scale lengths in either galaxy, the azimuthally averaged Hα flux has dropped to nearly zero, hence both profiles truncate.
The decline in the SFR ratio with SFR_Hα in the M101 Group thus seems to result from a transition from the regime spanned by the point-like H II regions, where the ratio is constant, to the DIG regime, where it declines. The decline in the DIG regime may result from a change in the FUV-emitting stellar populations there compared to the bright young clusters found within the H II regions. Both M101 and NGC 5474 show abundant FUV emission with no Hα counterpart, which is particularly prevalent in M101's outer disk. Indeed, we found that the fraction of pixels in each 18′′ × 18′′ box used in the top panel of Fig. 9 with significant FUV emission (above the background RMS) but no significant Hα emission (below the RMS) shows a strong negative correlation with SFR. Using an alternative DIG map based on the FUV image, we found that around 27% of this Hα-less diffuse FUV emission would have a detectable Hα counterpart in our imaging were the ratio Σ_Hα/Σ_FUV in the DIG the same as it is in the H II regions. This could occur either if DIG stars are less massive (thus producing fewer LyC photons) than those in the H II regions, or if the DIG environment has a much higher f_esc. We explore this in the following section.
DIG origins in the M101 Group
As demonstrated in Sec. 4.2, there is a tight, nearly one-to-one correlation between Hα and FUV surface brightness in the DIG. This suggests that field O and B stars may contribute a substantial fraction of the power required to ionise it. However, there is no break in the relation, as one might expect in LSB regions where ionisation from H II regions has been diluted. We thus cannot rule out that the close correlation might arise simply because the two types of emission are tracing the same underlying phenomenon: that both young massive stars and DIG tend not to stray far from H II regions, even if for very different reasons.
For example, Oey et al. (2018), using Gaia Data Release 2 (Gaia Collaboration et al. 2018), estimated a velocity dispersion of ∼ 40 km s⁻¹ in any one direction for field O and B stars (more massive than spectral type B0.5) in the Small Magellanic Cloud. If this is comparable in M101, over a 10 Myr timespan (roughly the lifespan of such stars), these stars would travel ∼ 400 pc from their natal clusters on average. For comparison, the median distance between all DIG pixels and their nearest-neighbour H II regions is 389 pc in M101 and 358 pc in NGC 5474.
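The ∼400 pc figure follows from simple kinematics; as a quick check of the arithmetic:

KM_PER_PC = 3.086e13
SIGMA_V = 40.0          # km/s, 1D velocity dispersion quoted above (Oey et al. 2018)
T_MYR = 10.0            # Myr, rough O/B-star lifetime

seconds = T_MYR * 1e6 * 3.156e7          # Myr -> s
drift_pc = SIGMA_V * seconds / KM_PER_PC
print(f"{drift_pc:.0f} pc")              # ~409 pc, consistent with the ~400 pc above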
DIG thus does not stray much farther from H II regions than typical field O and B stars would if those stars originated within the same regions as the current star formation. DIG also exists primarily in regions with high gas density, where on-going star formation is more likely. By cross-matching our DIG pixel coordinates with the H I moment-zero map of M101 from The H I Nearby Galaxy Survey (THINGS; Walter et al. 2008), we found that the H I column density in DIG pixels shows a fairly steady value of log(N_HI/cm⁻²) = 20.67 (∼ 3.4 M⊙ pc⁻²) across either galaxy, fairly typical of star-forming regions in spirals (e.g., Bigiel et al. 2008).
One way in which we can assess the likelihood that these field O and B stars are powering the DIG is by examining the diffuse FUV stellar populations through their integrated colours. Fig. 11 shows an FUV−NUV (AB magnitudes) colour map of M101, with white contours outlining the DIG and H II regions. Here, we have corrected both the FUV and NUV flux for extinction as described in Sec. 3.3.
It is clear at a glance that FUV-emitting populations located outside of either DIG or H II regions show systematically redder colours than those located within those regions, and that H II regions themselves show bluer colours than DIG regions. To be more quantitative, the average colour within the white contours is FUV−NUV = 0.31 ± 0.24, compared to FUV−NUV = 0.44 ± 0.25 outside of the contours (including all pixels, inner and outer disk alike), while the H II regions have a mean colour of 0.05 ± 0.27. Most of the scatter in these colours seems to arise from variability in extinction rather than intrinsic variability or photometric uncertainty: we reproduce the distributions of both DIG and H II region colours well using their median colours perturbed by normally distributed extinction corrections with a standard deviation of 0.6 mag, roughly the scatter about our best-fit extinction gradient in M101.
Using the population synthesis software Code Investigating GALaxy Emission (CIGALE; Burgarella et al. 2005; Noll et al. 2009; Boquien et al. 2019), we found that a colour as red as FUV−NUV = 0.3 is difficult to produce in the presence of a substantial population of young O stars. A population modelled as a recent (25 Myr ago) burst atop a constant SFR (a reasonable model of M101) always maintains colours < 0, while a single fading starburst does not reach a colour of 0.3 until ∼ 150 Myr of age, by which point its ionising flux is too low by several orders of magnitude to produce even the lowest values of Σ_Hα we measure in the DIG. Similarly, a simulation of a fading burst using the galaxy evolution software GALEV (Kotulla et al. 2009) reaches the same colour by ∼ 400 Myr of age, by which time its ionising flux is vanishingly small. This therefore suggests that, despite the spatial coincidence between DIG and diffuse FUV near H II regions, the field O and B star contribution to DIG is minimal in the M101 Group, on average. The large-scale diffuse FUV component in both galaxies could well be a remnant of the tidal interaction between M101 and NGC 5474 ∼ 300-400 Myr ago (Mihos et al. 2013, 2018; Linden & Mihos 2022), with the FUV most coincident with the DIG being remnants of dissolved clusters from earlier episodes of star formation (likely streamed there after forming within the spiral arms; e.g. Crocker et al. 2015; Garner et al. 2024). If the large-scale FUV emitted by these redder stars has a corresponding DIG equivalent, it lies at surface brightnesses below our sensitivity, and hence cannot be constrained using our data.
If field O and B stars contribute little, this diffuse gas must comprise a distinct physical environment from the point-like H II regions. We must therefore consider some alternative sources of ionisation. A study by Lacerda et al. (2018) found that in Sc galaxies like M101, the LyC contribution from older, harder ionising sources such as HOLMES should be fairly small, assuming emission with EW(Hα) < 3 Å arises primarily from such sources. The distribution of EW(Hα) in M101 and NGC 5474 agrees with this, with only ∼ 10% of the total Hα flux in the DIG arising from pixels with EW(Hα) < 3 Å in either galaxy. Using their criteria, the remainder of the DIG must be ionised by a mixture of sources, including photoionisation.
We thus turn our attention to leakage of LyC photons from H II regions. To assess the contribution from this source, we performed an array of simulations using the spectral synthesis code Cloudy (ver. 17.03; Ferland et al. 2017) in an attempt to reproduce the trend between log(Φ_10) and DIG Σ_Hα displayed in Fig. 6. We created a synthetic young star cluster as our illumination source using the code Starburst99 (ver. 7.0.0; Leitherer et al. 1999; Vázquez & Leitherer 2005; Leitherer et al. 2010, 2014), with a stellar mass of 10⁶ M⊙. With this illumination source, selecting an age of 2 Myr, we ran an array of Cloudy simulations using the Φ(H) parameter option, which allows one to specify directly the incident ionising photon flux (in photons s⁻¹ cm⁻²) on the surface of a cloud. We used a cloud with a gas density of 1 cm⁻³ as the target, and set log(Φ(H)) = 2-10 in steps of 1. We then recorded each resulting cloud's emergent Hα, excluding reflection and transmission, as we are viewing these clouds in M101 and NGC 5474 from above, while the H II region flux incident on the clouds would be predominantly in the disk plane.
In Fig. 12, we overplot these model surface brightnesses on the log(Φ_10)–log(Σ_Hα) relation from the left panel of Fig. 6, as a red curve. We found that the shape of this curve is insensitive to the ionising source because, unlike line ratios, the Hα emission measure is merely a function of the ionisation and recombination rates and hence the total incident ionising flux, not the ionising spectrum (Field 1975). The curve shape is also insensitive to the chosen gas density, save for densities much higher than typically found in the DIG (> 100 cm⁻³), and to the choice of filling factor and grain composition.
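As a rough, back-of-envelope check on the shape of that curve (not a substitute for the Cloudy run), an ionisation-bounded, dust-free slab converts the incident ionising flux into Hα at roughly the case B rate; the constants, the assumed isotropic emission, and the neglect of reflection are our simplifying assumptions.

import numpy as np

E_HA = 3.03e-12            # erg per H-alpha photon
F_HA_CASEB = 0.45          # approx. fraction of case B recombinations yielding H-alpha
SR_PER_ARCSEC2 = 2.35e-11  # steradians per square arcsecond

def slab_sigma_ha(log_phi):
    """
    H-alpha surface brightness (erg/s/cm^2/arcsec^2) of an ionisation-bounded,
    dust-free slab illuminated by an ionising photon flux Phi (photons/s/cm^2),
    assuming isotropic emission.
    """
    phi = 10.0 ** np.asarray(log_phi, dtype=float)
    intensity = F_HA_CASEB * phi * E_HA / (4.0 * np.pi)   # per steradian
    return intensity * SR_PER_ARCSEC2

print(slab_sigma_ha([4, 6, 8]))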
The match between the predicted and observed Σ_Hα is remarkable. Above the image noise threshold, the close agreement between the predicted and observed values implies that the majority of the ionising flux producing DIG in M101 and its companion arises from LyC leakage from H II regions. Excluding the hard-ionised DIG using the criteria from Lacerda et al. (2018), this contribution would be ≳ 90%.
As discussed in Sec. 4.1, we used the measured Hα luminosities of each H II region, diluted only geometrically within the disk plane, to estimate the LyC flux incident on each DIG pixel. This presumes f_esc = 50%, and the good agreement between the Cloudy models and our data provides support that this value is approximately correct. Estimates from the literature seem to concur, albeit with a wide variability. For example, a study by Teh et al. (2023) found that H II regions with log(L_Hα/erg s⁻¹) < 38.06 have f_esc ∼ 0.56 (+0.08, −0.14), with lower values for brighter populations. In the M101 Group, log(L_Hα/erg s⁻¹) = 38.06 corresponds to log(f_Hα/erg s⁻¹ cm⁻²) ∼ −13.7; only ∼ 15% of the H II regions in M101 lie above this value. In more tentative agreement, Della Bruna et al. (2021) find a value of f_esc ∼ 0.67 (+0.08, −0.12) among a small sample of mostly more luminous H II regions in NGC 7793. Pellegrini et al. (2012) likewise estimated a luminosity-weighted mean f_esc ∼ 40% in the Large and Small Magellanic Clouds.
Molecular cloud evolution models show that f_esc is a strong function of age, rising from nearly zero to nearly one within around 5 Myr (depending on the ionising cluster's luminosity, the gas-phase metallicity, the star formation efficiency, and other factors; e.g. Rahner et al. 2017; Kimm et al. 2022). If so, it may not be surprising if the H II region population in a galaxy with a steady SFR over gigayear timescales has a mean f_esc falling roughly halfway between zero and one. This does not mean that all radiation escaping from the H II regions escapes the galaxy as a whole, of course: this escape is likely not omni-directional, and whether the escaping emission ionises the galaxy's own ISM or leaves to ionise the IGM depends both on the directionality of the escape from the cloud and on the location of the cloud within the galaxy itself (e.g., Pellegrini et al. 2012; Kim et al. 2023).
Because SFR and LyC flux are both related to Hα luminosity by a scale factor, a loss of half of the LyC flux to leakage would reduce the SFR_Hα estimated from Hα by a factor of two. Assuming the values of SFR_FUV are accurate, this agrees well with the average value of SFR_Hα/SFR_FUV ∼ 0.44 we find among the H II regions. In one way, this is expected: the SFR calibration we employ was initially derived by Kennicutt (1983), and the subsequent iterations we employ (Kennicutt 1998; Kennicutt & Evans 2012) still assume a constant SFR over > 100 Myr timescales and Case B recombination (Brocklehurst 1971) without LyC leakage, which they state provides a lower limit on the true SFR. Even so, estimating the LyC leakage fraction is not a trivial exercise, so the value we propose here, while reasonable, should be considered a fairly rough estimate.
This experiment is an alternative version of that performed by Zurita et al. (2002), Seon (2009), and most recently Belfiore et al. (2022), who used the measured H II region fluxes in their galaxies to predict the DIG surface brightness distribution. Our model differs in that we did not include attenuation of the ionising radiation through the interstellar medium. However, our results mirror theirs insofar as they imply that a very large mean free path (> 1 kpc) is required to explain the DIG surface brightness via H II region leakage alone. Seon (2009) claimed that the necessary absorption coefficient was unphysically low for reasonable models of the ISM; however, Belfiore et al. (2022) explained this by suggesting it may result from DIG lying preferentially above the cold gas disk (with scale height ∼ 100 pc), in a region where most of the ISM is ionised (in accord with studies of DIG in edge-on galaxies, where DIG scale heights are of order 1-2 kpc; Collins & Rand 2001; Miller & Veilleux 2003; Levy et al. 2019; Rautio et al. 2022).
As M101 and NGC 5474 are both face-on, we cannot easily corroborate this explanation, but the good agreement between our simple Cloudy models, our data, and the results of these past studies provides further evidence that leakage of LyC photons from H II regions is sufficient to explain most of the ionising power of the DIG here. The discrepancy between integrated Hα and FUV SFRs in the LSB regime in this group may therefore reflect the combination of a longer SFR duty cycle in that regime, leading to a more noticeable mixture of old and young massive stars, and the tendency for H II regions to lose ∼ 50% of their LyC photons to leakage, on average. The former may be a direct consequence of the group's unique interaction history, so it is unclear how transferable this might be to other LSB environments.
Implications beyond the M101 Group
Having established that what we have defined, morphologically, as DIG represents a distinct physical environment compared to the point-like sources we identified as H II regions, we consider here a unifying physical model of the ionised gas in M101 and its companion. We do this by relating our observations to those of Hα emission in the MW. Madsen et al. (2006) found that classical H II regions in our Galaxy (bright emission-line regions immediately surrounding hot stars) show much more consistent temperatures and line ratios than DIG (everything else). DIG temperature, by contrast, appears to depend on its distance from such regions, or else on the specific ionisation mechanism (e.g., supernova feedback rather than photoionisation). In our scenario, where DIG seems primarily ionised by LyC leakage, the gradually degrading relationship between SFR_Hα and SFR_FUV might be illustrating similar behavior in M101 and its companion.
One obvious problem with our DIG definition, however, is its dependence on image resolution. For example, even though M101 is nearby, many of its faintest H II regions, with small diameters, could blend in with what we defined as DIG, imposing a hard, resolution-dependent size cutoff on what we define as H II regions. Also, H II regions at advanced ages tend to be more diffuse and patchy than younger regions (Hannon et al. 2019), which would also blend in with DIG in unresolved imaging, yielding an age limit on this H II region definition as well (assuming that evolved H II regions are fundamentally distinguishable from DIG, which they may not be; Rousseau-Nepton et al. 2018). More distant galaxies would suffer more from these systematic effects, and their impact on the correlations we explore here would be difficult to discern without a comparison study using higher-resolution imaging of the same regions. Even so, our analysis suffers less from such effects than those using integrated Hα and FUV fluxes from whole galaxies, or even those using integrated fluxes over significantly larger apertures than what we use here (> 100 pc).
If what we observe in the M101 Group is universal, DIG is comprised primarily of gas ionised by leakage from H II regions, and so total Hα emission should constitute an accurate estimate of the instantaneous SFR (e.g., Magaña-Serrano et al. 2020). In low-SFR environments, however, such as dwarf galaxies, LSB galaxies, and outer disks, where the classical H II region density is low (e.g., Schombert et al. 2013), the instantaneous SFR and the longer-term SFR probed by FUV emission would tend to diverge as observed, depending on the relative extent of FUV compared to Hα emission. If the IMF is invariant (as suggested, in the M101 Group, by our previous results; Watkins et al. 2017), this scenario suggests that variability in the SFR_Hα/SFR_FUV ratio in the LSB regime is purely an artifact of the longer star formation duty cycle in that regime. If the IMF is variable, however, an idea with some observational and theoretical support (e.g., Meurer et al. 2009; Pflamm-Altenburg et al. 2009; Conroy & van Dokkum 2012; Geha et al. 2013; Li et al. 2023, and many others), the variability in that ratio must arise from a complex interaction between that IMF variability and the star formation duty cycle. Extrapolating our methodology to other nearby galaxies may help to disentangle these competing scenarios.
SUMMARY
We present Hα and FUV photometry of diffuse ionised gas (DIG) and H II regions in the nearby M101 Group. We find a strong correlation between the Hα surface brightness (Σ_Hα) in the DIG and the incident ionising flux on each DIG region from its nearest ten H II regions (Φ_10), assuming a Lyman continuum escape fraction of f_esc = 0.5. This suggests that flux leakage from H II regions is an important contributor to DIG. Likewise, we find a strong correlation between Hα and FUV surface brightness in DIG regions, suggesting that field O and B stars may contribute as well.
However, the integrated FUV−NUV colours of DIG regions are quite red (∼ 0.3) compared to H II regions (∼ 0.05), implying the young stellar populations embedded within the DIG are predominantly low-mass and thus likely contribute little to DIG ionisation. By contrast, using a suite of Cloudy models, in which we ionise a slab of gas with a range of ionising photon fluxes, we reproduced the correlation between Σ_Hα and Φ_10 very well. This suggests that most of the DIG in the M101 Group can be explained by leakage of Lyman continuum photons from H II regions, with little contribution from field OB stars or other sources. The excellent match between this predicted and observed Σ_Hα–Φ_10 relation is intriguing, as it implies the value of f_esc = 0.5 we chose is correct (in tentative agreement with past studies; e.g. Della Bruna et al. 2021; Teh et al. 2023). Also, as we did not include absorption within the interstellar medium (ISM) in our models, we find good agreement with similar analyses of other galaxies, in which the estimated mean free path of ionising radiation in the ISM is very high (> 1 kpc; Seon 2009; Belfiore et al. 2022).
We compared the star formation rates (SFRs) derived from Hα and FUV in both the DIG and H II regions. In the DIG, Hα under-predicts the SFR compared to FUV everywhere, with the ratio SFR_Hα/SFR_FUV declining as a function of SFR_Hα. In the H II regions, which we define as all point-like sources identified within both galaxies from our Hα difference image, the ratio is flat at a value of 0.44 down to the faintest regions we detect (SFR_Hα ∼ 10⁻⁵ M⊙ yr⁻¹). Given this, we suspect that these point-like regions are mostly leaky, compact Strömgren spheres, while the DIG comprises a mix of faint or old H II regions and true diffuse gas.
By doing photometry within larger apertures which mix DIG, H II regions, and FUV with no Hα counterpart (both using boxes with widths of ∼ 300 pc and using azimuthal averaging), we reproduce a trend found in other galaxies, in which SFR_Hα/SFR_FUV decreases with SFR_Hα below SFR_Hα ∼ 0.003 M⊙ yr⁻¹ (Lee et al. 2009, 2016; Byun et al. 2021). Diffuse FUV without a detectable Hα counterpart is widespread throughout the M101 Group, mostly in regions with low surface brightness, which explains why including all flux in the apertures (not only Hα-selected flux) produces this trend here.
The M101 Group's star formation history is defined by a recent interaction between M101 and NGC 5474, resulting in a burst of star formation 300-400 Myr ago (Mihos et al. 2013, 2018; Linden & Mihos 2022). Thus, the declining SFR ratio with SFR_Hα we find on these large scales in this group may result simply from mixing remnants of this burst (detectable as FUV emission with no Hα counterpart) with on-going star formation (detectable as H II regions and the diffuse gas they ionise around them). Repeating this analysis, separating DIG from point-like, classical H II regions, in other galaxies should help discern whether or not this is unique to the M101 Group.
Figure 1. Demonstrating results of our point-like source identification algorithm for the central regions of M101. Black circles are centred on the sources identified by re-centring the segmentation map produced by Sourcerer (see text), overlaid on a logarithmically scaled image of the BST Hα difference image on which we performed the segmentation. Axis labels are distance from the center of M101 in arcminutes. At M101's distance, 1′ ∼ 2 kpc.
Figure 2. Pixel-to-pixel maps of diffuse ionised gas in M101 (left) and NGC 5474 (right). All non-zero-valued (black) pixels we consider DIG. White pixels here are either masked sources or background. We cleaned both maps by hand of likely spurious detections, meaning those in the outskirts of either galaxy with patchy morphology and small angular size. Black lines show 5′ scale bars. North is up and east is to the left in both panels.
Figure 3. Demonstrating the behavior of extinction internal to H II regions in both galaxies. Each panel shows the distribution of A_FUV as a function of radius, as derived from Balmer decrements. The red dashed lines show the best-fit radial relation in each galaxy. In M101, the gradient flattens beyond ∼ 11 kpc, hence we assume a constant extinction beyond that radius, extrapolated from the last best-fit point in the inner relation. NGC 5474 shows no radial extinction gradient and overall lower extinction than M101. We also use these gradients to correct the DIG flux for extinction, albeit employing a different extinction law.
Figure 4. Distribution of [N II]/[S II] and [S II]/Hα fluxes in DIG spaxels among MUSE Atlas of Disks sample galaxies (MAD; Erroz-Ferrer et al. 2019), as a function of gas-phase metallicity (using the MAD O3N2 calibration). The red dashed lines show the running mean relations, while the red dotted lines show the running standard deviation about those means. We use these relations to correct the DIG Hα flux in M101 and NGC 5474 for [N II] and [S II] contamination.
Figure 5. Demonstrating removal of interloping non-H II region sources from our raw point-like source photometry catalogues. The left panel shows the colour–magnitude diagram of all point-like sources detected in M101's vicinity, colour-coded by EW(Hα), all corrected only for MW extinction. The central panel shows photometry of the same sources, corrected for MW and internal extinction, with EW(Hα) corrected for [N II] and [S II] contamination as well. The right panel shows the sample culled of likely interloping sources. The red dotted line in the left panel shows our initial colour cut used to reject interloping MW stars (colour > 0.65). The dotted cyan line in the centre panel shows the criteria we chose to identify H II regions using the extinction- and line-contamination-corrected photometry. We accepted as H II regions all sources below the line, as well as any above the line with EW(Hα) > 6 Å not rejected as MW stars, necessary to preserve the faintest H II regions in both galaxies.
Figure 6. Hα surface brightness in the DIG as a function of the number of ionising photons incident on each DIG pixel from the nearest 10 H II regions, derived from the Hα fluxes of said regions, uncorrected for internal extinction. DIG pixels without significant Hα flux are excluded here. The colour scale in the 2D hexagon histogram denotes the density of points in each bin, with individual points shown in grey underneath. The blue lines show the best-fit relations (Table 2). The dotted red line denotes the RMS in the Hα image background, which sets our noise limit.
Figure 7. Pixel-to-pixel FUV vs. Hα surface brightness in the DIG of both galaxies. The plotting scheme is the same as in Fig. 6, but we have also included a vertical red dotted line showing the RMS in the FUV image backgrounds. The top and right axes show surface brightness converted to SFR surface density in either band (FUV at the top and Hα at the right). Dotted black lines show the 1:1 relation in SFR.
Figure 8. As Fig. 7, but for H II regions. The axis limits differ here, as the luminosities probed are much higher. RMS limits fall outside of the axis limits in this figure.
Figure 9. Hα- to FUV-derived SFR ratio as a function of Hα-derived SFRs measured in different environments. The top panel shows these values measured using a grid of 18′′ × 18′′ box apertures, summing all flux (i.e., not Hα-selected only) in each box. The central and bottom panels show these values for our point-like source H II region and pixel-to-pixel DIG samples, respectively. The black horizontal line denotes equality in both SFR indicators. The blue vertical line denotes SFR = 0.003 M⊙ yr⁻¹, the value below which Lee et al. (2009) and Byun et al. (2021) found the two SFR indicators to diverge. The red vertical line denotes our limiting Hα surface brightness, converted to SFR.
Figure 10. Azimuthally averaged radial profiles of the SFR ratio, for three different cases: DIG pixels only (purple), H II regions only (green), and all flux (gold), including FUV emission with no Hα counterpart. Gold profiles truncate where background begins to dominate in the Hα image. Radii are scaled by each galaxy's disk scale length, which is 4.42 kpc (2.2′; Mihos et al. 2013) for M101 and 1.29 kpc (0.64′, measured for this work) for NGC 5474.
Figure 11. FUV−NUV colour map of M101, in AB magnitudes. White contours outline the DIG and H II regions.
Figure 12. As the left panel of Fig. 6, excluding the best-fit relation. The red curve shows the Hα surface brightness predicted by a series of Cloudy simulations, in which the incident ionising flux on a plane-parallel cloud was set to values between 2 ≤ log(Φ(H)) ≤ 10, with Φ(H) in photons s⁻¹ cm⁻² (see text for details).
|
2024-05-01T06:44:39.654Z
|
2024-04-29T00:00:00.000
|
{
"year": 2024,
"sha1": "147d1421079447fa441e6ed6f55360cb9bed5055",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1093/mnras/stae1153",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "147d1421079447fa441e6ed6f55360cb9bed5055",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
}
|
188257450
|
pes2o/s2orc
|
v3-fos-license
|
Evaluation of Quality Attributes during Storage of Guava and Papaya Mixed Fruit Leather
Guava is a popular tropical fruit belonging to the family Myrtaceae. In India, guava is extensively produced and is the fourth most widely grown fruit crop, following mango, banana and citrus (Singh et al., 2016). In the year 2015-16, the area under guava cultivation in India was about 255 thousand ha, producing about 4048 thousand MT of fruit, according to Horticultural Statistics at a Glance 2017. Guava fruits are a good source of ascorbic acid, ranging from 70-350 mg/100 g, pectin, ranging from 0.52 to 2%, and minerals like calcium, phosphorus and iron; the fruit also contains substantial quantities of vitamin A, pantothenic acid, riboflavin, thiamin and niacin. Papaya belongs to the family Caricaceae and is one of the most appreciated tropical fruits, with great economic and nutritional importance. Papaya fruit has a sweet, exotic flavor and is rich in vitamins A and C and antioxidants. It also contains a proteolytic enzyme, papain, which helps in the digestion of protein-rich foods. The vitamin A content in papaya (2020 IU/100 g) is second only to mango (Singh, 2000), and a single medium papaya fruit provides about 224 per cent of the daily requirement for vitamin C.
The fruit is perishable in nature and suffers a great extent of postharvest losses. Processing can play an important role in minimizing the postharvest losses of fruits. Making fruit leather from fresh fruits is an effective way to preserve them (Maskan et al., 2002). Fruit leathers are often considered a health food, and health food marketing images such as "pure," "sun-dried," or "rich in vitamins" are used to describe them (Vatthanakul et al., 2010). This study was conducted to evaluate the effect of blending guava and papaya pulp with different ratios of sugar on the quality of mixed fruit leather.
Materials and Methods
Fruits of guava cv. Allahabad Safeda were collected from the orchard of JNKVV, while fruits of papaya cv. Coorg Honey Dew were collected from the fruit market, Jabalpur. Fully matured, firm, ripe and healthy fruits were picked and cut into pieces. Small pieces of guava were autoclaved at 10 psi for 5 min; the pulp was cooled to room temperature and then strained. Papaya fruits were peeled and cut into pieces after removal of seeds, autoclaved at 10 psi for 3-4 min, cooled to room temperature, and the pulp was then homogenized with a mixer. Sodium benzoate (750 ppm) was added to the pulp after dissolving it in a small quantity of warm water, and mixed thoroughly. For the preparation of mixed fruit leather, guava and papaya fruit pulp were mixed in six different ratios (P1 80:20, P2 70:30, P3 60:40, P4 50:50, P5 40:60 and P6 30:70). To the first six recipes 105 g sugar (S1) was added, to the next six 210 g (S2), and to the last six 315 g sugar (S3). Citric acid was added to each pulp-sugar mixture. Each recipe was then homogenized in a mixer for 1 minute. The fruit pulp mixture was poured into trays to a thickness of 6 mm, the trays were placed in sunlight, and the dried leathers were cut into uniform pieces and packed in polythene bags. These leathers were stored at room temperature. The sensory parameters (i.e., colour, flavor, taste, texture and overall acceptability) and qualitative characters (i.e., TSS, acidity, pH, ascorbic acid, total sugar) were recorded for fresh fruit, and for guava pulp and papaya pulp separately. Organoleptic quality parameters were determined by adopting a nine-point hedonic scale (1 = dislike extremely and 9 = like extremely) (Amerine et al., 1965). A semi-trained taste panel of 10 judges carried out the sensory evaluation. Total soluble solids in the pulp were measured with the help of a hand refractometer, and the pH of the extracted pulp was measured using an Elemer pH meter after calibration of the instrument with standard buffer solution (Jain et al.).
The titratable acidity and ascorbic acid content were determined by AOAC methods (1995). The data obtained in the study were subjected to statistical analysis (Snedecor et al., 1967). The organoleptic evaluation and testing of quality characters were carried out at 0, 20, 40, 60, 80 and 100 days of storage.
Results and Discussion
The TSS of guava pulp was recorded as 17 per cent, and in the case of papaya pulp it was 12 per cent. The values of per cent acidity for guava and papaya were 0.45% and 0.38%, respectively. The ascorbic acid content was 182 mg/100 g for guava pulp and 58 mg/100 g for papaya pulp. The pH of guava and papaya pulp was found to be 3.97 and 6.17, respectively. Total sugar was recorded as 10.50% for guava pulp and 6.5% for papaya pulp.
The overall acceptability of mixed fruit leather was computed based on the organoleptic scores for various qualities, namely colour, flavor, texture and taste. The results showed that the maximum score (8.47) for overall acceptability was found in the 80% guava + 20% papaya pulp combination (Fig. 1). It can be inferred that the blending of fruit pulp gives better compatibility for the preparation of quality leather. The combined effect of pulp ratio and sugar quantity was found to be non-significant throughout the storage period of 100 days. During storage, it was observed that the overall acceptability of mixed fruit leather decreased slightly as the days of storage increased. These results are in agreement with those found by Mansy et al. (2005) in mango-papaya nectar and Saravanan et al. (2004) in papaya jam. Baramanray et al. (2005) showed that the organoleptic rating of a freshly prepared product is highly acceptable and reduces significantly with increasing storage period.
Data regarding the TSS of mixed fruit leather during storage are presented in Table 1. The highest value (36.82%), for the recipe of 80% guava + 20% papaya, was observed at 0 days of storage. The data revealed that a higher concentration of guava pulp increased the TSS per cent of mixed fruit leather, and this effect was observed up to 100 days of storage. Further, it was seen that the per cent TSS of mixed fruit leather increased with increasing concentration of sugar, an effect that also persisted for 100 days of storage. As the period of storage increased, the TSS value of mixed fruit leather increased significantly, up to 100 days of storage. The increase in TSS during storage might be due to the conversion of polysaccharides like starch and pectin into simple sugars; a similar inference was drawn from the findings of Sharma et al. (2008) and Jakhar et al. (2012). This might also be due to conversion of some of the insoluble fraction, as a similar trend was reported by Sudha et al. (2007), or to moisture loss during storage, findings well supported by Sreemathi et al. (2008). The data pertaining to the acidity of different recipes of mixed fruit leather as affected by storage duration are given in Table 2. From Table 2 it can be concluded that the effect of guava pulp on acidity was prominent at every stage (0, 20, 40, 60, 80 and 100 days) of storage. Acidity increased as the days of storage increased, up to the 100-day storage period. With regard to the effect of sugar content, S1 (15 g sugar/100 g pulp) had the maximum per cent acidity, and with increasing concentration of sugar the per cent acidity decreased.
Further, it was observed that the acidity of the leather also decreased significantly with an increase in sugar content; a similar result was reported by Jain et al. (2007). The results suggest that the formation of organic acids by degradation of ascorbic acid accounted for the increase in acidity during storage. A slight increase in acidity during storage was also reported by Shakir et al. (2008). These findings are in conformity with the findings of Chaudhary et al. (2006), Manimegalai et al. (2001) and Byanna et al. (2012).
In conclusion, on the basis of sensory scores and important quality attributes, the treatment in which guava and papaya pulp were in a ratio of 80:20 with S2 (30 g sugar/100 g pulp) was the most efficient at retaining fruit quality attributes up to 100 days of storage at room temperature. During storage of mixed fruit leather at room temperature, a slight decrease in sensory attributes (colour, texture, flavor and taste) and overall acceptability was noticed under all the treatments studied, while in the case of qualitative characters (per cent TSS and per cent acidity) a slight increase was recorded in all treatments.
The change in the quality parameters was largely dependent on the fruit pulp to sugar ratio and the days of storage.
|
2019-06-13T13:20:32.763Z
|
2018-12-10T00:00:00.000
|
{
"year": 2018,
"sha1": "56de5d404f85799a55a2add74743249bab4e055e",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/7-12-2018/Rajani%20Singh,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7242357a476e22d15aebacd6c508e982d4f729af",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
49563151
|
pes2o/s2orc
|
v3-fos-license
|
Changes in cognitive function in patients with intractable dizziness following vestibular rehabilitation
The purpose of the present study was to investigate changes in cognitive functions, including visuospatial ability, attention, and executive function in patients with intractable dizziness following vestibular rehabilitation. The correlations between improvements in cognitive function and dizziness-related variables and emotional distress were also explored. During hospitalization for 5 days, participants were trained on a vestibular rehabilitation program. Participants completed questionnaires including the Dizziness Handicap Inventory (DHI), Hospital Anxiety and Depression Scale (HADS), and Trail Making Test (TMT), which were used to assess cognitive function. The center of gravity fluctuation measurement and timed up and go test (TUG), which were objective dizziness severity indexes, were performed before, 1 month after, and 4 months after hospitalization. Following vestibular rehabilitation, participants exhibited a significant improvement in the TMT, DHI, HADS, and TUG scores. Correlation analysis between the variables at each time point indicated that TMT scores positively correlated with TUG at baseline. The correlation between changes observed in the TUG and TMT scores was not significant. The degree of improvement of the TUG score did not bear a linear relationship with that of the TMT scores. However, these correlation results were not completely consistent with those in the multiply imputed dataset.
lifestyle counseling to exercise daily, including walking, and (3) sleeping sufficiently and reducing stress. We recruited participants for the present study from this pool of patients if they met the following criteria: (1) the patient was ≥20 years old; (2) dizziness had persisted for at least 3 months despite the conventional treatment mentioned above in the outpatient clinic; (3) the patient wished to have intensive, inpatient therapy for persistent dizziness; (4) the patient had not experienced vestibular rehabilitation before starting the intervention; and (5) the patient was literate. Our exclusion criteria were as follows: (1) a diagnosis of dizziness due to cerebrovascular disorder; (2) medical contraindications for making the necessary head movements during vestibular rehabilitation (e.g., severe cervical disorder); (3) serious comorbidity (e.g., a life-threatening condition, severe cognitive impairment, or severe psychiatric disorder); (4) central nervous system disease; or (5) bilateral vestibular deficit.
Patients underwent pure tone audiometry, vestibular investigation (including eye movements), posturography, head impulse test, video head impulse test, electronystagmography, auditory brainstem response, computed tomography, and/or magnetic resonance imaging as necessary for the diagnosis. The clinical diagnosis was defined based on the results of these examinations. Canal dysfunction was confirmed in patients with vestibular neuritis and unilateral vestibulopathy.
The present study was approved by the ethical committee of the National Tokyo Medical Center (R12-009) and has been performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments.
Measures. Trail Making Test. The Trail Making Test (TMT) is used to assess visuospatial scanning, attention, processing speed, and executive function. In the TMT-A, participants were asked to connect a series of numbers in consecutive order (1, 2, 3, etc.). The TMT-A examines visual scanning ability, attention, and processing speed. In the TMT-B, participants were required to connect a series of letters and numbers in alternating consecutive order (1, A, 2, B, 3, C, etc.). The TMT-B examines executive function, visual scanning ability, attention, and processing speed. The time in seconds to complete the task was recorded 13 . We calculated the difference by subtracting the TMT-A score from the TMT-B score (TMT-B-A) 14 . The TMT-B-A score reportedly minimizes visuo-perceptual and working memory demands, providing a relatively pure indicator of executive control abilities 15 .
Dizziness Handicap Inventory. The Dizziness Handicap Inventory 16,17 is a standard questionnaire that quantitatively evaluates the degree of handicap in the daily lives of patients with vestibular disorders; it consists of 25 questions. The total score ranges from 0 (no disability) to 100 (severe disability).
The center of gravity fluctuation measure. The center of gravity fluctuation measure for objective assessment of the severity of dizziness was performed using a stabilometer (G-5000, Anima Corp., Tokyo); it provided the total path length (LNG) and environmental area (ENV) during quiet stance with eyes open and eyes closed for 60 s (the LNG is the same as the velocity of sway path value multiplied by 60 s).
Timed up and go test. The timed up and go test (TUG test) 18 assesses functional mobility consisting of basic motor agility and dynamic balance. During the test, patients were required to stand up from a chair, walk 3 m, turn, walk back, and sit down. We used a chair without armrests since obese patients found it difficult to sit against armrests. Participants freely selected the direction of the turn by themselves. The time needed to perform this task was recorded twice. The smaller value was registered as the TUG test score.
Hospital Anxiety and Depression Scale. The Hospital Anxiety and Depression Scale (HADS) 19,20 is a self-reported questionnaire containing 14 questions scored on a 4-point scale, consisting of an anxiety subscale and depression subscale with seven items each. This psychometric instrument was chosen because all its items refer solely to an emotional state and do not consider somatic symptoms.
The intervention. Patients were hospitalized for 5 days in groups of 8-10 individuals. During this time, the groups were trained to perform the 30-min vestibular rehabilitation program by themselves 21 . The program consisted of head and eye exercises in a sitting or standing position. Exercises in the sitting position included the following seven exercises: (1) quick horizontal eye movement; (2) quick vertical eye movement; (3) eye tracking in the horizontal direction; (4) eye tracking in the vertical direction; (5) horizontal head shaking while gazing at a fixed target; (6) vertical head shaking while gazing at a fixed target; and (7) oblique head tilting while gazing at a fixed target. Each eye or head movement was repeated 20 times per session. Exercises in a standing position consisted of the following 13 exercises: (1) standing up and sitting down with eyes open, three times; (2) standing up and sitting down with eyes closed, three times; (3) standing with eyes closed and feet apart for 20 s; (4) standing with eyes closed and feet together for 20 s; (5) tandem stance with the right foot in front for 20 s; (6) tandem stance with the left foot in front for 20 s; (7) one-leg stand on the right foot for 20 s; (8) one-leg stand on the left foot for 20 s; (9) 180° turn to the left, three times; (10) 180° turn to the right, three times; (11) walking 10 m with tandem gait; (12) walking 10 m with horizontal head shakes; and (13) walking 10 m with vertical head shakes. During training, patients performed these exercises three times a day under the supervision of a physician. After 5 days, all patients had learned how to perform the exercises. The patients were then instructed to continue performing the vestibular rehabilitation program three times a day after discharge. All participants were asked to record their exercises after discharge from the hospital, and physicians verbally confirmed participant progress at every visit.
Procedure.
After the participants had provided written, informed consent, they were evaluated on the day of hospitalization (time 1), as well as at 1 month and 4 months after hospitalization (time 2 and time 3, respectively), using the above-mentioned questionnaires. The TMT, static posturography, and TUG were also conducted. The primary analysis consisted of repeated measures analysis of variance (ANOVA) to analyze the effects of time on all outcomes, and correlation analysis (Pearson's correlation coefficient) to examine the relationship between scores at each time point and changes in outcomes during rehabilitation. Additionally, correlation analysis between each score at time 1 and changes in scores from time 1 to time 3 was performed as the secondary analysis based on the results obtained. The significance level for the ANOVA was set at less than 5%. Multiple testing corrections for correlation analyses were performed using the Bonferroni test. The significance level for correlations between scores at each time point was p < 0.004 (= 0.05/12 [3 time points × (3 TMT sub-scores + age)]) after the correction. The significance level for correlations between changes in outcomes during rehabilitation was p < 0.016 (= 0.05/3 TMT sub-scores) after correction. The significance level for correlations between each score at time 1 and changes in the scores from time 1 to time 3, and for partial correlations, was p < 0.004 (= 0.05/11 variables) after the correction.
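The corrected significance levels quoted above follow directly from dividing α = 0.05 by the number of comparisons; the short sketch below simply reproduces that arithmetic (the comparison counts are those stated in the text, not additional assumptions).

```python
# Bonferroni-corrected significance levels used in the analyses above.
alpha = 0.05

p_each_timepoint = alpha / 12      # 3 time points x (3 TMT sub-scores + age) -> ~0.004
p_changes = alpha / 3              # 3 TMT sub-scores -> ~0.016
p_baseline_vs_change = alpha / 11  # 11 variables -> ~0.004

print(round(p_each_timepoint, 4), round(p_changes, 4), round(p_baseline_vs_change, 4))
```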
Additionally, we applied an intention-to-treat (ITT) analysis using a multiple imputation technique 22 to create and analyze multiply imputed datasets. Data were missing for 71 of the 131 participants. The incomplete variables were as follows: (1) Multiple imputation was estimated using Bayesian linear regression. We averaged and combined the 20 imputed datasets. We conducted primary tests, including the repeated measures ANOVA, to analyze the difference in the outcomes among all time points. We performed a correlation analysis to examine the relationship among the outcomes at each time point and among the changes in these outcomes from time 1 to time 3 using multiply imputed datasets.

Data availability. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Results
Participant characteristics. During the study period, 396 patients with dizziness were hospitalized, of which 131 patients (32 male and 99 female patients) met the inclusion criteria and agreed to participate in the present study. We further excluded those who had data missing at any time of the examination (n = 71, 22 male and 49 female patients); thus, 60 patients (10 male and 50 female patients, mean age = 55.9 ± 15.3 years) were included in the final analysis ( Fig. 1). Table 1 outlines the diagnoses of the participants, according to a medical history recorded during their initial visit.
Change of each variable by vestibular rehabilitation. Table 2 summarizes the changes in each variable following vestibular rehabilitation. Regarding the TMT scores, there was a significant main effect of time on the TMT-A, TMT-B, and TMT-B-A scores. The post-hoc test showed that the TMT-A score at time 3 was significantly lower than that at time 1 (p < 0.0001) and time 2 (p = 0.03), while the TMT-B and TMT-B-A scores at time 2 (p < 0.0001) and time 3 (p < 0.0001) were significantly lower than at time 1.
Furthermore, a significant main effect of time on the DHI score was found, and the post-hoc test revealed that the score at time 3 was significantly lower than that at time 1 (p < 0.0001) and time 2 (p = 0.04). Further, the score at time 2 was also significantly lower than that at time 1 (p < 0.0001).
Regarding the center of gravity fluctuation measure, there was a significant main effect of time in the LNG during eye-closing; the post-hoc test revealed a significantly lower score at time 3 than at time 1 (p = 0.03). No significant differences in other LNG and ENV variables were found.
A significant main effect of time on the TUG test score was also found and the post-hoc analysis revealed significantly lower scores at time 2 (p < 0.0001) and time 3 (p < 0.0001) than at time 1.
In LNG and ENV variables. Regarding the DHI, the post-hoc analysis revealed that the scores at time 2 (p < 0.0001) and time 3 (p < 0.0001) were significantly lower than those at time 1, and that the score at time 3 was significantly lower than that at time 2 (p = 0.03). Regarding the post-hoc analyses of the other variables, the scores at time 2 (TMT-A: p = 0.005, other: p < 0.0001) and time 3 (p < 0.0001) were significantly lower than those at time 1.

Correlation between each score at time 1 and the corresponding changes from time 1 to time 3. Table 3 summarizes the correlation results between each score at time 1 and the change in scores from time 1 to time 3. All scores except the TUG at time 1 significantly and positively correlated with their corresponding changes from time 1 to time 3 (see colored area in Table 5). Regarding the relationship between the change in the TMT scores and other variables at baseline (time 1), the change in the TMT-A score significantly and negatively correlated with the HADS-D score. Conversely, changes in the TMT-B and TMT-B-A scores significantly and positively correlated with the TUG scores.
Discussion
In the present study, we demonstrated that patients with intractable dizziness exhibited a significant improvement in cognitive functions including visuospatial ability, attention, and executive function as evidenced by the TMT. These changes also coincided with improvement in dizziness-related indexes and psychological distress following vestibular rehabilitation. Although the mean TMT scores at baseline were relatively higher than previously reported average scores in healthy participants aged 55 and 59 years 23 , those scores became lower than the average scores 3 months after the initiation of vestibular rehabilitation. A previous study reported that balance training improved memory and spatial cognition, but not executive functions, in healthy participants 24 . The presence of affective complications in individuals with vestibular impairments may contribute to cognitive dysfunction 11 . Thus, both improvements of dizziness and emotional distress by vestibular rehabilitation could contribute to changes in cognitive functions including executive function as evidenced by the TMT-B-A and TMT-B. The indexes of the center of gravity fluctuation measure, except for the LNG during eye-closing, were not significantly improved by vestibular rehabilitation. We previously reported that in patients with intractable dizziness, body sway during the eye-open condition 25 or both during eye-open and eye-close conditions 10 was not significantly improved by vestibular rehabilitation, which is in contrast to other findings indicating significant improvements of these indexes 26 . Interestingly, participants who exhibited significant improvements of LNG and ENV 26 had more severe indexes than participants who did not show significant improvements of these indexes, both previously 25 and in the present study. Thus, LNG and ENV may not be very sensitive to the effect of vestibular rehabilitation on dizziness. In the multiply imputed dataset (N = 131), we found almost the same results as those in the 60 participants. Further, the ANOVA in 60 participants may indicate relatively robust results.
Our results also indicated that visuospatial ability, attention (TMT-A or TMT-B), and executive function (TMT-B-A) positively correlated with functional mobility (TUG) before the initiation of vestibular rehabilitation. Based on these findings, cognitive function, including visuospatial ability, attention, and executive function, could be related to functional mobility, in the presence of prominent dizziness symptoms alone. Further, the correlation could weaken as dizziness symptoms improve. However, the results in the multiply imputed dataset (N = 131) showed significant and positive correlations between the TMT-A and B scores and the TUG score at time 3. Notably, these results were not found in the 60 participants; thus, we should re-examine the correlation results in a larger sample.
However, based on the correlation analysis between changes from baseline to 4 months after the start of vestibular rehabilitation in the TMT scores and those in other variables, the improvement of the TMT-A score was negatively correlated with the improvement of perceived dizziness handicap and emotional distress in the 60 participants. Additionally, more severe states of all variables except the TUG score at baseline were associated with greater corresponding changes between examinations from time 1 to time 3. Furthermore, more severe depression at baseline was associated with smaller changes in the TMT-A scores, and decreased perceived dizziness handicap at baseline tended to be related to smaller changes in the TMT scores (p = 0.008), while worse functional mobility assessed by the TUG test at baseline was associated with a greater change in the TMT-B and TMT-B-A scores. Thus, severe states of emotional difficulties, rather than greater improvement of those variables, could be related to weaker improvement of visuospatial ability and attention, but not of executive function. However, the significant correlation coefficients reported in the present study were not robust because the results of the correlation analysis for the changes in multiple parameters from time 1 to time 3 observed in the multiply imputed datasets (N = 131) were not consistent with those in the 60 participants. Thus, the results of the correlation analyses should be interpreted with caution. In addition, following vestibular rehabilitation, patients with intractable dizziness demonstrated a significant improvement in their cognitive functions as evidenced by the TMT scores, with coincident improvement of their functional mobility. In particular, the tendency of executive function improvement (time 1 to times 2 and 3) appeared to be comparable to that of the functional mobility during vestibular rehabilitation. However, although the correlation between improvements in functional mobility and cognitive functions was positive, this finding was not statistically significant. Thus, the magnitude of improvement in these cognitive function domains may not have corresponded with that in functional mobility.
A few limitations in the present study must be noted. First, cognitive functions were evaluated using the TMT alone. Second, we should additionally use other sensitive measures for the assessment of functional mobility, including the Dynamic Gait Index 27 and the Functional Gait Assessment 28 , in future research. Third, we did not conduct any assessment during the 5 days in hospital. Fourth, since only a small number of participants was included in the present study, we could not analyze our data with respect to sex and age. Fifth, data for at least one of the outcomes were missing for approximately half of the participants (54.2%), particularly at 4 months after the start of the investigation. Sixth, the confirmation of adherence to the home-based program after discharge was not sufficient. Although we asked all participants to record their exercises after discharge and verbally confirmed progress with them at every visit, we did not obtain these records. Finally, the present study lacked a control group.
Conclusion
Patients with intractable dizziness demonstrated a significant improvement in cognitive functions including visuospatial ability, attention, and executive function, with coincident improvement of dizziness-related indexes and psychological distress, following vestibular rehabilitation. These cognitive function domains correlated with functional mobility consisting of basic motor agility and dynamic balance before the initiation of vestibular rehabilitation. There was no linear relationship between the degree of improvement of functional mobility and the improvement in cognitive function. However, given the discrepancy between correlation results in the 60 participants and multiply imputed dataset, we should re-examine the correlation results in a larger sample.
|
2018-07-04T13:10:45.434Z
|
2018-07-03T00:00:00.000
|
{
"year": 2018,
"sha1": "47738a60373c9b7c7657c9a16c9d64951157f3ff",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-28350-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "47738a60373c9b7c7657c9a16c9d64951157f3ff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
219177103
|
pes2o/s2orc
|
v3-fos-license
|
A Statistical Approach to Signal Denoising Based on Data-driven Multiscale Representation
We develop a data-driven approach for signal denoising that utilizes the variational mode decomposition (VMD) algorithm and the Cramer-von Mises (CVM) statistic. In comparison with the classical empirical mode decomposition (EMD), VMD enjoys a superior mathematical and theoretical framework that makes it robust to noise and mode mixing. These desirable properties of VMD materialize in the segregation of a major part of the noise into a few final modes while the majority of the signal content is distributed among the earlier ones. To exploit this representation for denoising purposes, we propose to estimate the distribution of noise from the predominantly noisy modes and then use it to detect and reject noise from the remaining modes. The proposed approach first selects the predominantly noisy modes using the CVM measure of statistical distance. Next, the CVM statistic is used locally on the remaining modes to test how closely the modes fit the estimated noise distribution; the modes that yield a closer fit to the noise distribution are rejected (set to zero). Extensive experiments demonstrate the superiority of the proposed method as compared to the state of the art in signal denoising and underscore its utility in practical applications where the noise distribution is not known a priori.
I. INTRODUCTION
Signals from various practical applications are subject to unwanted noise owing to various physical limitations of acquisition systems, e.g., audio recording systems, lidar systems, EEG and ECG acquisition systems, etc. Consequently, to avoid any false decisions based on these noisy signals, it is necessary to remove the unwanted noise beforehand. For this purpose, earlier denoising approaches employed filtering either in the time domain or in the transform domain. The filtering methods in the original signal domain are referred to as time-domain filters, which are mostly based on the least mean square (LMS) principle of noise smoothing [1], [2]. On the other hand, transform-domain filters are facilitated by the separability of signal and noise in the transform domain [3], [4].
The problem of additive white Gaussian noise (wGn) removal has been optimally solved for wide-sense stationary signals, i.e., signals with perfectly known, time-invariant statistics, using the Wiener filter. However, that approach may not be adequate in practical settings due to the following reasons. Firstly, the majority of real-life signals are nonstationary in that their attributes (statistics) change with time. Secondly, the assumed wGn model may not always be used to characterize noise in time-series data, e.g., EEG/ECG signals. Consequently, more evolved techniques capable of accounting for the nonstationarity of the signal and the non-Gaussianity of the noise are required to process practical signals.
The discrete wavelet transform (DWT) is a multiscale method for processing non-stationary signals that exhibits a sparse distribution of signal singularities within its coefficients. The noise coefficients, on the other hand, have lower amplitudes and a uniform spread [4]. This allows signal and noise coefficients to be differentiated using a suitable threshold, e.g., universal threshold-based approaches [5], [6] and Stein's unbiased risk estimate (SURE)-based approaches [7], [8]. Similarly, shrinkage functions based on the probability distribution of signal and noise coefficients are also derived using Bayesian estimators, e.g., [9], [10].
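To make the threshold-based shrinkage concrete, the following is a minimal sketch of universal-threshold (VisuShrink-style) soft-thresholding, assuming the PyWavelets package; the 'db8' wavelet, the five decomposition levels, and the MAD-based noise estimate are common conventions chosen here for illustration rather than prescriptions from the references above.

```python
# Sketch: universal-threshold wavelet denoising, assuming the PyWavelets package.
import numpy as np
import pywt

def wavelet_denoise(y, wavelet="db8", levels=5):
    coeffs = pywt.wavedec(y, wavelet, level=levels)
    # Robust noise estimate from the finest-scale detail coefficients (MAD / 0.6745).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold of Donoho and Johnstone.
    lam = sigma * np.sqrt(2.0 * np.log(len(y)))
    # Soft-threshold every detail sub-band; keep the approximation untouched.
    denoised = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(y)]
```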
The above-mentioned methods require a prior information about the signal and noise (distribution) models in order to estimate the threshold or derive the shrinkage (thresholding) function to suppress the noise. A variety of noise models are available based on the experimental studies [3], however, these models do not fully account for the factors contributing to the noise during acquisition. Consequently, noise is abstractly modeled using these experimental models within the denoising methods. A more challenging task in this regard involves the specification of a generalized signal model owing to the arbitrary nature of information generally found within the times series data. Secondly, specification of prior models restricts the efficacy of these methods in real world signals.
This issue has been partially addressed in the framework proposed in [11], which combines the DWT with a goodness-of-fit (GoF) test. Hereafter, this approach is called the DWT-GoF method. It is worth mentioning that the DWT-GoF method requires only a prior noise model. Here, noise is expediently modeled as zero-mean additive wGn, which is conventionally used to model the random noise in data-acquisition and communication systems, for example. The detection of wGn at multiple wavelet scales is facilitated by the fact that the DWT preserves the Gaussianity of noise. This essentially requires the detection and rejection of wavelet coefficients fitting the Gaussian distribution for denoising. Hence, the DWT-GoF method [11] rejects noise from DWT scales by estimating the GoF of the Gaussian distribution on the multiscale coefficients.
An improved version of the DWT-GoF method has been proposed in [12], [13], which employs the GoF test along with the dual-tree complex wavelet transform (DTCWT) and is called the DT-GOF-NeighFilt method in the sequel. The key feature of the DT-GOF-NeighFilt method is to incorporate a novel neighborhood filtering technique to minimize the loss of signal details while rejecting the noise. Apart from the GoF test, other hypothesis-testing tools such as the false discovery rate (FDR) and the Bayesian local false discovery rate (BLFDR) are also used in combination with wavelet transforms for signal denoising [14], [15].
Another avenue for multiscale denoising involves data-driven decomposition techniques. For instance, empirical mode decomposition (EMD) [16] employs a data-driven approach to extract principal oscillatory modes from a signal. Within EMD, local extrema (maxima/minima) of a signal are interpolated to obtain its upper and lower envelopes, and their mean is subtracted from the original signal. This process, called sifting, continues recursively until zero-mean oscillatory components, namely intrinsic mode functions (IMFs), are obtained. Owing to this ability to expand a signal into its IMFs, EMD is considered well suited for processing the nonstationary signals generally encountered in practice. Keeping in view its efficacy for 1D signals, several variants of EMD have also emerged for multichannel signals, e.g., multivariate EMD (MEMD) [17], dynamically sampled MEMD [18], etc.
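As a brief illustration of this sifting-based expansion, the sketch below decomposes a toy two-tone signal into IMFs, assuming the third-party PyEMD package (installed as EMD-signal); the test signal and all parameter choices are illustrative only.

```python
# Sketch: decompose a toy signal into IMFs via EMD, assuming the third-party PyEMD package.
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

imfs = EMD().emd(signal)   # each row is one extracted IMF
print(imfs.shape)          # e.g., (number_of_imfs, 1000)
```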
When employed for denoising, EMD aims at detecting the IMFs representing the (oscillatory) signal parts and rejecting the IMFs corresponding to the non-oscillatory noise. A wavelet-inspired interval-thresholding function is used for detecting the oscillatory signal parts from the noisy IMFs [19]. Specifically, the EMD-based interval thresholding (EMD-IT) [19] aims to detect the oscillations separated by two consecutive zero crossings. This is achieved by comparing the extrema of an interval against a threshold value leading to either retention or rejection of the whole interval. The interval thresholding has since been used within a variety of denoising methods and has seen several variants including interval thresholding based on histogram partition [20], MEMD-based interval thresholding [21] and a purely multivariate interval thresholding [22].
Instead of performing thresholding, the work in [23], [24] employed statistical tools to detect the relevant (signal) modes for a partial reconstruction of the denoised signal. However, these denoising approaches may result in suboptimal performance due to the mode mixing (i.e., manifestation of multiple IMFs within a single IMF) property of EMD and its sensitivity to noise and sampling. Essentially, the aforementioned shortcomings within EMD framework result in leakage of noise into a few signal modes which leads to their rejection resulting in suboptimal denoising. The lack of mathematical foundation of the EMD limits the chances of rectification of these issues within its framework. The issue of noise presence within the selected relevant IMFs was better handled by partial reconstruction of the thresholded relevant modes [25].
The recently proposed variational mode decomposition (VMD) is based on optimization of a variational problem to obtain an ensemble of a fixed number of band limited IMFs (BLIMFs) [26]. Owing to its sound mathematical foundation, VMD successfully avoids mode mixing and is robust to noise and sampling unlike EMD [26], [27]. From the view point of denoising, a very important feature of VMD is its ability to segregate the desired signal into a few initial BLIMFs while noise is mostly stashed into a few final BLIMFs. Hence, by rejecting the modes with noise, a good estimate of the true signal may be obtained by partial reconstruction.
A literature review shows that the existing VMD-based denoising approaches select relevant signal modes by comparing the probability density function (PDF) of an individual BLIMF against the PDF of the noisy signal. This is well founded because a distribution function is generally reflective of the signal present within the noisy data. An estimate of the signal present in a BLIMF may be obtained by measuring the closeness of its PDF to that of the noisy signal, for example, by employing the Euclidean distance [28], the Bhattacharyya distance [29], etc. Therein, the modes statistically close to the noisy signal are retained as relevant signal modes while largely dissimilar modes are rejected as noise. For a detailed study on the efficacy of various statistical distances for estimating relevant modes, the interested reader is referred to [30]. The results presented in [30] show that the Hausdorff distance [31] yields the best denoising performance. Apart from that, the method in [32] selects relevant modes using detrended fluctuation analysis (DFA) (originally used with EMD within the EMD-DFA method [24]), which estimates the randomness of data by observing the lack of trend. This method, hereafter referred to as VMD-DFA [32], rejects the noisy BLIMFs and reconstructs the denoised signal based on the remaining modes.
In this paper, we present a novel approach to signal denoising that uses the Cramer-von Mises (CVM) statistic locally on a multiscale signal decomposition obtained through VMD. A nonlinear thresholding scheme based on a goodness-of-fit (GoF) test is utilized to test whether the obtained CVM values (at multiple scales) conform to the noise distribution or not. Those parts of the signal which conform to noise are discarded while the rest are retained. Our approach is different and more effective than other denoising methods, e.g., [27], owing to the inherent robustness of the CVM statistic in testing for a given data distribution; we refer readers to a detailed description of empirical distribution function (EDF) based statistics, including CVM, for detecting normality [33]. Specifically, we propose a robust multistage procedure whereby the predominantly noisy modes are first detected using the CVM distance and are subsequently used to estimate the noise distribution. Finally, an empirical GoF test based on the CVM statistic and the estimated distribution of noise is used to reject the noise coefficients from within the remaining modes. The main contributions of this work include:

• Estimation of the noise distribution model from within the noisy signal, which is facilitated by the effective segregation of noise and true signal by the VMD into separate groups of modes owing to its robustness to noise and mode mixing.

• The use of the robust CVM distance based on the EDF statistic as a means to detect relevant signal modes and at the same time reject the predominantly noisy modes.
• The annihilation of noise from within the remaining relevant signal modes by estimating, using the CVM test, how closely the estimated noise distribution fits the local segments of the selected IMFs.

To validate the performance of our method, extensive computer simulations have been carried out for denoising a variety of benchmark signals corrupted by artificially generated Gaussian noise. Furthermore, the efficacy of the proposed method is demonstrated by denoising a few (real) EEG signals corrupted by actual (non-Gaussian) sensor noise.
The rest of the paper is organized as follows: Section II provides the preliminaries related to the proposed methodology, which is subsequently presented in Section III. Section IV reports experiments analyzing the performance of our proposed work while Section V presents a few practical denoising examples. Finally, conclusions along with the future prospects of this work are discussed in Section VI.
A. Variational Mode Decomposition (VMD)
VMD employs an entirely non-recursive approach to decompose a signal y(t), ∀ t = 1, . . . , N, into K predefined BLIMFs u_k(t), ∀ t = 1, . . . , N. This is achieved by first finding the center frequencies ω_k and then an ensemble of compact BLIMFs by solving the following constrained variational problem [26]

\[
\min_{\{u_k\},\{\omega_k\}} \; \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \tag{1}
\]

\[
\text{subject to} \quad \sum_{k=1}^{K} u_k(t) = y(t), \tag{2}
\]

where δ(t) denotes the Dirac distribution and * represents linear convolution. In order to enforce the constraint (2) that the K modes u_k(t), k = 1, . . . , K, sum to the original signal, Lagrangian multipliers γ(t) are used along with the quadratic data fidelity term \(\big\| y(t) - \sum_{k=1}^{K} u_k(t) \big\|_2^2\), which is included for its accelerated convergence and to ensure a minimum squared error, yielding the augmented Lagrangian [26]

\[
\mathcal{L}\big(\{u_k\},\{\omega_k\},\gamma\big) = \alpha \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| y(t) - \sum_{k=1}^{K} u_k(t) \right\|_2^2 + \left\langle \gamma(t),\, y(t) - \sum_{k=1}^{K} u_k(t) \right\rangle. \tag{3}
\]

This way, the center frequencies ω_k required to find compact modes that successfully avoid mode mixing are estimated by solving (3) using the alternating direction method of multipliers (ADMM) [26]. Further details of the algorithm can be found in [26].
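For readers who want to experiment with this decomposition, the following is a simplified sketch of the frequency-domain ADMM updates in NumPy; it omits the boundary mirroring and several refinements of the reference implementation [26], and the default parameters (K, alpha, tau, the center-frequency initialization) are illustrative choices of this sketch rather than values fixed by the paper.

```python
import numpy as np

def vmd(y, K=10, alpha=2000.0, tau=0.0, tol=1e-7, max_iter=500):
    """Simplified VMD sketch: ADMM updates on the one-sided spectrum."""
    N = len(y)
    Y = np.fft.fft(y)
    freqs = np.fft.fftfreq(N)                   # normalized frequencies
    pos = freqs >= 0
    f, Yp = freqs[pos], Y[pos]                  # one-sided spectrum of the input

    U = np.zeros((K, f.size), dtype=complex)    # one-sided mode spectra
    omega = np.linspace(0.0, 0.5, K, endpoint=False) + 0.25 / K  # initial centers
    lam = np.zeros(f.size, dtype=complex)       # Lagrangian multiplier spectrum

    for _ in range(max_iter):
        U_prev = U.copy()
        for k in range(K):
            # Wiener-like update of mode k around its current center frequency.
            residual = Yp - U.sum(axis=0) + U[k] + lam / 2.0
            U[k] = residual / (1.0 + 2.0 * alpha * (f - omega[k]) ** 2)
            # Center frequency: power-weighted mean of the mode spectrum.
            power = np.abs(U[k]) ** 2
            omega[k] = np.sum(f * power) / (np.sum(power) + 1e-12)
        lam = lam + tau * (Yp - U.sum(axis=0))  # dual ascent (tau = 0 is noise-robust)
        change = np.sum(np.abs(U - U_prev) ** 2) / (np.sum(np.abs(U_prev) ** 2) + 1e-12)
        if change < tol:
            break

    # Rebuild Hermitian-symmetric spectra and invert to time-domain BLIMFs.
    modes = np.zeros((K, N))
    M = f.size
    for k in range(K):
        full = np.zeros(N, dtype=complex)
        full[:M] = U[k]
        full[-(M - 1):] = np.conj(U[k][1:M][::-1])
        modes[k] = np.real(np.fft.ifft(full))
    return modes, omega
```

Calling `modes, omega = vmd(noisy_signal, K=10)` returns the K BLIMFs (rows of `modes`) and their center frequencies, which is the representation the remaining stages of this paper operate on.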
B. Cramer-von Mises (CVM) Statistic
The CVM statistic [34] belongs to a class of statistical distances [33] that estimate how closely a dataset or set of observations follows a given distribution function. In this regard, the CVM statistic requires an estimate of the distribution of the given observations, which is obtained using the EDF. It is worth mentioning that the EDF happens to be a robust model of a distribution even for small-sized data and is easy to compute [35]. More importantly, the EDF is a discrete approximation of the cumulative distribution function (CDF), which means a distribution test is realized by testing how close the EDF of the data at hand is to the CDF of the reference distribution. This type of testing framework is termed a GoF test of a distribution on a given dataset, whereby EDF-based distances, e.g., the Kolmogorov-Smirnov (KS) statistic [36], the Anderson-Darling (AD) statistic [37], the CVM statistic, etc., are used to estimate the measure of fit of the reference CDF on the EDF of the data at hand.
Given the CDF E_0(z) corresponding to the reference distribution and the EDF E(z) of the given observations, the CVM statistic is given as follows

\[
\Delta = \int_{-\infty}^{\infty} \big[ E(z) - E_0(z) \big]^2 \, dE_0(z), \tag{4}
\]

where z denotes the support of the distribution function. Note that (4) involves an indefinite integration and is practically not computable. Therefore, its computable numerical adaptation is presented by D'Agostino in [35] as

\[
\Delta = \frac{1}{12L} + \sum_{t=1}^{L} \left[ E_0\big(z_{(t)}\big) - \frac{2t-1}{2L} \right]^2, \tag{5}
\]

where z_{(t)} denotes the ordered set of observations having finite length L, i.e., t = 1, 2, · · · , L. The GoF test based on the CVM statistic is realized by estimating the significance level or threshold λ that specifies the maximum value of the test statistic Δ in (5) that is sufficient to suggest a close fit. The GoF testing framework checks the following binary hypothesis:

\[
H_0 : E(z) = E_0(z) \ \ \text{(close fit)}, \qquad H_1 : E(z) \neq E_0(z) \ \ \text{(no fit)}, \tag{6}
\]

where H_0 denotes the null hypothesis suggesting a close fit of the null (or reference) distribution on the given data while H_1 denotes the alternate hypothesis of no fit. The threshold parameter λ is estimated by constraining the probability of false alarm (P_fa), i.e., the rate of falsely detecting the alternate hypothesis H_1 given the null hypothesis H_0, which is mathematically stated as follows

\[
P_{fa} = \mathrm{Prob}(\Delta > \lambda \mid H_0), \tag{7}
\]

where Prob(·) denotes the probability of the event stated within the parentheses. Here, P_fa = α, where α is kept very small, e.g., α = 10^{-2} to 10^{-4}, to minimize the false detection of noise as signal [11], [38], [39].
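As a concrete illustration of the computable form in (5), the sketch below evaluates the CVM statistic of a sample against a reference CDF; the standard normal reference, the sample size, and the use of SciPy are illustrative assumptions.

```python
# Sketch: computable CVM statistic of a sample against a reference CDF, cf. (5).
import numpy as np
from scipy.stats import norm

def cvm_statistic(sample, ref_cdf):
    z = np.sort(np.asarray(sample))
    L = z.size
    t = np.arange(1, L + 1)
    return 1.0 / (12 * L) + np.sum((ref_cdf(z) - (2 * t - 1) / (2 * L)) ** 2)

rng = np.random.default_rng(0)
sample = rng.standard_normal(256)
print(cvm_statistic(sample, norm.cdf))   # small value -> close fit to N(0, 1)
```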
III. DESCRIPTION OF PROPOSED APPROACH

Consider the signal model

\[
y(t) = x(t) + \psi(t), \tag{8}
\]

where y(t), x(t) and ψ(t) denote the noisy signal, the true signal, and the additive noise component, respectively, each of length N. It is customary to assume that ψ(t) is modeled as N(0, σ²), i.e., a zero-mean wGn process with variance σ². However, the additive noise in real-life signals may be non-Gaussian. Therefore, the existing denoising approaches developed on the assumption of wGn may have a limited scope for practical signals. In this work, we propose to address noise removal in practical signals using a robust multi-step procedure based on the VMD and the CVM statistic. The main idea of the proposed work is to estimate the noise distribution from the dominantly noisy VMD modes of the noisy signal and then use it as a means to detect the noise coefficients in the rest of the modes using the CVM statistic. As stated earlier, VMD effectively segregates the true signal from noise, whereby signal details are mostly concentrated in a few initial BLIMFs. This is because of the well-posed variational problem in (1) that leads to the expansion of the noisy signal y(t) as an ensemble of a set of K BLIMFs {u_k(t), ∀ k = 1, . . . , K} as given in (2). Given that 1 < k_1 < k_2 < K, these BLIMFs may be largely categorized into the following groups owing to the robust architecture within VMD to segregate signal and noise [30], [32]: the modes with dominant signal, {u_k(t), k < k_1}; the intermediate modes containing both signal and noise, {u_k(t), k_1 < k < k_2}; and the modes with dominant noise, {u_k(t), k > k_2}. The conventional VMD-based denoising approaches exploit this representation to perform the signal denoising whereby the modes with dominant signal, i.e., {u_k(t), k < k_1}, are employed as relevant modes for a partial reconstruction of the denoised signal. The rest of the modes are simply rejected. This methodology results in a significant loss of the desired signal details due to the rejection of the intermediate (signal plus noise) modes, i.e., u_k(t) for k_1 < k < k_2, along with the modes with dominant noise, i.e., {u_k(t) for k > k_2}.
In order to maximally preserve the desired signal information, the proposed framework only rejects the dominantly noise modes while the remaining modes are preserved as relevant signal modes using the CVM statistic. Hence, our definition of relevant modes includes the initial modes containing mostly signal and intermediate modes composed of both signal and noise. Subsequently, the selected relevant BLIMFs are cleansed of noise via a statistical thresholding function that operates by first estimating the noise distribution from the rejected noise modes which is then used to detect noise coefficients from selected modes using the CVM test. Finally, the denoised signal is partially reconstructed using the thresholded BLIMFs. This robust multistage procedure is depicted using the block diagram in Fig. 1 where each stage is explained in detail in the subsequent sections.
A. Relevant Mode Selection
This section details the process adopted to select BLIMFs containing signal information with or without noise, i.e., the relevant modes {u k (t), k < k 2 }. This is achieved by detecting the BLIMFs entirely composed of noise, i.e., {u k (t), k > k 2 }. In this regard, CVM distance based on EDF statistics is used to estimate the statistical distance between the noisy signal and the individual BLIMF.
1) Rationale: To identify relevant signal modes, conventional VMD denoising approaches investigate the signal content in a BLIMF by computing some distance measure D_k between the empirical PDFs of the noisy signal and the BLIMFs, i.e.,

\[
D_k = \mathrm{dist}\big(p_y, \, p_{u_k}\big), \tag{9}
\]

where p_y and p_{u_k} respectively denote the empirical PDFs of the noisy signal y(t) and the kth BLIMF u_k(t). The empirical PDFs p_y and p_{u_k} are estimated by dividing the data at hand into a finite number of bins, leading to the construction of a PDF over these bins, e.g., the use of the KS-density function in [28]-[30]. The issue with this approach is that all the data elements in a bin are assigned the same probability as the probability of the bin in which they reside. This compromises the individuality of the data points within the bin, resulting in a less robust estimate of the distribution, especially for small-sized data.
A robust estimate may be obtained by using the EDF in (5), which is a discrete approximation of the CDF of the data distribution. This makes the EDF a robust estimator of the data distribution even for small-sized data. Consequently, the EDF is frequently used within GoF-based hypothesis testing in various practical applications [38], [40]. Therefore, we propose to use an EDF-based distance to obtain a robust estimate of the distance between the noisy signal y(t) and its BLIMFs u_k(t), i.e.,

\[
D_k = \mathrm{dist}\big(E_y(z), \, E_{u_k}(z)\big), \tag{10}
\]

where E_y(z) and E_{u_k}(z) respectively denote the EDFs of the noisy signal and the kth BLIMF u_k(t).
2) Estimating D_k using the CVM Statistic: In order to obtain an estimate D̂_k of the actual (statistical) distance D_k between the kth mode u_k(t) and the noisy signal y(t), both of size N, we use the CVM statistic as follows

\[
\hat{D}_k = \frac{1}{12N} + \sum_{t=1}^{N} \left[ \hat{E}_y\big(u_{k,(t)}\big) - \frac{2t-1}{2N} \right]^2, \tag{11}
\]

where u_{k,(t)} denotes the ordered samples of u_k(t) and an estimate of the signal EDF Ê_y(z) is computed from the noisy signal y(t) through

\[
\hat{E}_y(z) = \frac{1}{N} \sum_{t=1}^{N} \mathbb{1}\big(y(t) \leq z\big). \tag{12}
\]

Here, z denotes the support of the distribution function and t denotes the time index of the data values. The operation 1(y(t) ≤ z) results in a binary decision (i.e., 0 or 1) at every index t of the summation, whereby, for a given z, the number of values of y(t) less than or equal to z is accumulated.
In order to develop an insight on how CVM statistic estimates the distance between the noisy signal and the BLIMFs, consider Fig. 2 which plots EDFs of a few selected BLIMFs with the estimated reference EDFÊ y (z) from the noisy signal. These results are obtained for the benchmark signals 'Bumps' and 'Blocks' (shown later when we present the detailed simulation results). It is observed in Fig. 2, that the EDFs of first and second BLIMFs are closest to the EDF of noisy Bumps and Blocks signals that essentially means these initial modes are mostly signal. Contrarily, the EDFs of fourth and sixth BLIMFs are further from the reference EDF which means these higher modes have lesser signal content and more noise.
3) Criteria for Selection of Relevant Modes: The above-detailed discussion indicates that the relevant signal modes may be selected by evaluating the slopes of the distances between consecutive BLIMFs [30], [32]. Naturally, a significant change in slope between two adjacent BLIMFs means a rapid decline of signal content when moving from the earlier to the latter. This ensures that the signal will decline further in the forthcoming BLIMFs with the increase in noise. Consequently, the existing methods [30], [32] employ the maximum slope in the distance curve to determine a threshold, k_1, to select the relevant modes containing signal details,

\[
k_1 = \arg\max_{k} \; S_k, \tag{13}
\]

where S_k denotes the slope of the distances of the kth and (k+1)th BLIMFs, computed via

\[
S_k = \hat{D}_{k+1} - \hat{D}_k. \tag{14}
\]

Consider Fig. 3 which plots the CVM distances of the modes of two benchmark (noisy) signals 'Bumps' and 'Heavy Sine' (shown later when we present the detailed simulation results). It is seen from Fig. 3 that the CVM distances D̂_k corresponding to the 'Heavy Sine' signal show the maximum slope between the first and second BLIMFs, and the slope declines massively when moving to the second BLIMF and the subsequent ones. This essentially means that most of the signal content is concentrated in the first BLIMF. A similar observation can be made for the 'Bumps' signal. It is seen that the maximum slope is observed between the second and third BLIMFs, and decreases rapidly in the latter modes. This shows that the signal content is largely concentrated in the first two BLIMFs.
The above-detailed procedure, however, selects only dominantly signal modes as the relevant ones {u k (t), k < k 1 } for partial reconstruction of the denoised signal. By this definition, the rejected noise modes {u k (t), k > k 1 } include the intermediate signal plus noise modes {u k (t), k 1 < k < k 2 } that causes loss of signal details. To address this issue, we suggest rejection of only purely noise modes {u k (t), k > k 2 } and retention of all the modes containing signal (with or without noise) {u k (t), k < k 2 }. The selection of relevant modes according to new definition, i.e., modes containing signal (with or without noise) {u k (t), k < k 2 }, requires estimation of mode index k 2 that indicates the start of purely noise modes.
For this purpose, we alter the criteria discussed above by dividing the CVM distance curve (plotted in Fig. 3 for instance) into transient and stable regions, where the former relates to the purely signal modes while the latter relates to the modes with noise. It is observed that the CVM distances plotted in Fig. 3 have a transient phase before the maximum slope, and the region after that can be categorized as the stable phase for all the BLIMFs. The transient phase ends with the maximum slope, which can be seen as the threshold for selecting the signal-only modes, while an estimate of the modes containing signal plus noise may be obtained from the stable phase. We suggest looking for the maximum slope within the stable region, which indicates the point of maximum change from partially noisy modes to purely noisy modes. Understandably, the maximum slope in the stable region separates the purely noise modes from the modes with signal. Based on the above discussion, the maximum slope in the stable region may be obtained as follows

\[
k_2 = \arg\max_{k > k_1} \; S_k, \tag{15}
\]

where k_1 is obtained from (13) and k_2 denotes the index of the mode that is followed by noise-only modes. Mathematically, the thresholding criterion for the selection of relevant modes and the rejection of noise-only modes is then given below

\[
u_k(t) = \begin{cases} \text{relevant (retained)}, & k \leq k_2, \\ \text{noise (rejected)}, & k > k_2. \end{cases} \tag{16}
\]
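A compact sketch of this selection stage, following the definitions in (11)-(16), is given below; the EDF helper, the use of a plain difference for the slopes, and the guard for the degenerate case where no stable region remains are implementation choices of this sketch rather than details fixed by the paper.

```python
import numpy as np

def edf(data):
    """Return a callable empirical distribution function built from `data`, cf. (12)."""
    z = np.sort(np.asarray(data))
    return lambda q: np.searchsorted(z, q, side="right") / z.size

def cvm_distance(sample, ref_cdf):
    """Computable CVM distance of `sample` to a reference CDF, cf. (5) and (11)."""
    z = np.sort(np.asarray(sample))
    L = z.size
    t = np.arange(1, L + 1)
    return 1.0 / (12 * L) + np.sum((ref_cdf(z) - (2 * t - 1) / (2 * L)) ** 2)

def select_relevant_modes(modes, y):
    """Split BLIMFs into relevant (signal-bearing) and rejected (noise-only) groups."""
    ref = edf(y)                                               # EDF of the noisy signal
    D = np.array([cvm_distance(u, ref) for u in modes])        # distance of each BLIMF, (11)
    S = np.diff(D)                                             # slopes between consecutive BLIMFs, (14)
    k1 = int(np.argmax(S))                                     # end of the transient region, (13)
    rest = S[k1 + 1:]
    k2 = k1 + 1 + int(np.argmax(rest)) if rest.size else k1    # max slope in the stable region, (15)
    return modes[: k2 + 1], modes[k2 + 1:]                     # retained vs. rejected modes, (16)
```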
B. Estimation of Noise Distribution from the Rejected Modes
In this step, the distribution model governing noise within the VMD BLIMFs is estimated empirically from the rejected modes {u_k(t), k > k_2}, which are mostly composed of noise. This estimation of the noise EDF is an essential part of the proposed approach, as indicated by the block 'CVM Test' in Fig. 1. The process adopted for empirical estimation of the noise distribution is illustrated in detail in Fig. 4, whereby, given a large-sized dataset from an unknown distribution, a good estimate of its CDF may be empirically obtained using the ensemble average of the EDFs of its local segments. This is a standard procedure used by statisticians to empirically estimate the unknown distribution function governing a given dataset [41]-[43].
Let u_k^{(n)}(t) = {u_k(t), k > k_2}, ∀ t = 1, . . . , N denote the rejected noise BLIMFs, to differentiate these noise modes from the relevant signal modes. First, we divide each rejected noise mode u_k^{(n)}(t) into non-overlapping segments {u_k^{(n)}(t), t = j − L/2, . . . , j + L/2} of equal size L + 1, each centered around an index j (depicted using boxes drawn on the rejected noise coefficients at the bottom left of Fig. 4). Next, the EDF of each segment is computed as

\[
E_{jk}^{(n)}(z) = \frac{1}{L+1} \sum_{t=j-L/2}^{j+L/2} \mathbb{1}\big(u_k^{(n)}(t) \leq z\big). \tag{17}
\]

This way, the EDFs of all of the non-overlapping segments are computed using (17); this step is depicted using the EDF blocks in Fig. 4. Finally, ensemble averaging the EDFs of all the segments from the rejected modes yields a close estimate Ê_0(z) of the actual noise CDF,

\[
\hat{E}_0(z) = \frac{1}{S} \sum_{s=1}^{S} E_s^{(n)}(z), \tag{18}
\]

where S denotes the total number of segments and the accuracy of the estimate Ê_0(z) ≈ E_0(z) increases with the number of segments, i.e., with the length of the dataset. Fig. 4 plots the resulting estimate Ê_0(z) of the noise CDF E_0(z) that is obtained from the rejected modes of a noisy signal corrupted by additive wGn at SNR = 10 dB.
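The ensemble-averaging step in (17)-(18) can be sketched as follows; the segment length, the grid resolution, and the evaluation of all EDFs on a common amplitude grid are simplifications introduced here for illustration.

```python
import numpy as np

def estimate_noise_cdf(rejected_modes, seg_len=64, n_grid=512):
    """Estimate the noise CDF E0_hat by averaging segment EDFs from the rejected BLIMFs."""
    samples = np.concatenate([np.asarray(m) for m in rejected_modes])
    grid = np.linspace(samples.min(), samples.max(), n_grid)   # common support z
    segment_edfs = []
    for mode in rejected_modes:
        mode = np.asarray(mode)
        for s in range(len(mode) // seg_len):
            seg = np.sort(mode[s * seg_len:(s + 1) * seg_len])
            # EDF of this segment evaluated on the grid, cf. (17).
            segment_edfs.append(np.searchsorted(seg, grid, side="right") / seg_len)
    return grid, np.mean(segment_edfs, axis=0)                 # ensemble average, cf. (18)
```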
C. Thresholding Relevant Modes Using CVM Test
This section describes the CVM statistic-based testing framework used to reject noise from the selected relevant modes {u_k(t), k ≤ k_2}. The aim here is to reject the coefficients corresponding to noise ψ(t) without losing those corresponding to the true signal x(t). In this regard, the detection of noise coefficients from the selected modes is defined as a local hypothesis testing problem by selecting a local segment u_jk = {u_k(t), ∀ t = j − L/2, . . . , j + L/2} of size L + 1, around each coefficient u_k(j), from a selected BLIMF {u_k(t), k ≤ k_2}, as follows

\[
\hat{H}_0 : \mathbf{u}_{jk} \in \psi(t), \qquad \hat{H}_1 : \mathbf{u}_{jk} \notin \psi(t), \tag{19}
\]

where Ĥ_0 and Ĥ_1 respectively denote the null and alternate hypotheses of our VMD-based denoising approach. In order to test the hypothesis given in (19), i.e., to check the possibility that u_jk ∈ ψ(t), the EDF E_jk(z) of the local segment u_jk is computed based on (17) and then the goodness of fit (GoF) of E_jk(z) is tested on the estimated noise EDF Ê_0, i.e.,

\[
\hat{H}_0 : E_{jk}(z) \sim \hat{E}_0(z), \qquad \hat{H}_1 : E_{jk}(z) \nsim \hat{E}_0(z). \tag{20}
\]

Here, the symbol ∼ denotes a close fit and ≁ denotes no fit of the EDFs, which is decided based on the value of the CVM distance Δ_jk between E_jk(z) and Ê_0(z) computed using (5). To achieve that, a threshold λ_k is estimated such that false detections of the null hypothesis H_0, referred to as false alarms, are minimized. In essence, λ_k indicates the maximum possible value of the distance Δ_jk required to suggest a close fit between the two EDFs. Therefore, a close fit E_jk(z) ∼ Ê_0(z) is detected when Δ_jk is within the specified bound, i.e., Δ_jk ≤ λ_k. On the other hand, the case of no fit E_jk(z) ≁ Ê_0(z) is obtained when the distance exceeds the specified bound of the threshold, i.e., Δ_jk > λ_k.
1) Selection of Threshold Based on Rejected Noise Modes: Within GoF tests, the threshold or critical value λ k is selected for each mode k that serves as an upper bound on the CVM distances ∆ jk . Generally, λ is selected for very small value of P fa (7) which ensures least false rejections of the null hypothesis H 0 . That means false detections of no-fit (i.e., H 1 ) when the reference EDF actually fits the given data samples (i.e., H 0 ) are minimized. In the context of the testing problem (21), very small P fa means threshold is selected such that it minimizes the false detection of noise (i.e.Ĥ 0 ) as true signal (i.e.Ĥ 1 ). Hence, this requirement of minimum P fa fits right in the denoising problem since the goal in denoising is to ensure maximum noise is removed which can be achieved by minimizing the P fa while attempting to maximize the preservation of the true signal.
Conventionally, threshold selection is performed by accumulating the probabilities of false detection, i.e., P_fa = Prob(Δ_jk > λ | Ĥ_0). The PDF p(u) of the noise coefficients, required for this computation, may be obtained by computing the derivative of the empirically estimated noise CDF Ê_0(z) in (18). Consequently, an empirical adaptation of (21) to our denoising problem may be obtained by accumulating the false detections directly on the noise data. Since the detection of the range of coefficients {u_k^{(n)}(t) : Δ_jk > λ | Ĥ_0} is central to the computation of P_fa for a given threshold λ_k, the proposed empirical approach estimates these coefficients from the rejected noise modes {u_k(t), k > k_2} ∈ ψ(t).
To begin with, a range of candidate thresholds is selected and the noise coefficients within each rejected mode u_k^{(n)}(t) are divided into M windows of size L + 1. Next, each candidate threshold λ is used within the CVM test applied to the coefficients of each window. Therein, the CVM statistic Δ_jk between the EDF E(z) of a window of noise coefficients and the reference noise EDF Ê_0(z) is computed through (5), followed by hypothesis testing based on (20). For each λ, the probability of false alarm P_fa is computed by recording the instances of erroneously detecting noise segments (from the rejected modes) as signal and then dividing the accumulated false alarms by the total number of windows M.
This way, a threshold versus P fa table is estimated by computing the P fa for all the candidate thresholds λ. The relationship between the threshold λ and the P fa , obtained from the rejected BLIMFs of a given input noisy signal having SNR = 10 dB, is graphically shown in Fig. 4 (bottom right).
In general, a higher number of false alarms (i.e., a higher P_fa) is observed for lower thresholds, but as the threshold value increases, the P_fa decreases.
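The empirical construction of this threshold-versus-P_fa table can be sketched as follows; the candidate-threshold range, the window size, and the interpolation of the estimated noise CDF are illustrative choices, and the helper conventions reuse those of the earlier sketches.

```python
import numpy as np

def threshold_vs_pfa(rejected_modes, grid, E0_hat, win=64, candidates=None):
    """Estimate Pfa(lambda) from windows of the rejected (noise-only) BLIMFs."""
    if candidates is None:
        candidates = np.linspace(0.01, 2.0, 100)      # illustrative candidate thresholds
    ref_cdf = lambda q: np.interp(q, grid, E0_hat)    # estimated noise CDF from (18)
    t = np.arange(1, win + 1)
    deltas = []
    for mode in rejected_modes:
        mode = np.asarray(mode)
        for s in range(len(mode) // win):
            seg = np.sort(mode[s * win:(s + 1) * win])
            deltas.append(1.0 / (12 * win)
                          + np.sum((ref_cdf(seg) - (2 * t - 1) / (2 * win)) ** 2))
    deltas = np.asarray(deltas)
    # Pfa(lambda): fraction of purely-noise windows erroneously declared "no fit" (signal).
    pfa = np.array([(deltas > lam).mean() for lam in candidates])
    return candidates, pfa
```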
Afterwards, a threshold is selected for a given P_fa from the estimated threshold λ versus P_fa table. Here, it is important to consider the trade-off between the P_fa and the probability of true-signal detection P_d when selecting a threshold for noise reduction. Generally, P_d also decreases with a decrease in P_fa, i.e., signal is falsely (or erroneously) rejected as noise for lower P_fa; see [44] for more insight into this matter. Therefore, to avoid loss of signal from the initial BLIMFs (which are mostly composed of signal), a higher P_fa is selected to keep P_d high as well, i.e., signal is not falsely rejected as noise. On the other hand, a lower P_fa is chosen for the latter modes (with a dominant noise component) to reject maximum noise. To that end, a scale-adaptive, separate threshold λ_k is selected for each of the relevant modes u_k(t) using a decaying function of the mode index, where P_fa^{(k)} denotes the false alarm probability of the kth mode. The decaying function assigns a higher P_fa to the initial signal modes (i.e., to recover maximum signal) and a smaller P_fa to the latter noise BLIMFs (i.e., to reject maximum noise).
2) Thresholding Function: The traditional hard-thresholding function detects noise coefficients based on their smaller amplitudes via a threshold, which is well adapted to wavelet denoising. However, the use of a thresholding function that detects noise based on the amplitude difference among coefficients is not properly motivated for VMD denoising. This is because VMD denoising methods do not exploit the sparsity of the multiscale decomposition; instead, they estimate the local trend to separate the signal content from noise [19], [32]. The proposed approach works on the same principle of estimating the local trend in a BLIMF to detect and reject noise. In this work, the local trend is estimated using the EDF E_tk(z) of the local segment u_tk that is selected around each coefficient at location t. Subsequently, it is checked whether E_tk(z) is close to that of the noise, Ê_0(z), by estimating the CVM distance Δ_tk between the two EDFs using (11), and the result is then compared against the threshold λ_k, i.e.,

\[
\hat{u}_k(t) = \begin{cases} u_k(t), & \Delta_{tk} > \lambda_k \ \ (\text{no fit to noise}), \\ 0, & \Delta_{tk} \leq \lambda_k \ \ (\text{close fit to noise}). \end{cases}
\]

Finally, the denoised signal is reconstructed from the thresholded relevant BLIMFs {û_k, k ≤ k_2} as follows

\[
\hat{x}(t) = \sum_{k \leq k_2} \hat{u}_k(t),
\]

where x̂(t) is an estimate of the true signal x(t) obtained using the proposed approach. In the rest of the paper, we will refer to the proposed approach as VMD-CVM.
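Bringing the pieces together, the final thresholding stage can be sketched as below; for simplicity this version uses non-overlapping windows rather than a window around every coefficient, and the per-mode thresholds `lambdas` are assumed to come from the threshold-versus-P_fa table of the previous stage.

```python
import numpy as np

def cvm_threshold_and_reconstruct(relevant_modes, grid, E0_hat, lambdas, win=64):
    """Zero out segments of the relevant BLIMFs that fit the noise model, then sum."""
    ref_cdf = lambda q: np.interp(q, grid, E0_hat)
    t = np.arange(1, win + 1)
    cleaned = []
    for u, lam in zip(relevant_modes, lambdas):        # one scale-adaptive threshold per mode
        u = np.asarray(u, dtype=float)
        u_hat = u.copy()
        for s in range(len(u) // win):
            seg = np.sort(u[s * win:(s + 1) * win])
            delta = 1.0 / (12 * win) + np.sum((ref_cdf(seg) - (2 * t - 1) / (2 * win)) ** 2)
            if delta <= lam:                           # close fit to noise -> reject the segment
                u_hat[s * win:(s + 1) * win] = 0.0
        cleaned.append(u_hat)
    return np.sum(cleaned, axis=0)                     # partial reconstruction x_hat
```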
IV. SIMULATION RESULTS AND DISCUSSION

In this section, simulation results are presented to demonstrate the effectiveness of the proposed method. The following methods have been considered for the performance comparison.
• VMD-DFA [32]: Performs partial reconstruction by rejecting the VMD BLIMFs exhibiting a lack of trend (i.e., randomness) detected using the DFA.
• DWT-GOF [11]: Tests the normality of DWT coefficients to detect and reject noise.
• DT-GoF-NeighFilt [12]: Exploits the quasi-translation invariance of the DTCWT using the normality test and neighborhood-classification-based filtering for effective noise removal.

The following performance measures have been employed for the performance comparison:

• Signal-to-noise ratio (SNR);
• Mean squared error (MSE).

The test datasets include both real and synthetic signals. Among those, the synthetic signals include 'Blocks', 'Bumps', 'Heavy Sine', and 'Doppler', respectively plotted in Fig. 5 (a)-(d). The real signals include 'Sofar' and 'Tai Chi', as shown in Fig. 5 (e) and (f), respectively, whereby the former signal records the oceanographic float drift of the water flowing through the Mediterranean Sea [47] and the latter signal tracks the human body movements in a Tai Chi sequence using a 3D sensor attached to the ankles [17].
A. Experimental Settings
We report several experiments to study and analyze different aspects of the proposed method when compared against the state of the art. In this regard, noisy signals are generated by adding wGn (at varying input SNRs) to the aforementioned input signals shown in Fig. 5. These noisy signals are subsequently denoised using the comparative methods, where quantitative measures of performance (i.e., SNR and MSE) are obtained by comparing the clean input signal against the denoised one; these are reported in tabular as well as graphical form. The qualitative analysis of the comparative methods is presented by visually demonstrating how closely the denoised signals follow their corresponding clean input signals. For VMD-based denoising methods, we chose the number of user-defined modes K = 10 unless specified otherwise by the method. For wavelet denoising, the multiscale decomposition was performed using a Daubechies filter bank with eight vanishing moments (i.e., 'db8') and M = 5 decomposition levels. In contrast, complex wavelet filters were used when decomposing with the DTCWT. The rest of the simulation parameters for the various methods have been selected on the basis of the guidelines provided in the respective references. Table I reports the output SNR and MSE values of the denoised signals obtained for the various methods considered in this paper. In this regard, input signals including the synthetic 'Bumps' and 'Blocks' signals (of 2^12 samples) and the real 'Sofar' and 'Tai Chi' signals (of 2^10 samples) are corrupted by wGn such that the input SNRs become −5 dB, 0 dB, 5 dB and 10 dB. For each method, the output SNR and MSE values reported in Table I are averages over J = 20 realizations. The best results, i.e., the highest output SNR and the corresponding MSE values, are highlighted in bold.
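The experimental protocol above reduces to a few small utilities, sketched here for completeness; the function names and the fixed random seed are illustrative.

```python
import numpy as np

def add_wgn(x, snr_db, rng=None):
    """Corrupt a clean signal with white Gaussian noise at a target input SNR (in dB)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise_power = np.mean(x ** 2) / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)

def output_snr_db(x, x_hat):
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))

def mse(x, x_hat):
    return np.mean((x - x_hat) ** 2)
```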
B. Input SNR vs. Output SNR
It can be seen from Table I that the proposed VMD-CVM method outperforms the state-of-the-art methods for almost all input signals, except for a few cases where the DT-GOF-NeighFilt yields better performance than the proposed VMD-CVM. The rest of the data-driven denoising methods based on VMD or EMD fall behind the wavelet denoising methods used in this study. This superior performance of the proposed method is owed to the robust multistage procedure that recovers signal within the noisy modes, which are rejected as noise in other VMD-based methods.
An important observation in
The error-bar plot for 'Blocks' signal in Fig. 6 (a) demonstrates that proposed VMD-CVM shows best results for signal length N ≥ 2 12 , while DTCWT-GoF yields highest SNRs for N < 2 12 . Similarly, For 'Bumps' signal in Fig. 6 (b), proposed VMD-CVM yields highest output SNRs for all signal lengths except N = 2 13 where DT-GOF-NeighFilt marginally betters our method. For 'Heavy Sine' and 'Doppler' signal in Fig. 6 (c & d), the VMD-CVM and DT-GOF-NeighFilt methods closely follow each other and outperform the rest of the comparative methods by a significant margin while yielding similar results on all the lengths. Among these two methods, the proposed method yields highest mean output SNR along with higher standard deviation apart from the odd case where DT-GOF-NeighFilt yields better results.
It can be concluded from these results that the proposed VMD-CVM stands out together with DT-GOF-NeighFilt, which yielded comparably effective denoising performance. VMD-CVM mostly yielded the top SNR values, especially for longer input signals, whereas for the shortest length N = 2^10 DT-GOF-NeighFilt generally outperformed the proposed method.
D. Qualitative Performance Analysis
The qualitative analysis demonstrates how closely the denoised signals from the various methods resemble their corresponding true, noise-free signals. Generally, this is shown by plotting the denoised signals along with the original (noise-free) one, which enables the reader to visualize how well the denoising methods extract signal details from the noisy signal. To that end, we plot the denoised 'Bumps' and 'Tai Chi' signals along with the original ones in Fig. 7 and Fig. 8, respectively, where the corresponding noisy signals are also shown for comparison. We compare the visual results of the proposed VMD-CVM method against the top comparative state-of-the-art methods, namely BLFDR, EMD-IT, VMD-DFA, GoF-DWT and DT-GOF-NeighFilt. The denoised signals were obtained by denoising the noisy 'Bumps' signal (shown in Fig. 7(a)) and the noisy 'Tai Chi' signal (shown in Fig. 8(a)), where the noisy version at input SNR = 10 dB is shown in gray while the true signal is shown in dark black.
It is observed from Fig. 7 that the proposed VMD-CVM method yields the best estimate of the original signal. The denoised 'Bumps' signals from DWT-GoF and DT-GOF-NeighFilt also give very close estimates of the original signal, as can be seen from Fig. 7(c and d), but both of these methods suffer from artifacts. That overshadows their efficiency in extracting signal details when compared to the proposed VMD-CVM, which yields an equally close estimate of the original signal but without artifacts; see Fig. 7(e). More visible spike artifacts are found in the 'Bumps' signal denoised by EMD-IT, see Fig. 7(b), which deteriorate the overall quality of the denoised signal when compared to the original one. The denoised 'Tai Chi' signals from the comparative methods are plotted in Fig. 8(b)-(d), where it can be seen that BLFDR and VMD-DFA fail to recover the peaks and the highly varying parts of the signal. This is owing to the complex structure of the 'Tai Chi' signal, composed of subtle variations over a wide range of frequencies, which are challenging to extract in the presence of noise. A better estimate of the true signal is obtained by DT-GOF-NeighFilt, which largely recovers the variations while doing away with the noise; however, it fails to capture the subtle variations, especially in the last half of the denoised signal, see Fig. 8(e). The best estimate of the 'Tai Chi' signal is obtained by the proposed VMD-CVM, which captures the subtle variations throughout the signal, as can be observed from Fig. 8(f). Apart from the sharp changes situated in the middle of this signal, the proposed method recovers all the details of the real 'Tai Chi' signal, demonstrating its effectiveness for complex real-world signals.
Furthermore, VMD-DFA yields exaggerated variations as artifacts after denoising, see Fig. 8(c). This is owing to its partial-reconstruction nature, in which relevant modes are selected to reconstruct the denoised signal while the noise present within the selected modes is ignored. Observe from Fig. 8(f) that the proposed approach does not suffer from this issue because our method performs thresholding on the selected relevant modes to reject the coefficients exhibiting noise-like statistics. Consequently, the reconstruction of the denoised signal from the cleansed, thresholded BLIMFs successfully avoids the artifacts otherwise seen in the results of the VMD-DFA method.
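To make the distinction concrete, the following Python sketch (an illustration under stated assumptions, not the authors' implementation) shows the EDF-based rejection idea discussed above: the empirical noise sample is pooled from the modes already rejected as noise, and local segments of the retained modes are zeroed whenever a two-sample Cramér-von Mises test finds them consistent with that noise distribution. The segment length, the significance level, and the zeroing of whole segments are simplifying assumptions, and the modes array is assumed to come from any VMD implementation.

import numpy as np
from scipy.stats import cramervonmises_2samp

def cvm_clean(modes, noisy_idx, seg_len=64, alpha=0.05):
    # modes: (K, N) array of BLIMFs; noisy_idx: indices of modes flagged as noise-dominated.
    noise_sample = np.concatenate([modes[i] for i in noisy_idx])     # empirical noise reference (EDF)
    keep_idx = [i for i in range(len(modes)) if i not in set(noisy_idx)]
    cleaned = []
    for i in keep_idx:
        m = modes[i].copy()
        for s in range(0, len(m) - seg_len + 1, seg_len):
            seg = m[s:s + seg_len]
            # Two-sample CVM goodness-of-fit test of the local segment against the noise sample.
            if cramervonmises_2samp(seg, noise_sample).pvalue > alpha:
                m[s:s + seg_len] = 0.0                               # segment fits the noise EDF: reject it
        cleaned.append(m)
    return np.sum(cleaned, axis=0)                                   # denoised signal from the cleansed modes

Unlike a partial reconstruction that simply sums the retained modes, the residual noise inside those modes is filtered segment by segment before summation, which is what suppresses the exaggerated-variation artifacts noted above.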
V. REMOVAL OF SENSOR NOISE
In this section, we present denoising results of the proposed method when applied to an ECG signal corrupted by sensor noise. The raw ECG signal is taken from [48]; it is corrupted by actual sensor noise, which is typically modeled with a non-Gaussian distribution even though the thermal noise of the electronic components follows a Gaussian distribution. As a result, the noise largely obscures the useful information within the subtle variations of the ECG signal, as can be observed in Fig. 9(a), where the noisy ECG signal is shown in gray along with a clean version of the raw ECG signal, also available in [48], which is used as ground truth.
To address this challenging problem, we used the proposed method, which estimates the distribution of noise/artifacts from within the noisy signal and subsequently uses it to reject the noise. The resulting denoised signal is plotted in Fig. 9(b) (in dark black), with the clean signal shown (in gray) in the background. The effectiveness of our method is demonstrated by how closely the denoised version follows the clean ECG signal. Evidently, from Fig. 9(b), the denoised version closely follows the clean signal, recovering important details including the sharp peaks and the slower variations. Despite some residual noise artifacts near the sharp peaks, the overall quality of the recovered ECG remains intact, verifying the efficacy of the proposed method in suppressing sensor noise while retaining the subtle variations that were previously hidden by it.
VI. CONCLUSIONS
In this paper, we have addressed the problem of noise removal from practical signals in which the noise is considered to be governed by an unknown probability distribution. We propose to exploit the desirable properties of VMD to estimate the EDF of the noise from within the noisy signal. As stated earlier, VMD possesses the ability to segregate signal and noise into separate groups of BLIMFs owing to its robustness to noise and mode mixing. First, we detect the group of BLIMFs predominantly composed of noise using the CVM statistic, followed by the empirical estimation of the noise EDF from these rejected modes. Subsequently, the estimated distribution is used as a means to detect and reject the noise coefficients (i.e., the coefficients fitting the estimated noise EDF) from the remaining modes. The goodness of fit of the reference noise EDF on each local segment is assessed by the CVM-GoF test. The effectiveness of the proposed method has been demonstrated by comparing its performance against the state-of-the-art methods, and the proposed method comprehensively outperformed the rest of the methods considered in this paper. In addition, the efficacy of the proposed method has also been demonstrated on the problem of removing sensor noise governed by an unknown distribution; for this purpose, we took the example of an ECG signal corrupted with sensor noise and showed that the proposed method successfully removes the noise. Future prospects of this work include its use in other practical applications where noise must be removed before further signal processing, e.g., denoising of lidar signals, vibration signals from heavy mechanical systems, etc.
Higher mobility of butterflies than moths connected to habitat suitability and body size in a release experiment
Mobility is a key factor determining lepidopteran species responses to environmental change. However, direct multispecies comparisons of mobility are rare and empirical comparisons between butterflies and moths have not been previously conducted. Here, we compared mobility between butterflies and diurnal moths and studied species traits affecting butterfly mobility. We experimentally marked and released 2011 butterfly and 2367 moth individuals belonging to 32 and 28 species, respectively, in a 25 m × 25 m release area within an 11-ha, 8-year-old set-aside field. Distance moved and emigration rate from the release habitat were recorded by species. The release experiment produced directly comparable mobility data in 18 butterfly and 9 moth species with almost 500 individuals recaptured. Butterflies were found more mobile than geometroid moths in terms of both distance moved (mean 315 m vs. 63 m, respectively) and emigration rate (mean 54% vs. 17%, respectively). Release habitat suitability had a strong effect on emigration rate and distance moved, because butterflies tended to leave the set-aside, if it was not suitable for breeding. In addition, emigration rate and distance moved increased significantly with increasing body size. When phylogenetic relatedness among species was included in the analyses, the significant effect of body size disappeared, but habitat suitability remained significant for distance moved. The higher mobility of butterflies than geometroid moths can largely be explained by morphological differences, as butterflies are more robust fliers. The important role of release habitat suitability in butterfly mobility was expected, but seems not to have been empirically documented before. The observed positive correlation between butterfly size and mobility is in agreement with our previous findings on butterfly colonization speed in a long-term set-aside experiment and recent meta-analyses on butterfly mobility.
Introduction
Dispersal ability is a key factor affecting occurrence patterns and population trends in animals (Ewers and Didham 2006). Ongoing changes in land use and climate also pose strong selective pressures on species traits that are connected to animal mobility (Bonte et al. 2012;Baguette et al. 2013). An increased need to understand the impacts of environmental change at population and community levels has recently attracted much interest in the measurement of mobility differences across individuals, populations, and species (Bowler and Benton 2005;Clobert et al. 2012). However, despite the accumulating experience in estimating mobility (Nathan et al. 2008), producing reliable multispecies comparisons has remained a challenging task. Here, we used butterflies and moths for a multispecies mobility comparison to examine differences in dispersal ability among species and between species groups. Butterflies are one of the most popular groups in animal mobility research (Stevens et al. 2010), whereas knowledge on other insect groups, even among Lepidoptera, has remained scanty.
Several previous studies on butterflies have demonstrated the important role of interspecific mobility differences in species distributions and species responses to habitat and climate change. For example, the effects of habitat fragmentation have been shown to differ between butterfly species with varying mobility (Öckinger et al. 2009, 2010). Öckinger et al. (2010), using body size as a proxy for mobility, showed that butterfly species with low mobility have been most strongly affected by habitat loss, and other studies have reported similar results. Kotiaho et al. (2005) found that threatened butterfly species are characterized by low mobility, and the meta-analysis by Thomas et al. (2011) showed that dispersal ability is one of the main drivers of long-term butterfly population trends. These results indicate that dispersal ability may crucially affect how species can cope with global threats such as climate change and habitat loss and fragmentation.
Moreover, recent studies have highlighted the importance of intraspecific variation in mobility and that relatively fast microevolutionary changes in dispersal ability and emigration propensity may play a significant role when species are adapting to changing environments (Merckx et al. 2003;Schtickzelle et al. 2006;Duplouy et al. 2013). Fast evolutionary changes may influence ecological population dynamics and vice versa, potentially causing complex eco-evolutionary dynamics in dispersal (Hanski and Mononen 2011). However, the large number of factors influencing evolution of dispersal complicates predictions on what would be the optimal dispersal strategy in different landscapes and in case of different population structures .
Butterfly mobility has been empirically studied using a number of different approaches (Stevens et al. 2010; Sekar 2012). The most popular approach has been to conduct mark-release-recapture (MRR) studies in natural butterfly (meta)populations (Hovestadt and Nieminen 2009). However, mobility estimates from different single-species MRR studies are not directly comparable, because the results are strongly dependent on the spatial scale (Schneider 2003; Franzén and Nilsson 2007) and landscape structure (Mennechez et al. 2003; Dover and Settele 2009) of different studies. Manipulative experimental approaches have made it possible to answer more specific questions concerning different components of butterfly mobility and to carry out intra- and interspecific comparisons. However, experimental releases of butterflies in the field (Söderström and Hedblom 2007; Kallioniemi et al. 2014) and studies conducted in large habitat cages (Norberg et al. 2002; Hanski et al. 2006) have been relatively restricted in spatial scale and have rarely involved more than two species.
Because of the great demand for comparable mobility estimates in community ecological studies, there is an obvious need for empirical studies producing comparable mobility estimates for a larger number of species simultaneously and in standardized conditions. We produced such estimates by experimentally releasing a large number of marked individuals of 60 butterfly and diurnal moth species in a large set-aside field and then collecting recaptures within the study landscape. Our aim was to collect a sufficient amount of comparable data in order to analyse interspecific differences in mobility and test our hypotheses on the effects of specific species traits on butterfly mobility based on earlier studies. More specifically, we aimed to answer the following study questions: (1) Do butterflies differ significantly from geometroid and noctuoid moths in mobility? (2) Does body size (wingspan) explain mobility differences between butterfly species? (3) Which other species traits affect mobility differences between butterfly species?
Based on previous studies on moths (Nieminen 1996;Nieminen et al. 1999), we hypothesized geometroids to be less mobile than noctuoids. Our expectation for the relationship between butterfly and moth mobility was less clear, because much variation has been reported in both species groups and direct multispecies comparisons between butterflies and moths have been lacking. However, our earlier results of a six-year set-aside experiment showed that butterflies colonized the set-aside faster than diurnal moths (Alanen et al. 2011), suggesting higher mobility in butterflies than moths.
Based on recent meta-analyses on butterfly mobility (Stevens et al. 2010, 2012; Sekar 2012) and our own results on colonization speed in butterflies (Alanen et al. 2011), we hypothesized mobility to increase with increasing body size (wingspan). The motivation to test the role of a set of other species traits stems from recent studies reporting significant effects of various traits on butterfly mobility (Stevens et al. 2010, 2012; Sekar 2012). Furthermore, we used the opportunity offered by our experimental set-up to test also the effect of release habitat suitability on mobility of species originating from different habitat types, hypothesizing that decreasing habitat suitability would increase emigration rate (Bowler and Benton 2005). Finally, we also considered the potential effects of phylogenetic relatedness on butterfly mobility. Characteristics of closely related species are often more similar compared with distantly related species, and thus the assumption of independent data points may be violated in comparative analyses including multiple species (Ives and Zhu 2006).
Experimental design and study area
The experiment had a simple design in which marked lepidopteran individuals were released daily in a 25 m × 25 m release area within an 11-ha set-aside field, which was established eight years earlier (Fig. 1; for the former six-year set-aside experiment, see Alanen et al. 2011). Movement distances of the marked individuals were then systematically recorded by recapturing them at different distances from the release area both within and outside the set-aside field (Fig. 1). This design enabled us to record distance moved and emigration rate in a comparable manner for a larger set of butterfly and moth species than to our knowledge in any previous study.
The release set-aside field was located in Ypäjä, southwestern Finland (ETRS-TM35FIN N 6745551 E 299807), in an agricultural landscape dominated by spring cereal production. The landscape surrounding the set-aside field was flat and open agricultural land in all directions except toward the northwest, where there was a mosaic area of forests, species-rich semi-natural grasslands, and built-up areas starting from c. 600 m from the set-aside (Fig. 1). The release set-aside was occupied by a relatively diverse community of grassland butterflies and diurnal moths, with many species even more abundant at the time of our release experiment than in year 2008, when the six-year set-aside experiment ended (see Table S1). For instance, Lycaena hippothoe had clearly established a local population on the set-aside after year 2008.
Butterfly and moth releases
A total of 2011 butterfly and 2367 moth individuals belonging to 32 butterfly and 28 moth species were marked and released in the 25 m × 25 m release area within the set-aside field (for a detailed list of released species, see Table S1; nomenclature according to Kullberg et al. 2002). Individuals for the releases were collected from the set-aside field (40% of released individuals) as well as from the surrounding landscape (nine sites, 60% of individuals). The nine sites were located 50-3800 m from the release set-aside field and were good butterfly habitats, mostly patches of semi-natural grasslands and sheltered, sunny forest edges with some semi-natural vegetation. These sites were selected in order to maximize both the number of individuals and species released in the experiment. Collecting (unmarked) individuals from these sites for the releases also effectively served in collecting recaptures of marked individuals that had already emigrated from the release set-aside field (see below).
Butterflies and diurnal moths were marked, released, and recaptured daily during two study periods: from 30 May to 11 June and from 28 June to 14 July 2011. The first period covered the flight season of early summer species in southwestern Finland, whereas the second period covered the flight season of mid-summer species. This procedure enabled us to cover a large proportion of butterfly and diurnal moth species' occurrence during the summer season. The weather was mostly warm and sunny (i.e., favorable for lepidopteran activity) during the two study periods.
Usually, individuals were collected for the releases from the release set-aside field during the morning and from the surrounding landscape during the afternoon.

Figure 1. Aerial photograph of the study area. Letter A indicates the release area within the focal set-aside field. The black line with arrows indicates the 2500-m-long transect in which marked individuals were systematically searched. Solid white lines show the searching routes outside the release set-aside, and dashed white lines show the routes which were walked less frequently. Numbers 1-4 indicate favorable butterfly and moth habitats, which were used both for collecting individuals for the releases and for searching recaptures of emigrated individuals; especially sites 1 (abandoned farmyard and a sheltered forest edge) and 2 (semi-natural grassland patch) attracted many emigrants.

Butterflies were always marked with an individual number on the wing using a fine-point pen, whereas other lepidopteran species were marked with a color spot made on the wing with a thicker marker pen. The latter was performed by gently pressing the pen through the butterfly net without taking the moth individual in hand, in order to avoid damaging its fragile wings. Immediately after marking, each individual was placed individually within a 120-ml plastic container which was then stored in a cool box in order to keep the marked individuals inactive before the release.
Individuals marked within the release set-aside during the morning session were released close to the center of the 25 m × 25 m release area daily approximately at 12 o'clock, whereas the marked individuals collected from the surrounding landscape were typically released between 16 and 18 o'clock. In the release area, the butterflies and moths were gently placed individually on plant leaves and flowers. Recaptures were never collected within the 25 m × 25 m release area.
Protocol for recaptures
In collecting data on movements of released butterflies and moths, the focus was on both within set-aside movements and movements to the surrounding landscape. Therefore, recaptures were searched daily in a systematic way at different distances from the release area, both in the release set-aside and in its surroundings.
Approximately one hour was spent on collecting recaptures within less than 100 meters from the release area every morning. Such a high effort was directed on the relatively close vicinity of the release area in order to ascertain at least some recaptures from as many released species as possible, including the least mobile species. In addition, the whole release set-aside field was systematically searched through by walking a 2500-m-long constant transect (Fig. 1) every day. Approximately similar effort was directed on gathering recaptures of emigrated individuals in the surrounding landscape. Fig. 1 shows the routes along field margins and road verges in the vicinity of the release set-aside field in which recaptures were searched for as often as time allowed (almost daily). In addition, four favorable butterfly and moth habitats (numbers 1-4 in Fig. 1) turned out to attract many emigrated individuals, and therefore, these areas were visited almost daily. In summary, an area of c. 1 km² was well-surveyed daily, whereas in total recaptures were collected from an area of c. 4 km² in size.
For each recaptured individual, the following information was recorded: date, time, species, sex, individual number (for butterflies), and the exact location of the recapture, marked on an aerial photograph of the area.
Measurement of movement parameters
Two main measures of mobility were recorded for each species with recaptures: average distance moved and emigration rate.
Distance moved was measured for each recaptured butterfly individual as the distance between the release point and the location of the last recapture, thereby each individual contributed to the results only once. For diurnal moths, which were not marked individually, the distance from the release point was recorded for every recapture point. Distances moved were measured from the aerial photographs in which the recapture points were marked in the field. For the statistical analyses, distances moved were ln-transformed after which they followed a normal distribution. An individual was considered as emigrated, if it was recaptured outside the release set-aside field. Based on the same logic as with distance moved, only the last recapture of a butterfly individual was used for indicating emigration, irrespective of its previous recapture records. In contrast, all recaptures of moths were considered as independent observations. As a third measure potentially related to mobility, the proportion of recaptured individuals was recorded for all studied species. In previous studies on lepidopteran mobility, increasing fraction of disappeared (i.e., not recaptured) individuals has sometimes been considered as an indication of increasing mobility or emigration (Kuussaari et al. 1996;Merckx et al. 2009). In contrast to the other two mobility measures which are solely based on recaptures, all released lepidopteran individuals contributed to this measure and thus the fraction of recaptured individuals could potentially give some additional information on mobility.
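As a purely schematic illustration (hypothetical column names, not code from the study), the two mobility measures described above could be derived from a recapture table along the following lines: for individually marked butterflies only the last recapture per individual is kept, distances are ln-transformed, and an individual counts as emigrated if its recapture lies outside the release set-aside.

import numpy as np
import pandas as pd

def mobility_by_species(rec, individually_marked):
    # rec: one row per recapture with columns species, individual_id, date, distance_m, outside_setaside.
    if individually_marked:
        # Butterflies: keep only the last recapture of each individual.
        rec = (rec.sort_values("date")
                  .groupby(["species", "individual_id"], as_index=False)
                  .last())
    rec = rec.assign(ln_dist=np.log(rec["distance_m"]))   # ln-transform used in the distance analyses
    return (rec.groupby("species")
               .agg(mean_distance_m=("distance_m", "mean"),
                    mean_ln_distance=("ln_dist", "mean"),
                    emigration_rate=("outside_setaside", "mean"),
                    n_recaptures=("distance_m", "size")))

# Usage (hypothetical): mobility_by_species(butterfly_recaptures, individually_marked=True)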
In order to facilitate an unbiased comparison of mobility between butterflies and moths in statistical analyses, we also calculated all three mobility measures for butterflies using the same logic as in moths, that is, treating each butterfly recapture as a separate data point in the data set.
Species traits
The analyses on the role of species traits focused only on butterflies as published species trait data are scanty for moths. The following six species traits were examined in order to explain observed mobility differences in butterflies: body size, adult habitat specificity and preference, larval host plant specificity and host plant type, and release habitat suitability. Body size was measured as a continuous variable, whereas all the other species traits were measured as categorical variables. The trait classifications for each studied species are shown in Table S2.
Body size of each species was measured as the average female wingspan (in mm), based on the Finnish butterfly handbook by Marttila et al. (1990). Adult habitat specificity was classified as a binary variable: habitat specialists occupying one or two and generalists occupying more than two habitat types following Ekroos et al. (2010) and originally based on Komonen et al. (2004). Habitat preference had three classes: forest edges and clearings, semi-natural grasslands, and field margins in open farmland, following Kuussaari et al. (2007). The specificity of larval host plant use was measured as a binary variable: mono- and oligophagous species feeding only on one host plant genus and polyphagous species feeding on more than one plant genus, based on Komonen et al. (2004). Larval host plant type was classified to the following four categories: woody plants (i.e., trees and shrubs as well as species in the family Ericaceae), grasses (Poaceae), leguminous plants (Fabaceae), and other herbs, based on Alanen et al. (2011).
Habitat suitability of the release set-aside field was a variable constructed specifically for our current analyses. It was based on extensive quantitative observations on the natural occurrence of the studied butterfly species in the release set-aside field, as explained in Table S1. All the species released in our mobility experiment were classified into three groups: 1 = species never recorded, 2 = species with 1-5 records, and 3 = species with >5 records during years 2003-2011. Class 3 represents species for which the set-aside field was most suitable as a breeding habitat. This measure of habitat suitability was considered as an empirically well-justified and for our purposes more accurate measure of species habitat preference than the previously published classification, presented above.
Statistical analyses
The first set of statistical analyses focused on mobility differences between two phylogenetically delineated species groups, butterflies (Papilionoidea) and geometroid (Geometroidea) moths (van Nieukerken et al. 2011), using comparably calculated mobility variables as explained above. Noctuoid (Noctuoidea) moths were excluded from these analyses, as there were only a few recaptures (more than one individual recaptured only in one species; Table 1).
Differences in mean distance moved between the species groups were tested using linear mixed models (LMM) with species group as a categorical fixed factor. Species was included in the model as a random factor in order to take into account the nonindependence of observations from different individuals of the same species. Model fitting was conducted using restricted maximum-likelihood (REML) estimation with the degrees of freedom calculated according to the Kenward-Roger method (Bolker et al. 2009). Differences in emigration rate and recapture probability between the species groups were tested using the same logic, but by fitting generalized linear mixed models (GLMM) with a logistic link function and binomial error distribution (due to the binary response variables). GLMM fitting was conducted using adaptive Gauss-Hermite quadrature estimation (Bolker et al. 2009) with the degrees of freedom calculated using the between-within degrees of freedom approximation. For all three response variables, the pairwise differences between the species groups were tested using Tukey's test.
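As a rough illustration only (a Python analogue with toy data and hypothetical column names, not the SAS code used in the study), the structure of the distance model, with species group as a fixed factor and species as a random intercept fitted by REML, could be sketched as follows; the binomial GLMMs for emigration and recapture would use the same grouping structure with a logistic link, and details such as the Kenward-Roger degrees of freedom are specific to the SAS implementation and not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Toy stand-in for the recapture data: one row per recapture, hypothetical column names.
toy = pd.DataFrame({
    "ln_dist": rng.normal(5.0, 1.0, 200),                       # ln-transformed distance moved (m)
    "group": rng.choice(["butterfly", "geometroid"], 200),      # species group (fixed factor)
    "species": rng.choice([f"sp{i}" for i in range(12)], 200),  # species (random intercept)
})
lmm = smf.mixedlm("ln_dist ~ group", data=toy, groups=toy["species"]).fit(reml=True)
print(lmm.summary())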
As the second step of analyses, multivariate models were built to examine which combinations of species traits best explained mobility differences between butterfly species. Here, only the last recapture of each butterfly individual was taken into account. Also, the sex of each individual was included in these models, because the motivation of the two sexes to move and emigrate may be quite different. However, before multivariate model building, the univariate relationships between each species trait, sex and the three mobility measures were examined by building a separate statistical model for each species trait and mobility measure (Appendix S1). Pairwise relationships between the explanatory species traits were examined before model building in order to avoid inclusion of collinear explanatory variables. Consequently, two species traits (larval host plant type and habitat preference) were omitted from multivariate model building, due to significant relationships with other traits (Appendix S1). Moreover, the potential effect of the original collection area (from the set-aside field or from surrounding landscape) of the released butterfly individuals on mobility was tested, and it did not affect emigration rate or distances moved (Appendix S1). Thus, the role of the source area could be ignored in the analyses.
Forward selection was used in building the LMM and GLMM with multiple variables, that is, the statistically significant variables (P < 0.05) were entered into the model in the order of their explanatory power. For the only continuous variable, body size, both linear and quadratic effects were tested. Statistical significances were calculated using an F-test. No overdispersion was observed in the analyses. Pairwise differences between the categories of the categorical species traits were tested using Tukey's test. All LMM and GLMM models described above were built using the statistical package SAS/STAT® 9.2 (SAS Institute Inc., Cary, NC).
In order to take into account the potential effects of phylogenetic relatedness on butterfly mobility in our study, the final multivariate models for distance moved and emigration rate were refitted using generalized estimation equations (GEE) as implemented in the ape library, version 3.0.11 (Paradis et al. 2004), of the R statistical environment (R Core Team 2013). GEE are extensions of generalized linear models (GLMs) to be applied when the statistical nonindependence of the data can be determined with a correlation matrix (Paradis and Claude 2002). Paradis and Claude (2002) have demonstrated the applicability of GEE in comparative studies using a between-species correlation matrix derived from a phylogenetic tree, and Pöyry et al. (2009) provide a previous example on butterflies. GEE are especially suitable for data that include categorical variables (Paradis and Claude 2002), as was the case in our study.
To calculate a correlation matrix for relatedness in GEE, a phylogenetic hypothesis was derived for the 32 butterfly species included in our study (Appendix S2).
The branching sequences of butterfly families were derived from recent family-level phylogenetic studies covering all higher taxa of butterflies (e.g., Heikkilä et al. 2011). Placement of lower taxa down to individual species was deduced from the phylogenetic studies focusing specifically on each group (Appendix S2). Branches with weak support or unresolved branches in the original studies were treated as polytomies. For simplicity, all tree branches were assumed to be of equal length. In order to include individuals in the analysis, we placed them on species branches so that between-individual distances were assumed to be 0.01 × species branch length. Statistical significances were calculated using an F-test, and the phylogenetic hypothesis was used to calculate the corrected degrees of freedom for the data. For recapture probability, the models did not converge using the GEE approach.

Table 1. Mobility results for all recaptured species: number of released individuals (n), number of recaptured individuals (RCind), recapture probability (%; RC%), emigration probability (%; Emig%), mean distance moved ± standard error (m; Dmean ± SE), and maximum distance moved (m; Dmax). For each butterfly species, the values in parentheses indicate the total number of recaptures and estimates of emigration rate and mean distance moved, based on all recaptures and calculated similarly as in diurnal moths.
Results
A total of 385 individuals of 18 species of butterflies and 107 individuals of 9 species of moths (6 geometroids and 3 noctuoids) were recaptured within the release set-aside field (328 individuals) and in its surroundings (164 individuals). Table 1 summarizes information on the released and recaptured individuals and their mobility for all species with at least one recapture (for information on all released species, see Table S1).
Differences between butterflies and moths
The two compared species groups, butterflies and geometroid moths, differed significantly in all three examined measures of mobility (Table 2, Fig. 2). Butterflies were more mobile than geometroids as indicated by their longer mean distances moved (315 m vs. 63 m) and higher emigration rate (54% vs. 17%). The higher recapture rate of butterflies than geometroids (22% vs. 6%) most probably reflected the better detectability in butterflies than geometroids. The mobility of noctuoid moths seemed to be somewhere between butterflies and geometroids (Table 1), but the noctuoid data were too limited to allow meaningful statistical analyses.
Butterfly movements in relation to species traits
The studied butterfly species showed considerable interspecific variation in mobility. Average distance moved varied from 84 m and 106 m in the two most sedentary species (Lycaena hippothoe and Thymelicus lineola, respectively) to 619 m and 985 m in the two most mobile species (Boloria euphrosyne and Anthocharis cardamines, respectively). Emigration from the release set-aside field varied from 8% (L. hippothoe) and 14% (Aphantopus hyperantus) to 100% in five of the studied species (Table 1). Two species traits, release habitat suitability and body size, were included together in the multivariate models best explaining the two main mobility variables, distance moved (LMM) and emigration rate (GLMM) (Table 3A). The effects of the two traits were very similar in both models. Distance moved and emigration rate were lower in butterfly species for which habitat suitability was the highest. Furthermore, both distance moved and emigration rate tended to increase with increasing body size, when the effect of habitat suitability was taken into account (see also Fig. 3A and C).
For the third mobility variable, recapture rate, the two variables included together in the multivariate GLMM were body size and sex (Table 3A). The effect of body size became significant only when its nonlinear component was included in the model. Recapture rate was highest in butterfly species of intermediate size and particularly low in the largest species released in the experiment. The significant effect of sex was due to the higher recapture rate of males than females. When the final LMM and GLMM models were refitted using generalized estimation equations (GEE) in order to take into account the potential effects of phylogenetic relatedness, the results changed slightly (Table 3B). In the GEE model for distance moved, the effect of body size did not remain significant (P = 0.14), but habitat suitability still had a significant effect. In the GEE model for emigration rate, both habitat suitability and body size had a significant effect.

Table 2. LMM and GLMM results on the differences in the three mobility variables between butterflies and geometroid moths. The differences between the species groups remained significant in all three variables when the models were refitted for the subset of species for which the release set-aside provided suitable habitat (release habitat suitability class = 3).

Figure 2. Differences in (A) mean distance moved, (B) emigration rate, and (C) recapture rate between butterflies and geometroid moths. Means are least squares means (LSM) with 95% confidence intervals based on the statistical models fitted to collected data (Table 2). The asterisks indicate the statistical difference between the species groups (**P < 0.01, ***P < 0.001).
Discussion
The release experiment successfully produced directly comparable mobility data for butterflies and moths. Recaptures were collected from almost 500 individuals belonging to 27 species. The data set enabled us both to detect differences in mobility between two lepidopteran superfamilies and to identify significant effects of species traits on distance moved and emigration rate in 18 species of butterflies.
Differences between butterflies and moths
As expected, experimentally released butterflies were more mobile than thin-bodied, weakly flying geometroid moths in terms of both distance moved and emigration rate. Butterfly movement distances were on the average five times longer and emigration rate three times higher than in geometroid moths. Data for noctuoid moths remained too sparse to infer any general results. Our findings are in agreement with our previous results on colonization of set-asides by butterflies and diurnal moths (Alanen et al. 2011) and an experiment comparing mobility of lepidopteran species groups (Nieminen 1996) in a network of small islands. Like our results, the results of Nieminen also suggested that butterflies are most and thin-bodied geometroids least mobile, whereas noctuoids show intermediate mobility. It should be noted, however, that Nieminen studied only two butterfly species, Vanessa atalanta and Hipparchia semele, of which V. atalanta is known as a regular long-distance migrant, representing one of the most mobile butterfly species occurring in Europe (Stefanescu 2001).
Recapture rate was generally much lower in diurnal moths than in butterflies. We argue that there are two likely reasons for this: the lower flight activity and thus the lower detectability of moths and the higher population densities of the most abundant moths compared to the most abundant butterflies. Based on the observed relative abundances of marked vs. unmarked individuals in the release set-aside field, we estimated that the geometroid Semiothisa clathrata and the noctuoid Euclidia glyphica, for instance, were an order of magnitude more abundant than the most abundant butterflies, such as Aphantopus hyperantus and Polyommatus amandus. Nevertheless, due to our systematic sampling protocol, the relative recapture probabilities of the studied taxonomic groups did not differ at different distances from the release point, and thus the mobility results can be reliably compared between different species and species groups.
Our results indicate that it is more difficult to obtain reliable mobility data from diurnal moths than butterflies by mark-release-recapture method.
In light of the theoretical model by Travis and Dytham (1999), the observed pattern of mobility variation across moth and butterfly species in our experiment has potential consequences for species persistence. According to their predictions, species with either low or high dispersal rate should perform best in highly fragmented landscapes, whereas species with intermediate mobility are predicted to perform worst. Our findings seem to fit these predictions because the geometroid moths, which were found to be the least mobile lepidopterans, have not declined in Finland (Huldén et al. 2000) and are typically common and abundant in many kinds of uncultivated grassland. Similarly, large butterfly species with high mobility have not suffered from habitat fragmentation, whereas some grassland specialist butterflies with intermediate mobility, such as L. hippothoe, have disappeared from many intensively cultivated landscapes (Ekroos and Kuussaari 2012). This model prediction has previously received empirical support from British butterflies (Thomas 2000).
Butterfly movements in relation to species traits
Butterfly mobility was strongly affected by habitat suitability. Butterflies tended to quickly emigrate from the release set-aside field, if it did not offer suitable breeding habitat for the species in question. Body size explained additional variation in mobility after the effect of habitat suitability had been taken into account in the statistical models. Both distance moved and emigration rate increased with body size, as expected based on our earlier results on butterfly colonization speed (Alanen et al. 2011) and meta-analyses on butterfly mobility (Stevens et al. 2010, 2012; Sekar 2012). When phylogenetic relatedness among species was included in the analyses, the significant effect of body size disappeared for distance moved, but habitat suitability remained significant.

Figure 3. Statistically significant relationships between species traits and the three mobility variables: (A-B) distance moved, (C-D) emigration rate, and (E-F) recapture rate in butterflies. Means are least squares means (LSM) with 95% confidence intervals based on the multivariate models fitted to collected data (Table 3). In panels A, C, and E, the dots represent means for individual species. The letters a and b within panels B, D, and F indicate homogeneous groups and thus the treatments which differed significantly in pairwise comparisons.
Habitat suitability
Comparison of average emigration rates in terms of habitat suitability highlights its importance in butterfly mobility: On the average, 33% of recaptured individuals had emigrated in those species that naturally occurred in the release set-aside, whereas 94% of individuals had emigrated in species for which the set-aside was considered unsuitable for breeding. The observed emigration rates in grassland species, for which the release set-aside field provided suitable breeding habitat, are roughly similar to previous observations on grassland specialist butterfly metapopulations (Hovestadt and Nieminen 2009; Stevens et al. 2010). The systematically high emigration rate in species, for which the set-aside was unsuitable for breeding, can be understood as a natural dispersal response owing to their unfitting habitat preference (mostly for forest edges and clearings, Table S2), lack of required larval host plants, and consequently, lack of conspecific individuals within the release set-aside field. Previously Conradt et al. (2001) have shown that individuals of Pyronia tithonus exhibited distinctly different flight behavior when released in an unsuitable compared with a suitable breeding habitat. Even though the important role of release habitat suitability was not surprising, we could not find any previous studies which would have empirically documented it across multiple species. This is probably due to the difficulty of directly detecting habitat suitability effects on mobility without experimentally manipulating butterfly occurrence. Previous experimental studies examining butterfly flight behavior by releasing individuals in field conditions have typically focused only on some components of flight or dispersal behavior (Conradt et al. 2001; Ries and Debinski 2001; Söderström and Hedblom 2007; Schultz et al. 2012) and have not specifically studied mobility differences across several species at a large spatial scale. In this regard, the recent study by Kallioniemi et al. (2014) is exceptional, because they examined butterfly behavior at habitat boundaries in a release experiment and reported differences in the likelihood of crossing habitat boundaries in seven butterfly species.
Our results indicate that butterflies recognize suitable habitats during dispersal and may switch to more sedentary behavior when encountering them. Species preferring forest edges showed a high emigration rate, and several individuals were recaptured in the only relatively nearby forest edge habitat, at c. 600 m distance from the release set-aside field (Fig. 1). However, it is unlikely that butterflies could have visually recognized the forest edge already from the release set-aside, as previous studies suggest that distances from which butterflies are capable of recognizing suitable habitat are much shorter. For example, Conradt et al. (2001) released individuals of two butterfly species within unsuitable habitat at different distances from a suitable habitat patch and found that Maniola jurtina and Pyronia tithonus were usually capable of locating the suitable habitat at 65-85 m distance but not further away from the release point.
Body size
The finding of a positive relationship between butterfly body size and mobility was expected and in agreement with the meta-analyses by Stevens et al. (2010) and Sekar (2012), even though our results probably underestimated the significance of body size owing to the very low number of recaptures in the largest species. These species, such as Nymphalis urticae (no recaptures), N. io (2 recaptures), and large fritillaries in the genus Argynnis (no recaptures), are strong and fast fliers and thus difficult to catch in the field (see Fig. 3E and Table S1). More recaptures from these species would probably have strengthened the correlation between mobility and body size. Residual variance of the body size-mobility relationship was largely explained by release habitat suitability. This finding is in agreement with the results of Stevens et al. (2012) who concluded that even though butterfly body size seems to always be positively correlated with measures of mobility, its predictive power is limited without taking other key species traits into account.
In addition, we found phylogeny to play an important role in butterfly mobility, which is in contrast with Stevens et al. (2012). The effect of body size on distance moved did not remain significant after the phylogenetic relatedness of butterfly species had been taken into account. This is not surprising, as a substantial proportion of variation in body size between butterfly species stems directly from size differences between butterfly families (e.g., Nymphalidae vs. Lycaenidae), whereas size differences are often small between closely related species within a family (e.g., within Lycaenidae and Hesperiidae).
Conclusions
Our release experiment showed that comparable multispecies data on important components of insect mobility can be gathered simultaneously at a relatively large spatial scale. Three conclusions can be drawn based on the results. First, butterflies moved longer distances and had a higher emigration rate than geometroid moths. Second, release habitat suitability had a strong effect on butterfly mobility, so that species naturally occurring in the release set-aside were much less mobile than species for which the set-aside was not a suitable breeding habitat. Third, mobility of butterflies increased significantly with body size after the effect of habitat suitability had been taken into account, but the effect of body size was partly confounded by phylogenetic relatedness. The experimental multispecies approach used here offers interesting opportunities for future studies of insect mobility. It builds on the tradition of studying mobility and dispersal behavior through experimental releases of individuals, but such studies have previously focused on only one or a few species and been conducted at smaller spatial scales (see Kallioniemi et al. 2014 and references therein for recent examples).
The Double Jeopardy of Feeling Lonely and Unimportant: State and Trait Loneliness and Feelings and Fears of Not Mattering
There have been recent concerns about an “epidemic of loneliness” during the pandemic, given the pervasiveness of loneliness in the population and its harmful effects on health and well-being. Therefore, it is important to establish the correlates of loneliness. The purpose of the current study was to explore how loneliness relates to a construct termed mattering, which is the feeling of being important to other people. Mattering was assessed with multiple measures in the current study (e.g., mattering in general, fears of not mattering, and mattering to peers). A sample of 172 female psychology undergraduate students aged 18–25 years completed self-report measures of general mattering, mattering to peers, anti-mattering, fear of not mattering, and state and trait loneliness. As predicted, lower levels of both general mattering and mattering to peers were associated with higher state loneliness. Higher feelings of anti-mattering (feelings of being invisible and insignificant to others) and fears of not mattering were associated with greater trait loneliness, as well as a reduced sense of mattering to friends. The findings illustrate that feeling as though one does not matter to others (i.e., feeling insignificant and unimportant) is associated with increased state and trait loneliness among young women. Implications are discussed for loneliness theory and how these results can enhance both clinical understanding and practice.
INTRODUCTION
Loneliness is the distressing feeling that arises when one's desired quantity or quality of social connection has failed to be met (Peplau and Perlman, 1982). In recent years, concerns have been expressed by public health officials and researchers about an "epidemic of loneliness" given the pervasiveness of loneliness across the population and its debilitating effects (Holt-Lunstad, 2017; King, 2018). A recent study conducted in Canada found that 48% of Canadian adults reported feeling lonely, and 62% said they wished their friends and family would spend more time with them (Angus Reid Institute, 2019). Feelings of loneliness are present across the lifespan from adolescence through to old age, and while loneliness is often thought to be a condition that primarily afflicts the elderly, research polling has actually found that young women under the age of 25 report being the loneliest demographic group (Angus Reid Institute, 2019) and are thought to show particular susceptibility (Rokach, 2000). Despite young women being at high risk for experiencing loneliness, very little research has been specifically conducted on experiences of loneliness and correlates of loneliness among young women, which is a gap the current study intends to address. Research on correlates of loneliness represents an important area of research given the negative consequences of loneliness, such as increased risk of depression, anxiety, poor immune functioning, and poor physical health, as well as earlier mortality (Uchino, 2006; Lasgaard et al., 2011; Cacioppo and Cacioppo, 2014; Hostinar et al., 2014; Rokach, 2019). Indeed, a recent study of patients receiving community mental health services found that loneliness predicted subsequent levels of four mental health indicators and was a better predictor than objective social isolation and social capital (see Wang et al., 2020). Previous research has established that several interpersonal variables are risk factors associated with trait loneliness. Lack of social connection, such as living alone, being unmarried, not participating in social groups, and having fewer friends to turn to in times of need have all been associated with greater feelings of loneliness (Holt-Lunstad et al., 2010). Additionally, lack of social support from family, and especially from friends, was correlated with greater feelings of loneliness among young adults (Lee and Goldstein, 2016; Chang et al., 2017). Further, poor attachments to mothers and fathers, and especially peers, were all related to greater loneliness, as were low feelings of general belongingness (Yildiz, 2016).
Taken together, past research seems to reflect that feelings of belonging and social information from others regarding levels of interpersonal support, connection, and one's relative worth have an influence on trait loneliness. It stands to reason that feelings of mattering to others then may also be a relevant interpersonal predictor of loneliness. Mattering can be defined as the sense that other people find us important, depend on us, are interested in us, and care about what happens to us (Rosenberg and McCullough, 1981). If a person feels unimportant to others and as though others are not interested in them or care about what happens to them (i.e., he/she feels as though he/she does not matter to others), it follows that they may then feel a sense of loneliness.
Below we discuss mattering and loneliness from a historical perspective and a contemporary perspective. First, however, we describe the mattering construct in more detail. Rosenberg and McCullough (1981) introduced a form of relational mattering through their focus on the feeling and the need to be important to other people. The person who feels like she or he matters is someone who feels seen and heard by others who value them. It is related to but more specific than Leary's notion of the sociometer, which blends being valued by others with being openly rejected or accepted by others and how this can impact self-esteem (see Leary, 2012; Leary and Acosta, 2018).
Key nuances help distinguish mattering from other constructs. For instance, Prilleltensky (2020) introduced an expanded view that sees mattering as being someone who is both valued by other people and who gives value to other people. This last component is important because it makes mattering less reactive and helps distinguish it from other constructs, as people can become "mattering agents" who promote feelings of mattering among people in their social circle (for a related discussion, see Flett, 2018). Most notably, mattering is distinguishable from related constructs. It has been shown empirically in several studies that mattering is related to but distinct from self-esteem (e.g., Rosenberg and McCullough, 1981; Elliott et al., 2004; Dixon and Robinson Karpius, 2008; Flett et al., 2016b). A recent investigation found that mattering and self-esteem were associated positively but mattering predicted significant variance in depression beyond self-esteem. Indeed, in their original paper, Rosenberg and McCullough (1981) showed in four samples of adolescents that after controlling for self-esteem, mattering was still a predictor of reduced levels of depression. There is also extensive evidence to support the conceptual and empirical distinction between mattering and related concepts such as belongingness and social support (see Elliott et al., 2004; Elliott, 2009; Flett, 2018). The distinction between the need to matter vs. the need to belong is perhaps best reflected by the person who feels part of a broader group and thus belongs but is not valued or extensively noticed within the group and thus feels a sense of not mattering to others.
Mattering and Loneliness
At present, to our knowledge, only one study thus far has examined the association between mattering and loneliness. Flett et al. (2016a) examined loneliness and mattering as part of a broader study. A sample of 232 undergraduate students completed various measures that included a measure of trait loneliness and the five-item General Mattering Scale (Marcus and Rosenberg, 1987). A robust negative association was found between mattering and loneliness (r = −0.65), thus confirming that low feelings of general mattering to others was associated with greater trait loneliness among both men and women. Additional results indicated that feelings of not mattering partially mediated the link between a reported history of maltreatment and loneliness.
The current article re-examines and extends this association between feelings of not mattering and loneliness with multiple measures of each construct. While there has not been an extensive theoretical emphasis thus far on loneliness and mattering seen through a conceptual lens, some useful insights have been provided by Erich Fromm (1941). Flett (2018) describes in his broad review on the psychology of mattering the influence that Fromm had on the mattering field and how his insights deserve more emphasis beyond their influence on Rosenberg and McCullough (1981). Most notably, for our purposes, Fromm (1941) proposed that feelings of powerlessness (written presciently in the context of world events at the time) and personal insignificance (i.e., feelings of not mattering) are closely intertwined with feelings of loneliness. Indeed, the mandated exposure of individuals to social isolation around the globe due to the coronavirus-related public health crisis has reignited the need to explore correlates of loneliness and populations at specific risk (Flett and Zangeneh, 2020). Specifically, Flett and Zangeneh (2020) describe the vulnerability to psychological pain of those people who are alone, who feel unimportant, and who feel that their lives lack significance. Given these historical and contemporary observations, there is a clear need for programmatic research on loneliness and feelings of not mattering.
The current research was based on an expanded view of loneliness (i.e., state and trait loneliness) and mattering. We went beyond mattering in general to also assess mattering with respect to a specific relationship. To our knowledge, this topic has not been examined in terms of how feelings of mattering to specific others (e.g., mother, father, and peers) is related to loneliness. Given past research that has documented stronger associations between loneliness and lack of peer support or attachment, it is anticipated that low feelings of mattering to friends may be an especially important predictor of loneliness among young women and go beyond studies of basic belongingness in this population (Asher and Weeks, 2014).
We also examined mattering with additional focus on two recent extensions of the construct. The concept of anti-mattering is described in Flett (2018) and is evaluated using items that emphasize not mattering and feeling marginalized. According to Flett (2018), feelings of anti-mattering are distinct from, and not merely the opposite of, mattering, in keeping with the notion that constructs such as optimism and pessimism and hope vs. hopelessness are not mere endpoints of the same continuum. A recent review has summarized extensive evidence attesting to the incremental validity of this new measure of anti-mattering when considered along with the General Mattering Scale. Accordingly, we include a new measure of anti-mattering to evaluate whether it is also a relevant interpersonal correlate of loneliness. Anti-mattering can be defined as feeling insignificant and invisible to others and feeling as though no one cares about what you have to say or think. Those who strongly endorse feelings of anti-mattering may feel as though they do not matter at all to anyone. It stands to reason that those who feel as though they do not matter to anyone would be more likely to be dissatisfied with their social relationships and to feel lonely. Further, we included a new measure of fear of not mattering to others that reflects the anxiety that people have about the possibility that they will not matter to others. This concept has been described by Casale and Flett (2020) as distinguishable from other interpersonally-based fears such as a fear of missing out or separation fears. The fear of not mattering is similar to other interpersonally-based constructs (e.g., rejection sensitivity, reassurance seeking, and fear of negative evaluation) in that it reflects unmet interpersonal needs and a need for validation through connection from others, but it is a specific fear reflecting a concern about not being valued and not being seen or heard by other people who show little interest. This emphasis on a fear of being or becoming insignificant to others is in keeping with research that ties individual differences in feelings of not mattering to others with anxious forms of insecure attachment. More generally, the fear of not mattering reflects the overlap that exists between the mattering and anxiety constructs and the anxious arousal and evaluation apprehension that accompanies a sense of not mattering (see Flett, 2019). Fear of not mattering to others may reflect a ruminative preoccupation about the threat of the depreciation of one's worth or value to others and the loss of important social relationships or resources. This fear of not mattering might be experienced by people who have lost people in their lives who were key sources of feelings of mattering. This type of fear could be relevant for older adults given the suggestion from Rosenberg and McCullough (1981) that feelings of mattering may be particularly relevant in understanding elderly people. Given these observations, fear of not mattering to others may be another correlate of loneliness. Certainly, past research has found that rumination is associated with greater loneliness among young adults (Gan et al., 2015; Borawski, 2019).
The Current Study
In summary, the current study aimed to expand the sparse literature on how mattering is related to feelings of state and trait loneliness using a sample of young women aged 18-25 years old since they are considered to be the loneliest demographic group. The purpose of the current study was to examine how feelings of general mattering, mattering to peers, anti-mattering, and fear of not mattering related to reports of trait and state loneliness. Based on past research, the following hypotheses were tested: (1) Feelings of both general mattering and mattering to peers will be negatively associated with state and trait loneliness and (2) Feelings of anti-mattering and fear of not mattering will both be positively associated with state and trait loneliness. These hypotheses were based on past research. Our emphasis on going beyond trait loneliness to also consider state loneliness reflected our interest in assessing very current feelings of loneliness that could perhaps more closely reflect the current experience of young women as they adapt to the pandemic and its various challenges. It also reflects past research attesting to the usefulness of examining state loneliness as a form of reactivity in specific contexts (see van Roekel et al., 2018).
Participants
Participants were 172 female psychology undergraduate students recruited through an online research participant pool at York University in Toronto, Canada. Inclusion criteria were being a biological female and being between 18 and 25 years of age. As previously mentioned, this sample was chosen because women in this age bracket have reported being the loneliest demographic group (Rokach, 2000; Angus Reid Institute, 2019). Participant ages ranged from 18 to 25 years (M = 19.20, SD = 1.68). The self-reported ethnic distribution of the sample was 34.7% Caucasian, 28.9% South-Asian, 10.4% East Asian, 9.8% Middle Eastern, 5.8% Black/African-Canadian, 1.2% Pacific Islander, 0.6% Hispanic/Latino, and 8.1% identified as "Other." Most participants (71.1%) reported completing a high school diploma as their highest level of completed education, followed by 19.7% who completed some college, 4.0% who completed a 2-year degree, 4.0% who completed a 4-year degree, and 0.6% who had completed a doctorate.
Measures
The General Mattering Scale
The General Mattering Scale (GMS; Marcus and Rosenberg, 1987) is a brief five-item self-report scale that is used to measure how much one perceives that they matter to others at an overall level. Participants are asked to indicate how much they agree with each statement, such as "How important do you feel you are to other people?", by responding on a scale from 1 = not at all to 4 = a lot. Higher scores indicate greater perceived mattering. Internal consistency in the current study was good (α = 0.78). Extensive evidence attests to the reliability and validity of this measure (see Flett, 2018).
The Mattering to Others Questionnaire
The Mattering to Others Questionnaire (Marshall, 2001) is an 11-item self-report scale that can be used to assess perceived mattering to one's mother, father, or friends. We focused solely on assessing levels of mattering to friends/peers in the current study. Participants are asked to indicate how much they agree with each statement, such as "I matter to my friends," by responding on a scale from 1 = not much to 5 = a lot. A mean score for all items is calculated, and higher scores indicate greater perceived mattering. The level of internal consistency in the current study was somewhat low (α = 0.59), which is uncharacteristic of the measure given that there is extensive psychometric support for this scale (see Flett, 2018).
The Anti-Mattering Scale
The Anti-Mattering Scale (AMS; Flett, 2018) is another brief five-item self-report scale that is used to measure anti-mattering at an overall level. The measure is designed to parallel the format of the GMS. Anti-mattering can be defined as a feeling of being insignificant and invisible to others. Participants are asked to "please choose the rating that you feel is the best for you" in regard to each item (e.g., "How much do you feel like you do not matter?"), by responding on a scale from 1 = not at all to 4 = a lot. Higher scores indicate greater perceived anti-mattering. Internal consistency in the current study was surprisingly low (α = 0.58).
The Fear of Not Mattering Inventory
The Fear of Not Mattering Inventory (Besser et al., 2020) is a brief five-item self-report scale that is used to measure fear about not mattering to others. Participants are asked to indicate how much they agree with each statement, such as "To what extent are you afraid that you will not matter to other people?", by responding on a scale from 0 = not at all to 3 = almost all of the time. Higher scores indicate greater fear of not mattering.
Internal consistency in the current study was excellent (α = 0.91).
UCLA Loneliness Scale
The UCLA Loneliness Scale (Russell et al., 1980) is a 20-item self-report measure of trait loneliness. Participants are asked to indicate how often each of the statements listed is descriptive of themselves, ranging from 1 = never to 4 = often.
Higher scores are indicative of greater trait loneliness. Internal consistency in the current study was excellent (α = 0.95).
State Loneliness
To measure state loneliness, a single-item visual analog scale was used. The scale consisted of a horizontal line with a slider bar and endpoints labeled "not at all" and "very much." Participants were asked to slide the bar to the point on the line that best represented how they felt in that moment in regard to the following statement: "I feel lonely." Responses to the scale could range from 0 to 100, with higher scores indicating greater loneliness.
Procedure
Eligible participants could view and sign up for the study online through an online experiment management system. Upon sign up, participants gave their informed consent online. Participants then completed demographic questions and self-report measures of state and trait feelings of loneliness, general mattering, mattering to others, anti-mattering, and fear of not mattering, as well as several other questionnaires not pertinent to the current study. Participants completed the measure of state loneliness prior to all other questionnaires, so that state loneliness scores were not influenced or primed by questions on the trait loneliness or mattering measures. Participants then received online debriefing and partial course credit for their participation. This study's research protocol was approved by a university's ethics review board and conforms to the standards of the Canadian Tri-Council research ethics guidelines.
Data Analysis
All statistical analyses were conducted using SPSS version 25. Two separate multiple regression analyses were conducted to determine how general mattering, mattering to friends, anti-mattering, and fear of not mattering were related to state and trait loneliness, respectively. An a priori power analysis was conducted to determine the sample size needed to provide sufficient power (0.80) to detect a moderate effect size (0.25) at a significance level of alpha 0.05 for a multiple regression F-test, using G*Power (Faul et al., 2007). The power analysis indicated that the minimum sample size needed would be 53 participants, which we exceeded. Table 1 presents the means and standard deviations, as well as the correlations among all study variables. The means obtained for levels of general mattering and anti-mattering are comparable to those summarized elsewhere (see Flett, 2018). It can be seen that state and trait loneliness are correlated substantially but not to the extent that they are redundant. As for the measures assessing the mattering construct, the correlations in Table 1 indicate that fear of not mattering is substantially related to the other mattering indices.
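For readers who want to reproduce this type of analysis outside SPSS, the sketch below shows how the two regression models could be fitted with pandas and statsmodels. It is a minimal illustration only: the data file and column names (e.g., general_mattering, trait_loneliness) are hypothetical placeholders for the study variables, and the analyses reported here were run in SPSS as described above.

```python
# Minimal sketch (not the authors' SPSS syntax): regressing state and trait
# loneliness on the four mattering indices with statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file and column names; one row per participant.
df = pd.read_csv("mattering_loneliness.csv")

predictors = ("general_mattering + mattering_to_friends + "
              "anti_mattering + fear_of_not_mattering")

for outcome in ["state_loneliness", "trait_loneliness"]:
    model = smf.ols(f"{outcome} ~ {predictors}", data=df).fit()
    # Overall model F-test and R^2, analogous to the values reported in the Results.
    print(outcome,
          "F =", round(model.fvalue, 2),
          "p =", round(model.f_pvalue, 4),
          "R^2 =", round(model.rsquared, 2))
    print(model.params)  # unstandardized regression coefficients
```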
Bivariate Correlations
Specifically, fear of not mattering to others showed significant negative associations with both general mattering (r = −0.44) and mattering to friends (r = −0.46), and it was associated positively to a comparable degree with anti-mattering (r = 0.44).
Regarding the associations between the indices of mattering and loneliness, it can be seen in Table 1 that state loneliness was negatively related to both general mattering and mattering to friends, with correlations of moderate effect sizes. Trait loneliness was also negatively related to both general mattering and mattering to friends, with correlations of large effect sizes, especially for mattering to friends. Both state and trait loneliness were positively related to anti-mattering and fear of not mattering, with stronger correlations for trait than state loneliness.
Regression Analyses
Table 2 presents the results of multiple regressions in which general mattering, mattering to friends, anti-mattering, and fear of not mattering were regressed onto state loneliness and trait loneliness in separate analyses. The analysis predicting state loneliness was significant, F(4, 167) = 11.66, p < 0.001, and accounted for 22% of the variance. As seen in Table 2, and as expected, both general mattering and mattering to friends were significant negative predictors of state loneliness. However, anti-mattering and fear of not mattering were not significant predictors of state loneliness.
The same analysis was conducted to predict levels of trait loneliness. The analysis predicting trait loneliness was significant, F(4, 167) = 67.44, p < 0.001, and accounted for 62% of the variance. It is important to underscore here that, in this sample, the mattering variables were substantially more relevant to trait loneliness than to state loneliness. As seen in Table 2, mattering to friends was a significant and negative predictor of trait loneliness, but contrary to predictions, feelings of general mattering were not a significant predictor. Further, as predicted, both anti-mattering and fear of not mattering were significant and positive predictors of trait loneliness. Mattering to friends proved to be the most robust predictor of trait loneliness.
DISCUSSION
The purpose of the current study was to extend what is known thus far about the link between feelings of not mattering and loneliness. Our investigation focused on young women aged 18-25 years old, since recent research suggests they are the loneliest demographic group. Specifically, our aim was to examine how feelings of general mattering, mattering to peers, anti-mattering, and fear of not mattering related to young women's self-reported state and trait loneliness. This was the first study to examine how mattering, anti-mattering, and fear of not mattering were related to both state and trait loneliness. This focus on multiple elements of mattering and on both trait and state loneliness reflected our goal of conducting the most extensive study thus far on mattering and loneliness.
Overall, the pattern of results indicated that variables tapping mattering were related broadly to loneliness, as was expected, though they were much more relevant to understanding trait loneliness. To some extent, this could reflect in part the methodological constraints of very different self-report formats used to assess trait loneliness vs. state loneliness in the current study. Also, as we discuss in more detail below, there was also evidence attesting to the merits of examining various components of the mattering construct in terms of how they relate to loneliness.
Several key findings emerged in terms of how mattering is related to state and trait loneliness. First, as predicted, we found that low feelings of general mattering and mattering to peers were associated with greater feelings of state loneliness among young women. However, in regard to trait loneliness, only low mattering to peers, and not general mattering, was a significant predictor of greater trait loneliness. Our findings from the regression analysis qualified and extended the finding of Flett et al. (2016a), who found that low feelings of general mattering were associated with greater trait loneliness among young women. While both general mattering and mattering to peers were significant predictors of state loneliness, we found that only mattering to peers was a significant predictor of trait loneliness among young women. These results highlight the importance of feeling valued and important to peers in protecting young women from feelings of loneliness. This finding is in line with past research, which has found that social support and positive relationships with peers are protective factors against feelings of loneliness among young women (Lee and Goldstein, 2016; Yildiz, 2016; Chang et al., 2017), and that greater perceived mattering to peers is associated with better well-being (Matera et al., 2020). Feelings of mattering to peers may be especially important to women who are emerging adults, as this is a time in their life when many young women are moving away from home to live independently to pursue work or start post-secondary education. Therefore, they may be losing immediate contact with family and relying on support from friends during this time (Lee and Goldstein, 2016). The second key finding was that greater levels of anti-mattering were associated with greater trait loneliness (but not state loneliness). Therefore, unsurprisingly, results indicated that feeling as if one does not matter at all to anyone, and feeling insignificant and invisible to others, is associated with greater trait feelings of loneliness. As such, these findings extend past complementary research, which has found that low social support and connection are associated with greater loneliness among young adults (Holt-Lunstad et al., 2010; Lee and Goldstein, 2016). The current findings add to this body of research by noting that it is not just low social connection that is linked with increased feelings of loneliness, but also perceived subjective feelings of being insignificant to those with whom one does have social contact.
Finally, greater fear of not mattering was associated with greater trait loneliness (but not state loneliness). Fears over not mattering may reflect a ruminative cognitive style about the negative consequences of not mattering to others, or coming to matter less than one currently does. Chronic engagement in this kind of rumination may result in increased feelings of trait loneliness over time, as one consistently reflects on how awful it would feel to not matter to others. In turn, it is possible that those who are already lonely are predisposed to ruminating about these kinds of concerns. This finding is consistent with past research that has found that increased rumination is associated with greater loneliness (Gan et al., 2015;Borawski, 2019).
Taken together, the findings provide evidence for theoretical formulations of loneliness. Loneliness is not merely about being physically isolated, but rather the condition of feeling alone - feeling disconnected from others, feeling that no one cares (Rokach, 2019), and feeling unimportant. In addition, the results further our clinical understanding of loneliness in the high-risk group of young women by pointing to a need to focus on their concerns about whether they matter to their peers as well as their fears of not mattering. The problem of loneliness has been previously identified as a public health concern (Leigh-Hunt et al., 2017), but one of the ironies of the current public health pandemic crisis is that it has heightened awareness of the need to mobilize community efforts to reach out in novel ways to our most isolated and loneliest individuals (Flett and Zangeneh, 2020). With respect to the heightened risk of young women, this may involve not only "connecting" using social media, but finding ways to deepen these connections, not just through the sheer number of "friends" accumulated on social media (Hood et al., 2018) but through meaningful listening and discovering ways of reaching out that show kindness, compassion and true caring for others.
There are several implications that follow from the current evidence of the extensive and robust links between loneliness and feelings and fears of not mattering. At a theoretical level, conceptual models of phenomena such as the link between loneliness and physical health problems should consider how loneliness and feelings of being unimportant may combine to produce outcomes from a mediator or moderator perspective. Similarly, at a practical level, interventions designed to enhance the mental health and degree of interpersonal relatedness of profoundly lonely people should perhaps consider whether loneliness is accompanied by a sense of having little perceived value to other people. Both themes need to be addressed in clinical and counseling interventions. Of course, longitudinal or experimental research on the relationship between mattering and loneliness would better help to inform these interventions, as the current data examined only cross-sectionally how mattering is related to loneliness and cannot establish whether this relationship is causal.
Like all research, the findings of the current study must be considered in light of some limitations. First, the data collected were based on self-report responses, which are always prone to subjective biases. Second, it was a limitation that the mattering to friends and anti-mattering measures had somewhat low reliabilities in the current study, which could have attenuated the correlations and the relation between anti-mattering and state loneliness. Additionally, the study design was cross-sectional and therefore the directionality of the relationships examined cannot be determined. While we proposed that low mattering, high anti-mattering, and fear of not mattering to others are correlates of loneliness that could increase risk for loneliness, it is also possible that loneliness could be a risk factor for decreased feelings of mattering, as well as increased anti-mattering and fear of not mattering. Longitudinal research is therefore needed to establish the directionality of the relationships examined, or experimental research to determine causality. While it was a strength of the study that we measured both state and trait loneliness and differentiated between two types of mattering, we only measured loneliness and mattering at one timepoint. Indeed, state loneliness was measured at a time prior to mandated physical isolation due to a public health crisis, and the ratings would most likely have been quite different during a period of physical quarantine. Longitudinal research should be conducted that examines whether mattering can predict loneliness over time, or vice versa. Also, it was a strength of the study that we investigated correlates of loneliness among young women, an under-researched demographic group in the loneliness literature. However, while our study only examined relations between loneliness and mattering in young women, we are confident that the findings of the current study would replicate among other demographic groups such as adolescents or the elderly, who also experience significant feelings of loneliness (Rokach, 2000; Fazio, 2009). In particular, feelings of mattering and fears of not mattering may be especially important constructs to study among the elderly, who often lose important social contacts as they age and experience greater loneliness. Past research has documented that loneliness increases from middle adulthood to old age, and that the loss of social contact through widowhood, having no spouse or cohabiting partner, and little contact with friends was associated with greater loneliness among the elderly (von Soest et al., 2020). Further, qualitative research among the elderly has documented that many elderly persons report gloomy feelings about not mattering and aching loneliness, which they attributed to no longer having children who were dependent upon them and to losing their occupations through retirement (van Wijngaarden et al., 2015). Therefore, fears of not mattering may also be an especially relevant construct to study among those who are approaching old age and facing retirement. Feelings of mattering and fears of not mattering would also be relevant constructs to examine among adolescents, who also report increased feelings of loneliness from childhood onward (Laursen and Hartl, 2013). Adolescence is a time when teens are trying to establish an identity separate from parents and when peer relations become increasingly important to self-development.
Therefore, mattering to peers may be especially important in protecting against feelings of loneliness in adolescence (Laursen and Hartl, 2013).
In summary, the results of the current study confirmed that a reduced sense of mattering, especially to peers, and an increased sense of anti-mattering or fear of not mattering to others is associated with greater trait loneliness. Additional results showed that a reduced sense of general mattering and mattering to peers was associated with greater state loneliness. Overall, our results suggest that having a reduced sense of being significant and cared for by others could lead to state and trait feelings of loneliness among young women. These findings highlight the importance of young women forming strong social relationships, particularly with peers who make them feel important and cared for, in order to protect themselves against feelings of loneliness. The current findings suggest that bolstering feelings of mattering through therapy and other treatment avenues may help young women who are seeking to overcome feelings of state and trait loneliness.
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available because the requesting source must be affiliated with an academic institution. Requests to access the datasets should be directed to the corresponding author jgoldber@yorku.ca.
ETHICS STATEMENT
The current study was reviewed and approved by the Human Participants Review Committee at York University. The patients/ participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
SM collected the data and wrote the first draft under the direct supervision of JG in fulfillment of her graduate student academic breadth requirements with further supervisory committee membership guidance by GF. JG and GF supervised the data analysis and edited drafts of the manuscript. AR provided further editorial and literature review support. All authors contributed to the article and approved the submitted version.
|
2020-12-17T14:17:27.045Z
|
2020-12-17T00:00:00.000
|
{
"year": 2020,
"sha1": "8f8376d3e21fdce6173d0b9598c57652564edffa",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2020.563420/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f8376d3e21fdce6173d0b9598c57652564edffa",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
236277888
|
pes2o/s2orc
|
v3-fos-license
|
DNA methylation and transcriptome comparative analysis for Lvliang Black goats in distinct feeding pattern reveals epigenetic basis for environment adaptation
Abstract Different feeding patterns produce markedly different and heritable growth properties in livestock, but the underlying epigenetic mechanisms remain unclear. Here, we investigated genome-wide DNA methylation and gene expression under grazing and confinement regimen feeding strategies in the Lvliang Black goat. We identified 102 differentially expressed genes and 7,833 differentially methylated regions (DMRs) between the two groups. Integrating DNA methylation and gene expression showed that genes in DMRs exhibit significantly different expression levels (FDR < 0.05). KEGG pathway analysis indicated that most of these genes are involved in environmental adaptation pathways such as lipid transport and metabolism, and immunity. In sum, our data provide insight into the epigenetic mechanism underlying the growth property differences resulting from distinct feeding patterns in goat, and also offer a theoretical basis for the rational utilization of germplasm resources of local breeds. Supplemental data for this article is available online at https://doi.org/10.1080/13102818.2021.1914164 .
Introduction
Variation in environmental factors, including light, humidity, temperature and salinity, can alter the gene expression patterns of individuals, which may lead to morphological changes [1]. The genetic mechanism underlying such synchronous environment-morphology changes may relate to epigenetic modification, which can regulate gene expression patterns. In animals, DNA methylation preferentially occurs at the 5′ position of cytosine in CpG dinucleotide agglomeration regions, known as CpG islands (CGIs). Cytosine methylation is a classical epigenetic marker and usually presents in three forms (CG, CHG and CHH, where H is any base but G). DNA methylation is catalyzed and maintained by specific DNA methyltransferases such as Dnmt3 and Dnmt1. The maintenance of DNA methylation can be heavily influenced by environment, diet, aging and behaviors [2][3][4][5][6]. It has been reported that DNA methylation can affect gene expression levels by reducing the rate of transcriptional elongation [7,8]. Although gene expression and methylation exhibit complex relationships, high levels of gene expression are usually associated with low levels of methylation in the promoter region of genes [9]. Recently, numerous studies have investigated genome-wide DNA methylation patterns in livestock [10][11][12][13][14][15][16], but most of these studies focus on the relationship between DNA methylation and economic traits [17,18]. The DNA methylation variation and associated gene expression changes underlying different feeding strategies in livestock remain unknown.
The Lvliang Black goat (LBG) is an economically valuable breed of Capra hircus that supplies fleece, meat, leather and mutton for local inhabitants. The LBG is mainly distributed in the Lvliang mountainous area of the Loess Plateau in Shanxi, where the landscape is barren with serious soil erosion, little rain and sparse vegetation. The breed is known for its tolerance of crude feed, its resistance to numerous diseases, and its strong adaptability to extreme environmental habitats. Nowadays, with the rapid development of commercial feeding models, LBGs, like other strains, are mainly reared in comfortable conditions with enough food, water and a favorable environment. Thus, the LBG strain provides a good model to investigate the genetic divergence resulting from different feeding strategies in livestock.
In the present study, we conducted a comparative analysis of the genome-wide DNA methylation pattern and transcriptome between the grazing and confinement regimen groups to explore the genetic divergence associated with different feeding strategies in LBG.
Ethics statement
Animal care and experiments were performed according to the guidelines established by the Regulation for the Administration of Affairs Concerning Experimental Animals (Ministry of Science and Technology, China, 2004), and approved by the Animal Welfare and Research Ethics Committee at Shanxi Agricultural University.
Samples collection
Six LBG individuals were randomly collected from two groups under different rearing systems. Three individuals were obtained from a grazing group on natural pasture. The other three individuals were obtained from the confined rearing group, which had enough hay and free access to feed and water. All six individuals were two-year-old females with no significant differences in body weight. Blood samples were collected from the jugular vein (precava) of these individuals and immediately frozen in liquid nitrogen.
RNA isolation and library construction
The RNA sequencing design of this study is described in Scott et al. [19]. Total RNA was extracted from the jugular vein blood of the six individuals. The concentration of RNA was measured using a NanoDrop 2000 (ThermoFisher, USA), and its integrity was assessed using the RNA Nano 6000 Assay Kit of the Agilent Bioanalyzer 2100 system (Agilent, USA). A strand-specific RNA library was prepared using the NEBNext Ultra Directional RNA Library Prep Kit for Illumina (NEB, USA) according to the manufacturer's instructions. Library validation and quantification were conducted using the Agilent Bioanalyzer High Sensitivity Kit (Agilent, USA, Cat#5067-4626) and the Qubit dsDNA HS Assay Kit (ThermoFisher, USA). The 100 bp paired-end sequencing was performed on the Illumina HiSeq 2500 platform (Illumina, USA).
DNA extraction and whole-genome bisulfite sequencing
The DNA extraction and whole-genome bisulfite sequencing in this study were performed as described [20]. The three replicate DNA samples from each group were pooled into one sample per group. Whole-genome bisulfite sequencing DNA libraries were prepared following the standard protocol. Briefly, a total of 5.2 μg DNA was spiked with 26 ng lambda DNA and fragmented to 200-300 bp with a Covaris S200 sonicator (Covaris, USA). After end repair and adenylation, the fragments were ligated with cytosine-methylated barcodes. Next, these DNA fragments were treated twice with bisulfite using an EZ DNA Methylation-Gold™ Kit (Zymo Research, USA). The generated single-strand DNA fragments were amplified by polymerase chain reaction (PCR) using the KAPA HiFi HotStart Uracil+ ReadyMix (2×) and quantified with a Qubit fluorometer. The bisulfite sequencing library was sequenced on the Illumina HiSeq 2500 platform.
Quality control and mapping
The base calling of RNA-seq and whole-genome bisulfite sequencing (WGBS) reads was performed according to the standard Illumina pipeline. Paired-end reads of 100 bp from the 6 RNA-seq and 2 WGBS libraries were generated. Raw reads of both DNA and RNA were first checked using the FastQC tool and subsequently trimmed at the 3′ end to remove adaptors and low-quality nucleotides with Trimmomatic. The resulting good-quality clean reads were retained for further analysis. The goat (Capra hircus) genome (GCA_001704415.1) was used as the reference genome for clean read alignment and assembly. The STAR and Bismark software packages were used with default parameters for RNA-seq and WGBS read mapping, respectively.
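To make the read-processing workflow concrete, the sketch below strings the quality-control, trimming and alignment tools named above together from Python. It is illustrative only: file names, index directories, thread counts and the trimming parameters are hypothetical placeholders, not the exact settings used in this study.

```python
# Illustrative wrapper around the QC/trimming/alignment tools named in the text.
# Paths, sample names and trimming parameters are hypothetical.
import os
import subprocess

def run(cmd):
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

r1, r2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"
os.makedirs("qc", exist_ok=True)

# 1. Raw read quality check with FastQC.
run(["fastqc", "-o", "qc", r1, r2])

# 2. Adapter and quality trimming with Trimmomatic (paired-end mode).
run(["java", "-jar", "trimmomatic.jar", "PE", "-threads", "8", r1, r2,
     "R1.clean.fq.gz", "R1.unpaired.fq.gz", "R2.clean.fq.gz", "R2.unpaired.fq.gz",
     "ILLUMINACLIP:adapters.fa:2:30:10", "SLIDINGWINDOW:4:20", "MINLEN:36"])

# 3a. RNA-seq reads: splice-aware alignment to the goat genome with STAR.
run(["STAR", "--runThreadN", "8", "--genomeDir", "star_index",
     "--readFilesIn", "R1.clean.fq.gz", "R2.clean.fq.gz",
     "--readFilesCommand", "zcat",
     "--outSAMtype", "BAM", "SortedByCoordinate"])

# 3b. WGBS reads: bisulfite alignment with Bismark (the genome index is assumed
#     to have been built beforehand with bismark_genome_preparation).
run(["bismark", "--genome", "goat_genome_dir",
     "-1", "R1.clean.fq.gz", "-2", "R2.clean.fq.gz", "-o", "bismark_out"])
```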
Expression analysis
The mapped reads from the RNA-seq data were extracted and processed to assemble transcripts. Transcript quantification was performed with Cufflinks. Fragments Per Kilobase of transcript per Million mapped fragments (FPKM) was used to measure the expression level. FPKM values were log-transformed for further analysis because of the right skew of the transcription-level distribution. Spearman's correlation coefficients of the expression patterns among sample pairs were calculated to evaluate the similarity of the samples' biological background. Significantly differentially expressed genes (DEGs) were identified with the cuffdiff tool in the Cufflinks software. The FDR method was applied for P-value adjustment in order to control the multiple-testing error.
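As a simple illustration of the expression-level processing described here, the following sketch log-transforms an FPKM matrix and computes pairwise Spearman correlations between samples. The input file name and layout (genes in rows, samples in columns) are assumed; DEG calling itself was performed with cuffdiff as stated above.

```python
# Sketch: log-transform FPKM values and check sample similarity by
# Spearman correlation (assumed matrix: rows = genes, columns = samples).
import numpy as np
import pandas as pd

fpkm = pd.read_csv("fpkm_matrix.csv", index_col=0)  # hypothetical input file

# log2(FPKM + 1) to reduce the right skew of the expression distribution
log_fpkm = np.log2(fpkm + 1)

# Pairwise Spearman correlation between the six libraries
corr = log_fpkm.corr(method="spearman")
print(corr.round(3))
```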
Identification of putative DMR
To evaluate the DNA methylation level (ML), a sliding-window approach, with a window size of 3,000 bp and a step size of 600 bp, was employed. The sum of methylated and unmethylated read counts was calculated in each window and the ML was calculated according to a previous study [21]. The corrected ML was estimated as ML(corrected) = (ML − r)/(1 − r), where r represents the bisulfite non-conversion rate. The percentage methylation level was calculated as the proportion of mC sites (mCs) among the total C sites (Supplemental Table S2). The relative proportions of mCs in the three contexts were calculated as the proportions of mCG, mCHG and mCHH among the total mC sites, respectively. Differentially methylated regions (DMRs) were identified with the swDMR software (http://122.228.158.106/swDMR/) using a sliding-window approach (window = 1,000 bp, step length = 100 bp). Fisher's test was applied to detect significant DMRs. After DMRs were identified, genes located in DMRs were characterized.
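A minimal sketch of the window-level methylation calculation and non-conversion correction is given below. The per-cytosine input columns (position, methylated and unmethylated read counts) and the non-conversion rate value are assumptions for illustration; DMR calling itself was done with swDMR as described.

```python
# Sketch: sliding-window methylation level (ML) with non-conversion correction.
# Assumes a per-cytosine table with columns: pos, count_methylated, count_unmethylated.
import pandas as pd

def window_ml(df, window=3000, step=600, non_conversion_rate=0.005):
    """Return per-window ML = mC/(mC + C) and ML_corrected = (ML - r)/(1 - r)."""
    results = []
    start, end = 0, int(df["pos"].max())
    while start < end:
        win = df[(df["pos"] >= start) & (df["pos"] < start + window)]
        m = win["count_methylated"].sum()
        u = win["count_unmethylated"].sum()
        if m + u > 0:
            ml = m / (m + u)
            ml_corr = (ml - non_conversion_rate) / (1 - non_conversion_rate)
            results.append({"start": start, "ML": ml,
                            "ML_corrected": max(ml_corr, 0.0)})
        start += step
    return pd.DataFrame(results)

sites = pd.read_csv("chr1_CpG_counts.csv")  # hypothetical input
print(window_ml(sites).head())
```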
Functional enrichment analysis
Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were employed for functional clustering and pathway analysis of the DEGs and DMR-associated genes. GO analysis was implemented with the GOseq R package, which corrects for gene-length bias. GO terms with a corrected P-value less than 0.05 were considered significantly enriched for DMR-related genes. KOBAS software was used to test the statistical enrichment of DMR-related genes in KEGG pathways.
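The enrichment tests mentioned above can be framed as a per-term Fisher's exact (hypergeometric) test; a generic sketch is shown below. The term-to-gene mapping and gene lists are toy placeholders, and the actual analyses used GOseq (which additionally corrects for gene-length bias) and KOBAS.

```python
# Generic enrichment sketch: Fisher's exact test per term + BH (FDR) correction.
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def enrich(term_to_genes, candidate_genes, background_genes):
    """Test over-representation of each term among candidate genes."""
    candidate = set(candidate_genes)
    background = set(background_genes) | candidate
    non_candidate = background - candidate
    terms, pvals = [], []
    for term, genes in term_to_genes.items():
        genes = set(genes) & background
        a = len(genes & candidate)        # candidate genes annotated to the term
        b = len(candidate) - a            # candidate genes not annotated
        c = len(genes & non_candidate)    # background-only genes annotated
        d = len(non_candidate) - c        # background-only genes not annotated
        _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
        terms.append(term)
        pvals.append(p)
    fdr = multipletests(pvals, method="fdr_bh")[1]
    return sorted(zip(terms, pvals, fdr), key=lambda x: x[1])

# Toy usage (real analyses used GOseq/KOBAS annotations):
print(enrich({"lipid metabolism": ["g1", "g2", "g5"]},
             ["g1", "g2", "g3"],
             ["g%d" % i for i in range(1, 101)]))
```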
Integrative analysis
Genes that were expressed differentially between the two groups were selected with cuffdiff, a tool within the Cufflinks package. To explore whether these genes also showed methylation variation, we performed t tests on beta values between the groups, and performed analysis of variance (ANOVA) to compare beta values. The number of genes that showed significant differences between the groups at both the transcriptional and methylation levels was counted. The distance of CpG markers to the TSS was used as a covariate in a regression analysis to test whether it was a confounding factor. To detect the correlation between DNA methylation and gene expression, we performed ANOVA to select CpG markers that showed methylation variation. Regression analysis followed to test whether the changes in methylation were correlated with gene expression. All tests were FDR-controlled to adjust for multiple testing. The distribution of p values from the regression analysis was inspected to check whether it was uniform. CpG markers were then grouped by their location, inside or outside CpG islands, and the effect of location on the distribution of Pearson's correlations was tested. All data processing and analyses were conducted on a Dell PowerEdge R910 workstation.
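To make the integrative step concrete, the sketch below regresses log-expression on the methylation (beta) value with distance-to-TSS as a covariate and applies FDR correction across genes. The long-format input table and its column names are hypothetical, and the sketch is only a schematic stand-in for the ANOVA/regression workflow described above.

```python
# Sketch: per-gene regression of expression on methylation with a
# distance-to-TSS covariate, followed by FDR adjustment across genes.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Hypothetical long-format table: one row per gene per CpG marker per sample,
# with columns gene, log_fpkm, beta (methylation level), dist_tss.
data = pd.read_csv("methylation_expression_pairs.csv")

records = []
for gene, g in data.groupby("gene"):
    if len(g) < 4:          # need enough observations to fit the model
        continue
    fit = smf.ols("log_fpkm ~ beta + dist_tss", data=g).fit()
    records.append({"gene": gene,
                    "beta_coef": fit.params["beta"],
                    "p": fit.pvalues["beta"]})

res = pd.DataFrame(records)
res["fdr"] = multipletests(res["p"], method="fdr_bh")[1]
print(res.sort_values("fdr").head())
```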
Transcriptome profiles of Lvliang Black goats under grazing and confined feeding conditions
To explore which genes respond to grazing and confined rearing feeding patterns, LBG blood cell transcriptome data from the 6 individuals were obtained by RNA-Seq. After filtering out low-quality reads, the number of clean reads ranged from 22,030,293 to 29,015,974, with relatively high base quality (Q30 > 92.70% for all samples) (Table 1). The mapping rate of all samples ranged from 84.60% to 87.25% when mapped against the Capra hircus reference genome. We obtained a total of 15,971 expressed genes after transcript assembly. DEG analysis revealed 102 genes that were differentially expressed between the two groups, of which 74 were up-regulated and 28 were down-regulated in the confined rearing group (Figure 1 and Supplemental Table S1). Five DEGs were randomly selected and examined by qRT-PCR using three additional goats from each group. The qRT-PCR results were in good agreement with the RNA-seq data, indicating the reliability of our RNA-seq data (Figure 2). Functional enrichment analysis of the DEGs showed that 75 genes were enriched in 45 GO terms related to development and stress response, such as biological regulation, developmental process, immune system process, metabolic process and response to stimulus (Supplemental Figure S1). KEGG pathway analysis showed that 27 DEGs mapped to 53 pathways involved in cellular processes, environmental information processing, genetic information processing, diseases, metabolism and organismal systems (Supplemental Figure S2 and Supplemental Table S2).
Genome-wide methylation profiles of Lvliang Black goats under grazing and confined rearing conditions
To explore genome-wide DNA methylation differences between the grazing and confined rearing groups, we performed methylation sequencing. High-quality data (confined rearing: 77.58% of raw reads; grazing: 79.39% of raw reads) were obtained by filtering out low-quality reads and bases in each group. The high-quality reads were mapped against the goat reference genome with an extremely high bisulfite conversion rate (> 99.52% for all samples). In addition, the average depth of the two groups was 28× (grazing) and 27× (confined rearing), respectively (Table 2). Over 92% of the genome was covered by at least one read and more than 83% of the whole genome was covered by at least 10 clean reads. These results indicate that our methylation signals have a very low false positive rate and could be employed for the following analyses. The overall distribution and levels of DNA methylation across the LBG genome were evaluated with a window size of 100 kb on every chromosome. Our results showed that all three types of DNA methylation were detected at the 5′ position of cytosine. The methylation at CG sites reached 87.95% in the grazing group and 88.7% in the confined rearing group (Table 3 and Supplemental Table S3). However, CHG and CHH methylation occurred in the LBG genome at low levels: the incidence of CHG methylation was less than 3%, and that of CHH methylation less than 10%, in both groups. The overall methylation level at CG sites was clearly higher than that at CHG and CHH sites, suggesting that CG methylation is the dominant methylation type in the LBG genome. Furthermore, CG-type methylation occurred mainly in the 2 kb upstream of genes, and its level was gradually attenuated as positions approached the first exon of genes, reaching the lowest level in the TSS region (Figure 3), while CHH- and CHG-type methylation was uniformly distributed in gene regions. In accord with the differentially methylated regions (DMRs), sequences downstream and upstream of genes generally had a higher density of differentially methylated positions (DMPs) than the gene body. However, DMPs were generally enriched at the transcription start site under both feeding patterns, which may indicate that they play roles in the regulation of gene expression (Supplemental Figure S3). Comparative analysis identified 7,833 DMRs and 280,550 DMPs, comprising 7,811 CG, 21 CHH and 1 CHG DMRs, and 273,322 CG, 1,717 CHH and 5,511 CHG DMPs across the LBG genome. Interestingly, we observed that the grazing feeding pattern had a higher level of DNA methylation than the confined rearing feeding pattern. We observed that 2,057 genes were enriched for DMPs (FDR < 0.01) and 1,528 genes were enriched for DMRs (Supplemental Table S4); in total, 3,432 genes had at least one DMR and were therefore annotated as differentially methylated genes (DMGs) (Supplemental Table S4). Functional enrichment analysis showed that these genes are involved in the cellular process, single-organism process, biological regulation and metabolic process terms, the cell part and organelle terms, and the binding and catalytic activity terms (Supplemental Figure S4 and Supplemental Table S5).
Interestingly, functional enrichment of the DMGs from CHH regions showed that they were enriched in the GO terms that were also enriched by DMGs from CG DMRs. However, genes in CHG regions were not enriched in any terms. KEGG analysis showed that 1,034 of the 3,432 DMGs in CG DMRs mapped to 282 pathways (Supplemental Tables S5 and S6), with a large proportion of them mapping to 49 pathways (Supplemental Figure S5).
Integrative analysis of the transcriptome and methylome profiles
Correlation analysis was conducted to determine whether the expression level of genes was affected by DNA methylation. The distance of methylation sites to the transcription start site (TSS) was used as a weighting coefficient for adjusting the methylation levels. Our results showed that the weighted methylation level in the CG context across the region upstream of and within the gene body was significantly associated with FPKM values in both the grazing and confined rearing groups (Supplemental Table S7), which suggests that methylation levels in upstream and gene-body regions may affect the expression level of genes. However, the impact of DNA methylation on gene expression levels is unlikely to be linear, as the Pearson correlation coefficients were quite low (ranging from −0.03 to −0.11).
Furthermore, 48 DMRs overlapped with 21 DEGs between the grazing and confinement regimen groups (Supplemental Table S8), with 39 DMRs located in gene upstream regions and 9 DMRs in introns.
Discussion
It has been reported that many plant and animal species exposed to extreme environmental conditions exhibit multi-faceted adaptation and genetic modifications [1,5,22]. In this study, we conducted transcriptome and genome-wide methylation profiling of LBG under grazing and confinement regimen feeding conditions to explore the genetic divergence underlying different feeding patterns.
Our results showed that 102 genes exhibited different expression between the grazing and confinement regimen feeding conditions. These differentially expressed genes were involved in various functional categories. Of these DEGs, 27 genes were involved in cellular processes, environmental information processing, genetic information processing, human diseases, metabolism and organismal systems, which suggests that different feeding strategies may have a wide range of effects on individual condition, from genic and cellular processes and metabolism to tissue assembly and environmental adaptation. On the other hand, we detected numerous DMRs, and both functional and pathway enrichment showed that the difference in genomic methylation between the two groups also involves various functional categories, which further confirms that different feeding methods result in genetic divergence in LBG.
Genomic methylation analysis showed that CG methylation is the major type of DNA methylation in LBG, which is consistent with a previous study [5]. The significantly reduced CG methylation level from distal genic regions toward TSS regions suggests that the methylation level may be negatively correlated with the transcriptional level in LBG, although this correlation is generally not strong. The hypothesis is further supported by the low methylation level of highly expressed genes at the TSS. Our results are in agreement with previous observations that gene expression is influenced by DNA methylation [7,8]. Although hyper-DNA methylation is generally associated with gene expression silencing, the intricate relationships among genes and related pathways prevent us from clarifying their causal relationship without large-scale multi-omics data. This complexity is also demonstrated by the low level of correlation between DNA methylation and gene expression.
Integrative analysis of DNA methylation and the transcriptome showed that 12 DEGs were identified in 48 DMR regions. These genes were involved in signal transduction mechanisms, lipid transport and metabolism, transcription, and other processes. Numerous studies have confirmed that all of these processes are crucial to organismal homeostasis and the response to changing environments and various stress factors [23,24]. Thus, the methylation and expression changes of these genes may be associated with the genetic divergence between feeding strategies in LBG. Furthermore, the PLCL1 gene, which encodes Inactive Phospholipase C-like protein 1, is involved in the inositol phospholipid-based intracellular signaling cascade. This gene is also involved in the GABAergic synapse pathway. It has been reported that GABA is a predominant inhibitory neurotransmitter that regulates glutamatergic activity [25]. Functional analyses of GABA in rats showed that its levels can regulate heat production and heat loss associated with changes in ambient temperature [26]. The climate of the Loess Plateau is harsh, with extreme heat in summer and extreme cold in winter. The grazing group has to cope with such severe climate shifts, while the confined feeding group enjoys a comfortable environment at all times. Therefore, the different expression of the PLCL1 gene between the two groups may reflect differences in the strategies for dealing with changing environmental temperature.
The INPP4B gene (inositol polyphosphate 4-phosphatase type II) is ubiquitously expressed in various tissues and acts as an important regulatory factor by regulating the PI3K/Akt signaling pathway in various tumors [27]. Both functional loss and overexpression of the gene can result in carcinogenesis. The different expression of INPP4B in LBG may reflect a genetic discrepancy in resistance to environmental pathogens. This hypothesis is also supported by another DEG, the IL6ST gene. IL6ST (IL-6 signal transducer) is expressed in various tissues. Our results showed that the gene is involved in the Jak-STAT signaling pathway and in the regulation of stem cell pluripotency. The Jak-STAT signaling pathway participates widely in many biological processes, including proliferation, differentiation, migration, apoptosis and cell survival [28]. It has been reported that this pathway plays a very important role in numerous developmental and homeostatic processes, including hematopoiesis, immune cell development, stem cell maintenance, organismal growth, and mammary gland development [29]. Thus, the different expression levels of the IL6ST gene may reflect divergence in multiple biological processes such as immunity, homeostasis, growth and development.
Conclusions
In summary, the genetic and epigenetic basis underlying the response to different feeding strategies was investigated by whole-genome methylation and transcriptomic profiling in Lvliang Black goats inhabiting the Loess Plateau. Several genes in DMRs exhibited significantly different expression patterns and were involved in lipid transport, metabolism and immunity. The results suggest that the two different feeding patterns may involve broadly modified physiological processes at multiple levels. Further functional studies of the candidate genes identified here would help validate the systematic genetic and molecular basis associated with different feeding strategies in livestock.
Disclosure statement
We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.
Data availability statement
All data that support the findings in this study are available from the corresponding author upon reasonable request.
|
2021-07-26T00:06:27.143Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "0b04614a3570c62360e6d1c91cc0a7c85b6b2ae2",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/13102818.2021.1914164?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "3c71e415b9c9427a00d67c1d568356daca7e0d6c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
}
|
240033135
|
pes2o/s2orc
|
v3-fos-license
|
Understanding Wheat Starch Metabolism in Properties, Environmental Stress Condition, and Molecular Approaches for Value-Added Utilization
Wheat starch is one of the most important components of wheat grain and is extensively used as a main ingredient in bread, noodles, and cookies. The wheat endosperm is composed of about 70% starch, so differences in the quality and quantity of starch affect the flour processing characteristics. Investigations of starch composition, structure, morphology, molecular markers, and transformation are providing new and efficient techniques that can improve the quality of bread wheat. Additionally, wheat starch composition and quality vary due to genetic and environmental factors. Starch is more sensitive to heat and drought stress than storage proteins. These stresses also have a great influence on the grain filling period and anthesis, and, consequently, a negative effect on starch synthesis. Sucrose-metabolizing and starch synthesis enzymes are suppressed under heat and drought stress during the grain filling period. Therefore, it is important to elucidate starch and sucrose metabolism during plant responses in the grain filling period. In recent years, most of these quality traits have been investigated through genetic modification studies. This is an attractive approach to improve the functional properties of wheat starch. The new information collected from hybrid and transgenic plants is expected to help develop novel starches, both for understanding wheat starch biosynthesis and for commercial use. The main purpose of wheat transformation research using plant genetic engineering technology is to continuously control and analyze the properties of wheat starch. The aim of this paper is to review the structure, biosynthesis mechanism, quality, and response to heat and drought stress of wheat starch. Additionally, molecular marker and transformation studies are reviewed to elucidate starch quality in wheat.
Introduction
Wheat is a major cereal crop that provides the world's population with calories and protein. Total wheat utilization is expected to reach nearly 746 million tons by 2020, and about 68% of total wheat use is projected to be consumed primarily as food by 2020 (Figure 1) [1]. Wheat is mostly used as food, seed, feed, and fuel. Wheat grain is composed of 13% water, ~70% carbohydrates, 7~15% proteins, and 1.5~2% lipids [2]. In particular, wheat grains contain an important protein called gluten, which is needed in the basic structure to form a dough system for bread, cakes, cookies, cereals, pasta, and noodles. Among the different wheat species, Triticum aestivum is used to make bread and noodles, and T. durum for spaghetti and macaroni. T. monococcum, T. dicoccum, and T. spelta, customarily referred to as einkorn, emmer, and spelt, respectively, are some of the ancient species used in grain berries, farro, and salads [3].
Wheat starch is an important by-product of gluten production [2]. Wheat endosperm is composed of about 70% starch. The difference in the quality and quantity of starch affects the flour processing characteristics. Wheat starch is obtained by removing protein from flour and is comparable to corn starch or flour in its processed state. It is now an essential
Characterization of Wheat Starch
Wheat starch is a major storage carbohydrate, accounting for about 60~75% of the grain and 70~80% of the flour [3]. Starch granules located in starchy endosperm cells are composed of two polymers called amylose and amylopectin (Figure 2) [5]. Starch consists of two granule populations, large A-granules (5~40 µm) and small B-granules (<10 µm). Amylose is a linear α-1,4 glucan, comprising 25~30% of wheat grain starch [6]. Amylopectin is a larger, highly branched glucan comprising 70~75% of wheat grain starch. Moreover, starch also contains relatively small amounts of minerals, which are not functionally significant, except for phosphorus [7]. Phosphorus is found mostly in three main forms, i.e., phosphate monoesters, phospholipids, and inorganic phosphate. Phosphate monoesters are bonded to specific regions within the amylopectin molecule [8,9].
Starch plays a significant role in the texture of many kinds of food and serves as a major source of energy for humans [10]. Native starch often lacks the functional properties required for food applications such as thickening and stabilization. Therefore, starch used in the food industry is modified to overcome undesirable changes in product texture caused by the decomposition of starch during processing and storage.
The waxy starch used in the food industry is typically chemically modified [11]. Crosslinked waxy starch typically exhibits a shorter texture, higher paste stability, and greater resistance to cooking shear, temperature, and low pH than native starch [12]. Modified starch made from waxy wheat has a lower gelatinization temperature and paste clarity than modified corn starch. However, the freeze-thaw stability of modified waxy wheat starch is generally better than that of modified waxy corn starch [13].
Starch Structure
Starch granules are composed of two types of α-glucan, amylose and amylopectin, which make up about 98-99% of the dry weight [7]. The proportion of the two polysaccharides varies depending on the botanical origin of the starch: "waxy" starches contain less than 15% amylose, "normal" starches 20-35%, and "high-amylose" starches more than 40%. Amylose and amylopectin (Figure 2) differ in structure and properties and have been discussed and reviewed by many authors [14][15][16][17]. Amylose is a relatively long, linear α-glucan containing about 99% (1→4)-α- and (1→6)-α-linkages and differs in size and structure according to plant origin. The molecular weight of amylose is approximately 1 × 10⁵ to 1 × 10⁶ [17]. Amylopectin (Figure 2) has a molecular weight of 1 × 10⁷ to 1 × 10⁹ [17,18] and is much larger than amylose. Amylopectin is a heavily branched structure consisting of about 95% (1→4)-α- and 5% (1→6)-α-linkages. Starch is the main component of wheat grain, accounting for 60-70% of its dry weight [19], followed by protein, which defines grain quality [20]. Wheat kernel starch in the endosperm comprises three types of starch granules (A-, B-, and C-type), each distinguished by its properties [20]. Each type has unique physicochemical properties that determine starch quality. The dynamics of starch granule size distribution, the activity of starch synthase, and the expression of genes encoding starch synthase have been studied in superior and inferior grains during grain filling. Superior grains showed higher grain weight and higher starch, amylose, and amylopectin contents than inferior grains. Genotype × environment interactions affect the polymers and alter grain starch and protein formation [21]. Recently, the effects of climate change on grain quality and food safety have received attention. The content and quality of wheat protein are also affected by plant nutrition and crop management. Additionally, under elevated temperatures between anthesis and grain maturity, grain yield is reduced because of the reduced time available to capture resources [22].
Starch Biosynthesis Mechanism
Starch is the main storage compound in plants, present in both photosynthetic and storage organs. Starch biosynthesis is a complex process [7,17]; higher plants use prokaryote-like starch biosynthetic pathways for the formation of adenosine 5'-diphosphate glucose (ADP-glucose) [23], a soluble precursor and substrate for starch synthase [24]. Starch biosynthesis is initiated by the action of the enzyme ADP-glucose pyrophosphorylase (AGPase, E.C. 2.7.7.27), which catalyzes the reaction of glucose-1-phosphate with ATP in plant cells [25]. The AGPase reaction is the first step in the biosynthesis of transient starch in chloroplasts and chromoplasts; in storage tissues, ADP-glucose is subsequently imported into amyloplasts, following different mechanisms of post-translational regulation by related genes. The biosynthetic pathway for starch is summarized in Figure 3 [26]. Sucrose produced by photosynthesis moves to the amyloplast and is metabolized to hexose phosphates. These hexose phosphates act as substrates for starch, protein, and oil biosynthesis. As the endosperm develops, most of the hexose phosphate is used for starch biosynthesis. To drive such an energy-intensive process, phosphorylation and ATP production are required.
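For reference, the reaction catalyzed by AGPase can be written as follows (standard enzymology stated here for clarity, rather than a result from the cited studies):

$$\mathrm{ATP} + \text{glucose-1-phosphate} \rightleftharpoons \text{ADP-glucose} + \mathrm{PP_i}$$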
Starch synthase enzymes transfer glucose residues from ADP-glucose to the ends of amylose and amylopectin chains to elongate the polysaccharide polymers. In the polysaccharide chain constituting amylose, the glucose units are joined continuously through the hydroxyl groups of carbon 1 and carbon 4. Amylopectin forms regular branches by additionally connecting the hydroxyl groups of carbons 1 and 6. The formation of these branches involves the starch branching enzyme (SBE) [26]. These two polymeric compounds form semi-crystalline starch granules, and the exact proportion, size, and shape of the starch granules vary according to plant species and organ [16]. A schematic diagram of the granular structure is shown in Figure 4. In the developing endosperm of wheat, corn, barley, and rice, the cytosolic isoform of AGPase accounts for 65 to 95% of the total AGPase activity [25]. In higher plants, AGPase is a heterotetramer consisting of two large (AGP-L) subunits and two small (AGP-S) catalytic subunits encoded by two or more different genes [27]. Plants have multiple genes that encode AGP-L or AGP-S subunits, which are differentially expressed in different plant organs. The multiple genes encoding AGP-L subunits show strong specificity in expression, being limited to the leaves, roots, and endosperm of barley [28], wheat [29], and rice [30], or are induced under certain conditions, such as increased sucrose or glucose levels in potatoes [31,32].
Resistant Starch in Wheat
We consume grains, pasta, and potatoes in our daily lives, and most of these carbohydrates are starch. Starch quality is largely determined by the ratio of amylose to amylopectin [22]. Normal starch is rapidly digested and absorbed as glucose. During digestion, the body produces a hyperglycemic response; to alleviate this, insulin is secreted, and blood glucose falls again. If this cycle is repeated, the body is easily exposed to various diseases such as obesity and diabetes, which is why carbohydrates are often regarded as a public enemy. Resistant starch (RS), in contrast, is a carbohydrate that behaves differently: it is starch that is not easily broken down by digestive enzymes in the body [33]. Amylase cannot break it down into glucose, so it is not absorbed by the body; instead, it is broken down by bacteria in the large intestine. Starch with an increased amylose ratio is of great interest because it contributes to RS in food and has a beneficial effect on human health. Recently, high-amylose starch showed a positive effect on health in a study related to obesity in humans [34]. RS can generally be divided into five types [5,35]: (1) starch in seeds, legumes, and whole grains that is physically difficult to digest; (2) starch that is high in RS when raw but loses this resistance on ripening; (3) starch that is low in RS when warm after cooking but higher in RS after cooling (retrograded starch); (4) chemically modified starch; and (5) starch that forms complexes with lipids, changing its structure and resisting digestion. Foods rich in resistant starch include oats (oatmeal flakes), cold rice, cooked legumes, cooked potatoes, and unripe bananas. Unripe bananas, for example, contain about 20% resistant starch. The resistant starch in bananas is thought to help with weight loss because it stimulates glucagon, which promotes fat breakdown without raising blood sugar.
In 1982, during in vitro analysis of non-starch polysaccharides, Englyst and co-workers discovered that some starch remained after enzyme hydrolysis [36][37][38]. A follow-up study of healthy ileostomy patients confirmed that similar starch resists digestion in the stomach and small intestine. Further analysis showed that such starch can be fermented in the large intestine. This type of starch was named resistant starch (RS) [39]. RS can reach the colon and serve as a substrate for microbial fermentation, the final products of which are hydrogen, carbon dioxide, methane, and short-chain fatty acids [40]. According to Wong et al. [35], resistant starch acts similarly to dietary fiber, nourishing intestinal bacteria and increasing the production of short-chain fatty acids such as butyrate. Brouns et al. [41] also found that resistant starch helps keep mucosal cells in the colon healthy and prevents cancer cell division. Based on the causes of enzyme resistance, RS is classified into five different types [42,43]. Table 1 summarizes the different types of RS, their classification criteria, and food sources.

Table 1. Classification of types of resistant starch (RS), food sources, and factors affecting their resistance to digestion in the colon [42,43].
RS1 – Physically inaccessible starch – Whole or partly milled grains and seeds, legumes [44]
RS2 – Native (granular) starch – Raw potatoes, green bananas
RS3 – Retrograded starch – Cooked and cooled starchy foods (e.g., potatoes, pasta)
RS4 – Chemically modified starch – Some fiber drinks, foods in which modified starches have been used (e.g., certain breads and cakes) [46]
RS5 – Amylose-lipid complex – Stearic acid-complexed high-amylose starch [47]

RS1 is starch that is physically inaccessible to digestion, such as that in whole grains or tubers. RS2 is native granular starch that is protected from digestion by the conformation or structure of the granule and is found in raw potatoes and green bananas. RS3 is retrograded starch formed when starchy foods (e.g., potatoes, pasta) are cooked and then cooled; cooling allows the amylose and the linear parts of amylopectin to form crystalline structures that reduce digestibility. RS4 is chemically modified starch formed by crosslinking, etherification, or esterification and is found in foods containing modified starches, such as some breads and cakes. RS5 is starch in which the amylose component forms complexes with lipids (amylose-lipid complexes). The amylose-lipid complex is generally found in native starch granules and processed starch. This complex also entangles amylopectin molecules, restricting the swelling of starch granules and enzyme hydrolysis [47,48]. The formation of amylose-lipid complexes is an immediate reaction, and RS5 is considered thermally stable because the complex can regenerate after cooking [49]. The presence of the amylose-lipid complex in starch granules increases their enzyme resistance by restricting granule swelling during cooking [47].
Grain Filling Stage under Heat/Drought Stress
Wheat development stages include "germination, emergence, tillering, floral initiation, terminal spikelet, stem elongation, spike emergence, anthesis and maturity" (http://www.fao.org, accessed on 20 October 2021). Starch synthesis has been observed to be most relevant during the stages of anthesis and maturity [50]. Wheat starch production losses are caused more by abiotic stresses such as drought and high temperature than by biotic insults or other abiotic stresses [51]. Thus, understanding the effects of these stresses is indispensable for wheat starch improvement programs, whose outcomes depend strongly on environmental factors such as heat and drought stress.
Wheat plants are frequently subjected to varying degrees of heat and drought stress during their growth [52]. Heat stress is assessed by the degree and rate of temperature rise, as well as the amount of time spent exposed to the elevated temperature [53]. Globally, the rise in daily minimum temperatures was more than twice that of daily maximum temperatures between 1950 and 1995 [54]. Greater temperature variability and an increase in the frequency of warm days will also have an impact on future climates [55]. Wheat yield losses will increase by up to 30% by 2050 as a result of climate change and a 2-3 °C rise in global temperature [56]. The optimal temperature for wheat anthesis and grain filling is between 20 and 25 °C. The wheat grain filling rate is reduced when plants are exposed to temperatures above 30 °C during the anthesis and grain filling stages, resulting in lower yield and quality [57,58]. As a result, heat stress is a significant challenge to wheat production and optimal yields [59]. Drought stress decreases cell elongation and growth by causing water loss, turgor loss, and stomatal closure [60,61]. It also causes early senescence and reduces the duration of the grain filling stage because photosynthesis and metabolism are disrupted, resulting in cell death [62]. Therefore, it is important to elucidate wheat tolerance mechanisms in response to drought and high-temperature stress during the grain filling period. During the reproductive phase of wheat growth, drought and high-temperature stress have emerged as serious problems for starch synthesis. In Korea, as in other countries, the anthesis and maturity periods mostly fall from April to June. In Miryang, one of Korea's largest wheat cultivation areas, precipitation has continued to decrease while temperature has continued to increase during the anthesis and maturity period over the past three years (rainfall: 2018, 93.5 mm; 2019, 50.3 mm; 2020, 48.6 mm; temperature: 2018, 28.5 °C; 2019, 29.1 °C; 2020, 30.2 °C) (https://data.kma.go.kr/, accessed on 20 October 2021). To overcome these problems, early-ripening wheat cultivars developed through crossbreeding (Jokyung (accession no. 102005000184), Jopum (102000200523), Joeun (102001000044), etc.) were released from the 1990s to the 2010s in Korea. Additionally, transcriptome analysis under heat stress was performed on Korean wheat cultivars during the ripening period [63].
Starch & Drought and Heat Stress during Anthesis and Grain Filling Stage
Starch is more sensitive than storage protein to heat stress [64]. Although wide genetic variability has been observed among wheat species for heat tolerance in grain starch content [65], changes in amylose and amylopectin deposition, as well as changes in starch granule formation, are of specific importance [26]. During the wheat grain filling period, high temperatures reduced the starch content and modified the size distribution of starch granules in grains [6]. High temperature also changed the chain length of amylopectin in endosperm starches [66] and caused poor starch granule structure [67]. Starch synthesis is highly susceptible to high-temperature stress because of the susceptibility of soluble starch synthase in developing wheat kernels [68,69]. During the grain filling period, short periods of very high temperature (35-40 °C) can have a negative impact on grain quality [70]. However, high-temperature acclimation effectively improved carbohydrate remobilization from stems to grains during anthesis, which resulted in less modification of starch content and starch granule size distribution in wheat grains [20]. Since pollen maturation requires starch as an energy reserve, starch accumulated in stem tissue is used as a temporary sink during the reproductive process of plants [71]. Pollen production is interrupted, and pollen mortality is increased, as a result of a high-temperature-induced impediment to starch mobilization within the anther [72]. Drought stress can cause grains to lose total starch and amylopectin content during the flowering stage [73], and it affects starch granule size distribution and branch chain length during anthesis [74]. Under the two stress conditions, drought reduced the size of small starch granules, while heat stress reduced the size of large granules. Thus, changes in the morphology and size distribution of starch granules resulted in a decrease in starch content and total grain yield [75]. Despite the significant deleterious effects of high-temperature and drought stress on wheat production, the plant's starch response mechanism during the grain filling period has not been clearly elucidated.
Starch & High Night Temperature during Anthesis and Grain Filling Stage
High night temperatures of 20 to 23 °C reduced the grain-filling period by 3 to 7 days [76]. A critical decrease in the rate of grain filling has recently been reported in wheat cultivars grown at a day/night temperature of 32/22 °C compared to 25/15 °C [77]. Day/night temperatures of 31/20 °C can cause changes in the aleurone layer and endosperm structures [78]. Higher night temperature reduced the transcript levels of the ADP-glucose pyrophosphorylase small subunit but increased the starch-degrading enzymes isoamylase III, alpha-amylase, and beta-amylase by a factor of two in developing grains [79]. Likewise, an increase in night temperature shortens the grain-filling period and affects grain structure more than an equivalent increase in day temperature. To improve wheat yield and quality under heat stress, a thorough study of grain weight stability in terms of starch components under different day and night temperatures is needed.
Sucrose and Starch Biosynthetic Pathway and Carbohydrate Metabolism under Stress
Heat stress decreases cereal starch content while increasing protein content during grain filling [80]. Heat stress did not affect the swelling power or starch solubility of wheat starches, but it did significantly reduce the swelling ability of wheat flours and the enzymatic digestibility of wheat starches [81]. Sucrose is processed by invertases, sucrose synthases, and sucrose phosphate synthase after it enters the grain [82]. The activity of these enzymes appears to be an effective target for control in wheat under heat stress to improve grain filling processes and yield [83]. Multisite protein phosphorylation modulates sucrose phosphate synthase in response to temperature [84]. In developing pollen grains, heat stress inhibits sucrose synthase as well as several cell wall and vacuolar invertases. As a result, sucrose and starch turnover is impaired, and soluble carbohydrates accumulate at lower levels [85]. Hence, it is necessary to analyze the sucrose and starch biosynthetic pathway mechanisms under heat stress.
Carbohydrate Metabolism under Stress
During heat stress, carbohydrate availability is a significant physiological feature linked to heat stress resistance [86]. The survival strategies of plants subjected to environmental influences such as high temperature depend on efficient carbohydrate metabolism as a source of energy and carbon skeletons [87]. Due to changes in photosynthetic carbon metabolism, heat stress prevents plant development, disrupts mineral-nutrient relationships, and impairs metabolism [88]. Invertase is required for the hydrolysis of sucrose into glucose and fructose. A central enzyme in sucrose metabolism, cell wall invertase (CWIN), catalyzes the irreversible breakdown of sucrose into glucose and fructose, and genes involved in carbohydrate metabolism, including CWIN, are downregulated under heat stress [89]. Drought stress likewise inhibits plant growth, disrupts mineral-nutrient relationships, and impairs metabolism due to changes in photosynthetic carbon metabolism [90]. It is well known that stress can alter the activity of an enzyme, and changes to sucrose-metabolizing enzyme activities also modify sucrose metabolism in leaves. However, no consistent conclusion on the impact of stress on sucrose metabolism has been drawn, and various studies have reached different conclusions.
Regulation of Starch Metabolism under Stresses
Starch metabolism enzymes include sucrose phosphate synthase (SPS), sucrose synthase (SuSy), ADP-glucose pyrophosphorylase (AGPase), glucokinase, soluble starch synthase (SSS), and starch branching enzyme (SBE) [91]. Heat stress during grain filling decreased the activities of these enzymes, which restricted the accumulation of starch [91]. The functions of these main enzymes, as well as the expression of their genes associated with the conversion of sucrose to starch, were decreased, which was the major cause of reduced starch content [92]. AGPase is one of the enzymes presumed to be the primary site of regulation of starch deposition in storage tissue [93]. Sucrose-6-phosphate synthase activity was measured in mature leaves, and sucrose synthase, AGPase, and UDP-glucose pyrophosphorylase activities were measured in the growing tubers of plants. Tuber sucrose synthase and ADP-glucose pyrophosphorylase activities decreased, but at a slower rate than leaf sucrose-6-phosphate synthase activity [94]. Sucrose synthase and ADP-glucose pyrophosphorylase activities are high in growing tubers but decrease as tubers mature [95]. Heat stress increased the accumulation of foliar sucrose and decreased starch accumulation. Drought conditions influence the activities of starch biosynthesis enzymes such as GBSS, SS, and ADP-glucose pyrophosphorylase (AGP) [26]. Hexokinase catalyzes committed steps in glucose metabolism by forming hexose phosphate [96]. In both hexokinase-dependent and hexokinase-independent pathways, glucose serves as a signal molecule in addition to its structural function [97]. Drought stress increased the expression of two hexokinase transcripts [98]. Heat and drought stress suppressed starch deposition by lowering the activity of all enzymes involved in starch synthesis except hexokinase.
Starch Synthetic Metabolism under Stresses
Heat stress reduced the activities of SPS and SuSy, resulting in lower sucrose levels during the grain filling period [99]; it increased the activities of SuSy and SBE during the early stages of grain development, but these subsequently decreased [100]. Heat stress reduced the activities of enzymes involved in starch synthesis (AGPase, SSS, and SBE) and suppressed grain weight and starch deposition during the grain filling period [101]. It also has a negative influence on SSS activity and starch granule synthesis [102]. SSS is highly sensitive to high temperatures [103], with relatively tolerant cultivars having higher catalytic efficiency of SSS at elevated temperatures and higher heat shock protein content (HSP 100). The relation between SSS activity at higher temperatures and HSP 100 levels in wheat grains may reflect a defense mechanism against SSS denaturation [104]. Limit dextrinase (LD) is the only endogenous hydrolase that can cleave the α-1,6 linkages of amylopectin and β-limit dextrin [105]. Lower LD activity results in lower fermentable sugar production and a higher level of dextrin [106]. LD activity decreased sharply during heat treatment [107]. As such, heat stress causes a decrease in several enzymes involved in the starch synthesis mechanism. Drought treatments reduced LD activity in all genotypes, but the degree of the reduction differed by genotype and treatment time [108]. The production of endosperm starch granules and the physicochemical properties of starches may be affected by drought, affecting the consistency of final wheat products [109]. Heat and drought stresses in combination, both of which strongly affect the grain filling period and anthesis, also have a negative effect on starch synthesis.
Starch and Other Stresses during Anthesis and Grain Filling Stage
In India and China, waterlogging limits wheat production [110]. Total rainfall of up to 500-800 mm falls from March to May, coinciding with anthesis and maturity, the period during which wheat grain starch is formed [111]. Waterlogging after anthesis caused poor production [112]. Waterlogging reduces grain number and grain weight depending on the duration of exposure [113]. GBSS activity declined, as did ADP-glucose pyrophosphorylase activity, under water stress [114]. Waterlogging affected several starch properties by downregulating the expression of soluble starch synthase and reducing the amylopectin content and the number of starch granules [115]. In another study, waterlogging depressed ADP-glucose pyrophosphorylase activity and the amylopectin/amylose ratio [116]. As such, waterlogging also damages the starch synthesis mechanism during the early anthesis period.
Pre-harvest sprouting (PHS) is the premature germination of grain before harvest in wheat. PHS occurs in several wheat-growing regions [117]. When germination begins, starch- and protein-degrading enzymes are produced, which break down endosperm starch and protein to support germination [118]. Alpha-amylase is a starch-degrading enzyme that is generated during the PHS process [119]. Increased amylase, protease, and lipase activities in sprouted wheat cause protein or starch degradation, resulting in decreased wheat quality [120]. Several strategies, such as QTL analysis [121] and MALDI-TOF [122], have been used to investigate PHS. Because PHS induces several enzymatic reactions, genetic research is also needed.
Salt stress is one of the environmental factors that limit crop development and agricultural output. High salinity has been shown to affect carbohydrate metabolism. Salt stress induced GBSS expression, which was highly controlled at the transcriptional level [123]. Starch synthesis under salt stress is regulated by ADP-glucose pyrophosphorylase (AGP), starch synthase (SS), and starch branching enzyme (SBE) [124]. Triticale starch synthesized under salinity stress showed a decreased population of small granules and an increased ratio of A-type to B-type granules [125]. Salinity stress has been observed to increase starch synthesis by regulating several enzymes; however, little is known about the molecular mechanism by which NaCl regulates starch accumulation.
Molecular Marker Development and Application for Wheat Starch
In the past, breeding research relied on measuring only the characteristics of interest to select superior lines. For example, it is easy to score simple morphological properties such as plant height across large numbers of offspring; the size and yield of each grain can also be considered. However, most quality characteristics require laboratory analysis or bioassay, and many characteristics are difficult to measure (e.g., grain dormancy and late maturity), so the resources available to breeders impose significant constraints on the speed and scale of selection. In such cases, markers that indirectly represent the characteristics of interest and are relatively easy to score are of great value to wheat breeders [126]. Markers can be linked (i.e., likely to be inherited with the trait of interest because of the genetic proximity of marker and gene) or diagnostic if they are directly related to the genes themselves. Diagnostic markers do not require independent verification for each parent line used in breeding programs and have the important advantage of an absolute association with the selected characteristics.
To develop an efficient breeding program in common wheat, four techniques (SDS-PAGE, 2-DE, MALDI-TOF-MS, and PCR) were compared to evaluate their suitability [127]. Of these, PCR-based markers proved to be the easiest, most accurate, and most rational technique and are recommended for identifying Glu-A3 and Glu-B3 alleles in breeding programs. Seventeen allele-specific markers have been reported for the Glu-A3 and Glu-B3 loci (Table 2), and multiple PCR protocols have been developed to reduce screening costs in breeding programs [128]. Additionally, the application of functional markers for the identification of LMW-GS in various types of wheat germplasm has been reported [143]. Functional markers are developed from a functional polymorphism in the gene coding sequence, which can be a single nucleotide polymorphism (SNP) or an InDel [142]. Map-based cloning and micromapping are the most effective strategies to isolate functional genes from plants [144].
Molecular marker technology has provided a new and efficient tool to improve the quality of bread wheat. To improve and support bread-making quality, high-throughput Kompetitive Allele-Specific PCR (KASP) assays were developed and verified for key genes, including the wbm gene on chromosome 7AL and the overexpressed glutenin Bx7OE (Glu-B1al) gene [141]. These high-throughput marker resources provide the opportunity to improve bread-making quality in wheat breeding. PCR-based markers for each waxy allele, covering the Wx-A1, Wx-B1, and Wx-D1 genes, can identify wild-type and null waxy alleles at the waxy loci [139,145,146]. These PCR marker sets were used to identify and characterize waxy mutations occurring in the Wx-A1, Wx-B1, and Wx-D1 genes of 168 wheat lines [147].
An important factor in determining the amylose content of grain starch is the 59 kDa granule-bound starch synthase (GBSS) protein [148,149]. In wheat starch, amylose levels are affected by the activity of GBSS1 during endosperm development [150]. Low amylose content in wheat increases starch viscosity and flour swelling volume (FSV) [151,152], and this property is preferred for white salted (udon-style) noodle production [153,154]. In durum wheat, the Wx-B1 null mutation resulted in decreased amylose content with increased starch dough viscosity and FSV [155]. In addition, pasta derived from the Wx-B1 null line had lower cooking losses. Furthermore, cooking losses have shown a correlation with amylose content, peak starch viscosity, swelling power of semolina, and adhesiveness of cooked pasta [155].
Two types of GBSS genes, GBSSI and GBSSII, are present in wheat (T. aestivum L.), barley (Hordeum vulgare L.), corn (Zea mays L.), and rice (Oryza sativa L.) [156]. The GBSSI gene responsible for amylose synthesis in endosperm tissue is located at the waxy locus, and the GBSSI gene product is known as the Waxy (Wx) protein [157]. Waxy (GBSSI triple null) grains can be identified by simple potassium iodide staining [158]. Each GBSS protein can be detected by 2D-electrophoresis [139] and by SDS-PAGE under optimal conditions [159]. In wheat, three GBSSI genes located on chromosomes 7A (Wx-A1), 4A (Wx-B1), and 7D (Wx-D1) encode GBSSI. In the absence of the GBSS enzyme in the grain endosperm, this tissue consists almost entirely of amylopectin [158]. Meanwhile, to identify wheat with the desired texture for udon noodles, a specific PCR analysis method was developed to identify molecular markers linked to the GBSS 4A locus [160]. These PCR markers can be assayed easily and accurately from the leaves of young seedlings or from mature seeds, unlike conventional methods used to screen udon noodle starch quality. In addition, this PCR marker analysis is advantageous for identifying breeding lines that are heterozygous for the 4A allele.
With the development of Next-Generation Sequencing (NGS), NGS-based genotyping techniques have been applied to the development of molecular markers for grain starch and quality [161]. A genome-wide association study (GWAS) is considered an attractive approach for assessing grain quality. Starch contents and starch-related parameters in rice have been studied using GWAS analysis [162,163]. Recently, GWAS was performed to identify genetic factors underlying wheat grain quality, including grain protein content, grain starch content, and grain hardness [164]. This kind of study, especially GWAS analysis of wheat quality, could become a growing trend in digital big data-based precision breeding.
Genetic Modification of Starch Composition in Wheat
Grain is the harvested part of wheat, and its nutritional and functional properties are determined by its biochemical composition. In wheat seeds, starch accounts for 55 to 75% of the total dry grain weight, and storage proteins account for 10 to 15%. Both starch and protein have a significant impact on the quality of products made from flour. Optimal starch and protein contents, together with the right levels of essential nutrients (iron, zinc, calcium, phosphorus, and antioxidants), are indispensable for healthy wheat products. In recent years, most of these quality traits have been studied using genetic modification approaches.
The starch fraction, which comprises about 70% of the total dry matter of wheat grains [165], can have a significant impact on products made from wheat kernels. For example, the quality of noodles manufactured from flour depends primarily on the characteristics of the starch [166]. The physicochemical properties and end uses of wheat starch are related to starch structure and the distribution of the two major glucan macromolecules, amylose and amylopectin [167]. A clear strategy for modifying the properties of wheat starch by genetic engineering involves changing the expression levels of starch biosynthetic genes.
Wheat starch granules accumulate at least three types of starch synthase (SS) with molecular masses of about 60, 77, and 100 to 115 kDa [159]. Most SS activity appears to reside in the soluble fraction of the endosperm [168]. Several genetic studies of mutants lacking the 60 kDa granule-bound SS (GBSS1; waxy protein) in cereals strongly suggest its role in amylose synthesis [158,169].
Transient starch produced in photosynthetic tissue may not be fully comparable to the starch that accumulates in sink organs, but the same enzymatic functions are believed to be involved in starch biosynthesis in both locations [93]. Genes encoding starch biosynthetic enzymes can affect the fine structure of starch through differences in spatial and temporal regulation, substrate specificity, concentration, and movement. To further understand starch biosynthesis in wheat, molecular strategies to change the expression levels of starch biosynthesis genes have been adopted in combination with plant breeding techniques [167]. The new information collected from hybrid and transgenic plants is expected to help in understanding wheat starch biosynthesis and in developing novel starches for commercial use.
Traditional breeding techniques or genetic modification can be used to produce novel starches with modified properties [170]. Using genetic modification techniques, high-amylose starch (up to 70% amylose content) and waxy starch (99-100% amylopectin content) have been produced [171]. Starches with altered amylopectin structure have also been produced by adjusting the phosphate content and granule size. Currently, wheat transformation research using plant genetic engineering technology aims to continuously control and analyze the characteristics of wheat starch (Table 3). RNA interference (RNAi), a common regulatory mechanism of gene expression in eukaryotic cells, is a powerful tool for functional gene analysis and the engineering of novel phenotypes. This technique directs gene silencing after transcription in a sequence-specific manner based on the expression of antisense or hairpin RNAi constructs, or other forms of short interfering RNA molecules. The application of RNAi has contributed to the manipulation of wheat particle size [185,186] and quality [187,188]. The NAC gene that controls senescence improves the grain protein, zinc, and iron content of wheat [186]. The ancestral wild wheat allele encodes a NAC transcription factor (NAM-B1) that accelerates senescence, while modern wheat varieties carry a non-functional NAM-B1 allele. Reduction in the RNA levels of multiple NAM homologues by RNAi delayed senescence by more than 3 weeks and reduced wheat grain protein, zinc, and iron content by more than 30%. An RNA interference expression vector for TaCKX2.4 was constructed and transformed into bread wheat NB1, and the number of grains per spike was improved owing to RNAi of the cytokinin oxidase 2 (CKX2) gene in the transgenic line [187]. That is, the expression level of TaCKX2.4 was negatively correlated with the number of grains per spike, and the number of grains per spike increased in wheat with decreased TaCKX2.4 expression. Silencing of the waxy gene by an RNAi strategy reduced amylose levels in transgenic wheat seeds [188]. Iodine staining and amylose content analysis confirmed that the level of amylose in the endosperm of these transgenic seeds was significantly reduced. In addition, RNAi was used to suppress the expression of the 1Dx5 high-molecular-weight glutenin subunit, resulting in a transgenic wheat line [189]. Silencing of 1Dx5 expression significantly reduced flour processing quality based on farinograph, gluten, and Zeleny tests. Consequently, RNAi was found to be useful for silencing HMW-GS genes.
Silencing of the SBEIIa gene increased the amylose content in durum wheat [184]. The starch granules of these transgenic lines were deformed, irregular, and constricted in shape, and smaller than those of the unmodified control. In durum wheat, silencing of the SBEIIa gene causes changes in granule morphology and starch composition, resulting in high-amylose wheat. High-amylose durum wheat has also been produced through mutagenesis of starch synthase II (SSIIa or SGP-1) [182]. Therefore, high-amylose durum may be useful for making pasta with increased elasticity and a reduced glycemic index. An EMS-induced mutant population of bread wheat (T. aestivum) was developed for amylose and resistant starch mutations, and candidate genes responsible for the amylose mutation were identified [173].
Starch composition, structure, and properties were modified through editing of TaSBEIIa in both winter and spring wheat varieties using CRISPR/Cas9 [177]. TaSBEIIa determines starch composition, structure, properties, and end-use quality across a variety of genetic backgrounds. Editing this gene also improves RS content, demonstrating how genome editing can deliver health benefits through multiple breeding and end-use applications in grain crop species. Novel NAC transcription factors, TaNAC019-A1 (TraesCS3A02G077900) and NAC019-A1, negatively regulate starch synthesis in wheat and rice (Oryza sativa L.) endosperm and provide new insights into improving wheat yield (citation). TaMTL was edited using an optimized Agrobacterium-mediated CRISPR system to efficiently induce haploid plants in wheat [174]. Two endogenous genes, TaWaxy and TaMTL, were edited with high efficiency by the optimized SpCas9 system, and the highest efficiency (80.5%) was achieved when targeting TaWaxy using TaU3 and two sgRNAs.
Following genetically modified organisms (GMOs), the era of 'gene-edited crops' is coming. The United States, Canada, Israel, Japan, and Australia have already begun to approve the production of gene-edited crops. Targeted gene editing, especially CRISPR/Cas9, is a tool with significant potential for plant development and breeding [190,191]. Gene-edited crops are crops in which DNA is deleted or inserted by techniques such as gene scissors to improve genetic traits within the organism, without a foreign gene remaining in the final product. This technique can be used to enhance the beneficial nutrients of a crop or to remove undesirable components. Gene editing involves a transient step in which foreign DNA (encoding a zinc finger nuclease, TALEN, or Cas9 plus a guide RNA for CRISPR/Cas9) or protein is introduced into the plant genome or plant cell to enable editing of a target gene [192]. The foreign DNA is segregated away in the next generation and is not present in the final gene-edited line or final product. To address these issues, several approaches must be combined, and, almost certainly, edited genes from different lines must be combined through crosses and selection within breeding programs. The safety and quality of grains screened and produced during these breeding programs must also be determined under stringent regulations. Additionally, the advent of genome editing has sparked enthusiasm but, at the same time, has generated controversy and raised regulatory and governance concerns around the world. In gene-editing research, human embryos are subject to strict regulation due to ethical concerns, which poses challenges to research activities [193,194]. As agriculture faces major challenges in providing food and nutritional security, producing more food sustainably requires the development of crops that will significantly contribute to the achievement of several sustainable development goals [195]. In the case of plants, since ethical issues are less significant, more flexible regulation could be applied. Moreover, transgene-free genome-edited plants can easily be generated using ribonucleoproteins (RNP) or Mendelian segregation [196,197]. Therefore, if policy and governance issues are addressed at national and international levels, plant genome editing can play a key role in developing useful crops, alongside rapid scientific progress.
Kernel hardness, a quality characteristic of common wheat (T. aestivum L.), is primarily regulated by the Pina and Pinb genes. Mutation or deletion of Pina or Pinb increases kernel hardness, resulting in hard wheat kernels. Transformation of Pinb-D1x into soft wheat using bombardment technology produces a hard kernel texture [179]. According to data from the single kernel characterization system and scanning electron microscopy, the introduction of Pinb-D1x into the soft wheat line significantly increased kernel hardness and changed the internal structure of the kernel. The low-molecular-weight glutenin subunit LMW-N13 improved the dough quality of transgenic wheat generated using Agrobacterium-mediated technology [175]. To analyze the contribution of LMW-N13 to dough quality, three transgenic wheat lines overexpressing LMW-N13 were generated. Compared to the non-transgenic (NT) line, the transgenic (TG) lines showed excellent dough properties, along with higher glutenin macropolymer (GMP) and total protein contents.
Conclusions
Wheat starch is an important by-product of gluten production, and wheat endosperm is composed of about 70% starch, so differences in the quality and quantity of starch affect the flour processing properties. Wheat starch, in particular, is the main storage carbohydrate, accounting for about 60 to 75% of the grain and 70 to 80% of the flour. In plants, starch is a major storage compound present in both photosynthetic and storage organs and is synthesized through a complex biosynthetic process. Normal starch is rapidly digested and absorbed as glucose. During digestion, the human body mounts a hyperglycemic response; to relieve this, insulin is secreted and blood glucose falls again. If this process is repeated, the body is more likely to be exposed to various diseases such as obesity and diabetes. Recently, many studies on resistant starch (RS), a type of starch, have shown that it is not easily decomposed by digestive enzymes in the body [198]. Resistant starch acts similarly to dietary fiber, providing nutrients to intestinal bacteria. Cereals high in amylose content (AC) and resistant starch (RS) have potential health benefits [30,33]. Grains with higher AC are good sources of RS [199]. Grains high in RS are reported to help improve human health and reduce the risk of serious non-infectious diseases [172]. Currently, there is an increasing need to develop crops with high RS to address the rapidly growing nutritional challenges for public health [172,200,201]. Amylose and amylopectin are synthesized through two different pathways. Amylose synthesis requires active granule-bound starch synthase (GBSS), whereas amylopectin synthesis is a complex pathway involving other isoforms, including starch synthase (SS), starch branching enzyme (SBE), and starch debranching enzyme (SDBE) [202]. Starch synthesis can be directed toward amylose production by overexpressing the appropriate GBSS (Waxy) allele to further increase AC [203,204] or by inhibiting the expression of enzymes involved in amylopectin biosynthesis [172,205,206]. In addition to studies of wheat starch properties, many researchers are providing new and efficient techniques that can improve the quality of bread wheat using molecular marker technology. Starch and protein have a major impact on the quality of flour products, and optimal starch and protein contents, together with the right levels of essential ingredients (iron, zinc, calcium, phosphorus, and antioxidants), are required to produce healthy wheat products. Wheat starch metabolism should be studied at the anthesis and maturity stages. Considering climate change in Korea and around the world, the stresses that most affect starch metabolism are high temperature and drought. Several studies, such as those on the relevant enzymes and metabolism, have been carried out regarding these stresses. When temperatures are elevated between anthesis and grain maturity, grain yield is reduced due to the reduced time available to capture resources. Among grain components, starch is especially sensitive to heat and drought stress compared to storage proteins. Both heat and drought stresses, which have a great influence on the grain filling period and anthesis, have a negative effect on starch synthesis. To improve wheat yield and quality under heat and drought stress, a thorough study of grain weight stability in terms of starch components and enzymes is needed.
The activities of enzymes in the sucrose and starch biosynthetic pathways can be altered by stress, and changes to sucrose-metabolizing enzyme activities also modify sucrose metabolism. Heat and drought stress suppress starch deposition by lowering the activities of enzymes involved in starch synthesis, such as sucrose phosphate synthase, sucrose synthase, ADP-glucose pyrophosphorylase, glucokinase, soluble starch synthase, and starch branching enzyme. Therefore, it is important to elucidate the mechanism of wheat starch synthesis in response to drought and high-temperature stress during the grain filling period. In recent years, many studies have shown that most of these quality traits can be improved through genetic modification. The new information collected from hybrid and transgenic plants is expected to help in understanding wheat starch biosynthesis and in developing novel starches for commercial use. In addition, traditional breeding and genetic modification can be used together to produce new starches with modified properties. However, mutations induced by chemical or physical (radiation) mutagens can be accompanied by undesirable and uncharacterized mutations across the whole genome [207,208]. Furthermore, RNAi-mediated interference of gene expression is often incomplete, and transgene expression varies among different lines. In addition, transgenic lines are considered genetically modified and must undergo a costly and time-consuming regulatory process [209]. Currently, the main purpose of wheat transformation research using plant genetic engineering technology is to continuously control and analyze the properties of wheat starch.
|
2021-10-28T15:18:45.783Z
|
2021-10-25T00:00:00.000
|
{
"year": 2021,
"sha1": "f69b0706452fc9f9a9b51bd3a423b4545e2a6760",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/10/11/2282/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "69303df336db44cc6617874377ae61c1d66fa904",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences",
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
265160551
|
pes2o/s2orc
|
v3-fos-license
|
Romantic Love and Behavioral Activation System Sensitivity to a Loved One
Research investigating the mechanisms that contribute to romantic love is in its infancy. The behavioral activation system is one biopsychological system that has been demonstrated to play a role in several motivational outcomes. This study was the first to investigate romantic love and the behavioral activation system. In study 1, the Behavioral Activation System—Sensitivity to a Loved One (BAS-SLO) Scale was validated in a sample of 1556 partnered young adults experiencing romantic love. In study 2, hierarchical linear regression was used to identify BAS-SLO Scale associations with the intensity of romantic love in a subsample of 812 partnered young adults experiencing romantic love for two years or less. The BAS-SLO Scale explained 8.89% of the variance in the intensity of romantic love. Subject to further validation and testing, the BAS-SLO Scale may be useful in future neuroimaging and psychological studies. The findings are considered in terms of the mechanisms and evolutionary history of romantic love.
Introduction
Research investigating the mechanisms that contribute to romantic love is in its infancy. The behavioral activation system (BAS) is one biopsychological system that has been demonstrated to play a role in several motivational outcomes. To our knowledge, no studies have investigated the role the BAS may play in romantic love. Using a biological conceptualization of romantic love, we develop a means of assessing BAS sensitivity to a loved one and assess its association with the intensity of romantic love. The result is the formulation of a new means of assessing one biopsychological system that may contribute to the expression of romantic love.
Romantic Love
The topic of love in romantic relationships is riddled with definitional inconsistency and ambiguity. Sociological [1,2], anthropological [3], psychological [4,5], and biological [6] conceptions of love in romantic relationships all have their own terminology and formulations. While in many such disciplines it is common to refer to all types of love within romantic relationships as "romantic love," the biopsychological focus of this article leads us to choose a different approach. In the discipline of biology, "romantic love" tends to refer to the period of intense feelings that often accompanies the early stages of romantic relationships [6,7]. As such, we use the term "romantic love" to refer to a motivational state associated with a range of reproductive functions, including mate choice, courtship, sex, and pair bonding [6] (p. 21). It is the basis of long-term romantic relationships and family formation throughout much of the world. It is associated with a range of cognitive, emotional, and behavioral activities in both sexes. It is sometimes referred to as "passionate love" in certain areas of psychology [8]. The expression of romantic love is partly socially or culturally influenced, and differences in its presentation are found across cultures (e.g., [1][2][3][9][10][11][12]).
Cognitive activity of romantic love includes intrusive thinking or preoccupation with the partner, idealization of the other in the relationship, and desire to know the other and to be known. Emotional activity includes attraction to the other, especially sexual attraction, negative feelings when things go awry, longing for reciprocity, desire for complete union, and physiological arousal. Behavioral activity includes actions toward determining the other's feelings, studying the other person, service to the other, and maintaining physical closeness.
Romantic love often happens at the early stages of a romantic relationship (referred to as early-stage romantic love) and usually lasts months or years (see [13,14]) but can sometimes last many years or decades (referred to as long-term romantic love) [15][16][17]. The psychological characteristics of both types of romantic love are similar, except that long-term romantic love is not characterized by intrusive thinking or preoccupation with the partner [15,16]. The neural mechanisms that cause each type of romantic love are similar but are not identical.
Romantic love is most strongly associated with neural activity in systems associated with reward and motivation (e.g., ventral tegmental area, nucleus accumbens, amygdala, and medial prefrontal cortex), emotions (e.g., amygdala, anterior cingulate cortex, and the insula), sexual desire and arousal (e.g., caudate, insula, putamen, and anterior cingulate cortex), and social cognition (e.g., amygdala, insula, and medial prefrontal cortex), as well as higher-order cortical brain areas that are involved in attention, memory, mental associations, and self-representation [6,18]. Functional connectivity is increased in people experiencing early-stage romantic love within the reward, motivation, and emotion regulation network (dorsal anterior cingulate cortex, insula, caudate, amygdala, and nucleus accumbens) as well as the social cognition network (temporo-parietal junction, posterior cingulate cortex, medial prefrontal cortex, inferior parietal, precuneus, and temporal lobe) [19]. Early-stage romantic love is also associated with lower network segregation and altered connectivity degree [20] and with the endocrinal activity of sex hormones, serotonin, dopamine, cortisol, oxytocin, and nerve growth factor [6]. To our knowledge, no research has investigated the endocrinological correlates of long-term romantic love.
The Behavioral Activation System (BAS)
One biological mechanism that is thought to play a role in the promotion of behavior is the BAS. This system is believed to be associated with dopaminergic reward and motivation circuitry [21][22][23][24]. The BAS works as a system that involves both inputs and outputs. Inputs are stimuli that serve as cues for goal-directed behavior. They include life events involving goal salience or goal attainment. Behavioral activation system outputs include motor activity, energy, confidence, interest, pleasure in rewards, and, potentially, sociability and exploration. The general outputs of the BAS have been compared with symptoms of mania, including initiation of locomotor activity, activity and exploration, and anger (see [25]).
The BAS and Romantic Love
People experiencing romantic love display a range of cognitions, emotions, and behaviors suggestive of heightened BAS activity. These include increased reward valuation, willingness to expend effort to gain reward, heightened initial hedonic response to success in the form of learning deficits, and lack of satiety in response to success (see [25]).
People experiencing romantic love demonstrate an increased reward valuation of the loved one. The loved one takes on a "special meaning" [26] (p. 32). The perception of the loved one changes, and idealization ensues, as does the belief that the loved one is the "perfect romantic partner" [10] (p. 391) for them and that their loved one satisfies their preferred standards of physical attractiveness [27] (p. 395). The loved one becomes the most important person in their life.
People experiencing romantic love appear to demonstrate a willingness to expend effort to gain reward. Romantic lovers often engage in courtship (see [6] for a review of the costs and benefits of courtship among people experiencing romantic love), which involves a series of signals and behaviors that serve as a means of assessing potential partner quality and willingness to invest in a relationship [28,29]. People experiencing romantic love are also willing to reorder daily priorities, make themselves available to their loved one, and take steps to make themselves desirable to their loved one by changing their "clothing, mannerisms, habits, or values" [26] (p. 33).
Some people experiencing romantic love may demonstrate some aspects of heightened initial hedonic response to success in the form of learning deficits. The most cogent example of this is instances of obsessive pursuit (usually committed by men), which occur in the absence of rewarding interaction from the loved one. Men in particular, but not exclusively, have a tendency to misinterpret politeness or friendliness as sexual interest from potential sexual partners (see [30] for review). Such a false positive bias is potentially present in people experiencing romantic love and can result in repeated attempts by an individual to court a loved one despite obvious indications that such efforts will be fruitless. That both females and males can be subject to ineffective courting demonstrates the potent motivational effect romantic love can have on both sexes. This is one BAS sensitivity component that warrants further investigation in people experiencing romantic love.
People experiencing romantic love demonstrate a lack of satiety in response to success. For example, even when an individual in love feels emotionally close to their loved one, there can be a desire to be even closer. A sense of avolition and uncontrollability is a feature of romantic love [26] (p. 33). This is evidenced by an individual reordering their daily activities to spend increasingly long periods with their loved one and, in the modern environment, by obsessive monitoring of the loved one's social media pages. More generally, people experiencing romantic love experience heightened affect, confidence, and increased energy over prolonged periods, as indicated by the hypomanic symptoms found to be present in adolescents experiencing romantic love reported by Brand and colleagues [31].
Salience of Loved One-Related Stimuli and the BAS
There is evidence that when an individual is in love, the loved one takes on a special meaning [26].This can be considered in terms of loved one-related stimuli having increased salience, probably as a result of oxytocin activity in one or more motivation pathways [32] (see also [33]).This has been demonstrated empirically in terms of memory and attention [34], as well as the heightened BAS sensitivity characteristics of romantic love detailed above.Because the BAS is situated within a motivational system, we believe that this salience of the loved one and loved one-related stimuli means the BAS probably responds in a particularly sensitive manner to loved one-related stimuli.
This heightened salience of loved one-related stimuli among individuals experiencing romantic love suggests that BAS sensitivity, somewhat analogous to anxiety (see [35]), may exist in a trait and state form.General BAS sensitivity may be relatively stable and influence behavior over the life course in a consistent manner.This is a type of trait BAS sensitivity.There are also periods when the BAS may become particularly sensitive, such as during a manic episode (see [25]), or in relation to a particular person, such as in circumstances of romantic love.This is a type of state of BAS sensitivity.The foci of the current studies are this state of BAS sensitivity that is characteristic of romantic love.
The reward-responsiveness subscale assesses the tendency to respond to rewards with energy and enthusiasm, the drive subscale assesses motivation to pursue goals, and the fun-seeking subscale assesses the tendency to pursue positive experiences without regard to potential threats or costs [36] (see [25] for a summary of findings in relation to BAS Scale subscales and bipolar disorder).It seems feasible that all three subscales could contribute to aspects of romantic love, as the BAS responds to loved one-related stimuli.
The Current Studies
This is the first attempt to investigate the Behavioral Activation System and romantic love. As a result, we undertake preliminary work to shed light on the relationships between these two constructs. We amended the BAS Scale to assess BAS Sensitivity to a Loved One (BAS-SLO; described below). In Study 1, we validate the BAS-SLO Scale. This was a necessary step in developing an initial understanding of the relationship between the behavioral activation system and romantic love. We used confirmatory factor analysis to assess the suitability of three factor structures: (i) a one-factor model; (ii) a three-factor, 13-item structure; and (iii) a three-factor, 11-item structure. We determined that a three-factor, 11-item structure possessed the best goodness of fit. We calculated Cronbach's alphas to test internal reliability and correlated subscales with a related measure to assess convergent validity for this structure. In Study 2, we tested the hypothesis that the BAS-SLO Scale would be positively associated with the intensity of romantic love. Findings are considered within an evolutionary framework, which helps elucidate the mechanisms and evolutionary history of romantic love.
Participants
Participants were 1556 English-speaking young adults who self-identified as being in love, drawn from the Romantic Love Survey 2022 [45]. Appendix A presents the characteristics of participants used in Study 1 and the country of residence of participants. Our reporting of sample characteristics largely follows Bode and Kowal [7].
Measures
The Behavioral Activation System Sensitivity to a Loved One (BAS-SLO) Scale was created by amending each item of the BAS Scale to relate to an individual's loved one or relationship with their loved one. Participants were asked, "Indicate how much the following applies to you". Responses were scored on a four-point scale (1 = very true for me; 4 = very false for me). Scores for each item are reverse coded, and subscale scores are summed. Table 1 presents the original BAS Scale items and the BAS-SLO Scale items for each subscale.
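As a concrete illustration of this scoring procedure, a minimal sketch is given below. The data frame, column names, and the assignment of items to subscales are placeholders (the original scale has five reward-responsiveness, four drive, and four fun-seeking items), not the study's actual variables.

```r
# Illustrative BAS-SLO scoring: reverse code the four-point responses
# (1 <-> 4, 2 <-> 3) and sum items within each subscale.
items <- paste0("bas_slo_", 1:13)
dat[items] <- 5 - dat[items]                         # reverse coding
reward <- rowSums(dat[, paste0("bas_slo_", 1:5)])    # reward responsiveness
drive  <- rowSums(dat[, paste0("bas_slo_", 6:9)])    # drive
fun    <- rowSums(dat[, paste0("bas_slo_", 10:13)])  # fun-seeking
```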
We also used the Passionate Love Scale-30 (PLS-30) to assess the convergent validity of the BAS-SLO Scale. The PLS-30 is a 30-item measure of the cognitive, emotional, and behavioral characteristics of romantic love. Each item records scores by assessing agreement with statements on a nine-point Likert scale (1 = not at all true; 9 = definitely true). It is the most commonly used measure of romantic love in biological studies of romantic love [7]. Cronbach's alpha for the PLS-30 in this sample was 0.944. We conducted a confirmatory factor analysis (CFA) on the 13 items of the BAS-SLO Scale using techniques/suggestions from a guidance paper [46] and predicted a three-factor solution in line with the original BAS Scale factor structure [36]. A CFA using a one-factor solution was also conducted, as there is some literature suggesting that the BAS can be explained by a single factor [38,39,43]. At the suggestion of one reviewer, following an initial round of peer review, we then conducted another three-factor CFA of 11 items from the proposed BAS-SLO (removing two poorly loaded items; reward responsiveness item 5 and fun-seeking item 3).
A weighted least square mean and variance adjusted (WLSMV) method of confirmatory factor analysis was used as the data were ordinal [47]. The comparative fit index (CFI), root-mean-square error of approximation (RMSEA), and standardized root-mean-square residual (SRMR) were used to assess the appropriateness of all three models in accordance with common practice (see [46]).
The following criteria, based on work by Hu and Bentler [48] and the model CFA example by Knekta and Runyon [46], were used to assess the adequacy of the model: CFI > 0.95 (although 0.90 is required to ensure mis-specified models are not deemed acceptable), RMSEA < 0.06, and SRMR < 0.08. Internal reliability was assessed by calculating Cronbach's alpha for each BAS-SLO subscale. Values of >0.70 were considered acceptable (see [49]). We assessed convergent validity by correlating BAS-SLO Scale subscales (i.e., reward responsiveness, drive, and fun-seeking) with the PLS-30 and the amended HCL-32. Factor loadings, covariances, and goodness of fit indices were calculated using the lavaan package for R version 4.2.2 in R Studio. The CFA diagram was created in AMOS version 26. Convergent validity analyses were conducted using SPSS version 27.
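A minimal sketch of this model specification and estimation in lavaan is given below; the item names (r1-r5, d1-d4, f1-f4) and the data-frame name are placeholders rather than the study's actual variable names.

```r
library(lavaan)

# Three-factor CFA of the 13 BAS-SLO items, estimated with WLSMV because the
# responses are ordinal; placeholder item names are used throughout.
model <- '
  reward =~ r1 + r2 + r3 + r4 + r5
  drive  =~ d1 + d2 + d3 + d4
  fun    =~ f1 + f2 + f3 + f4
'
fit <- cfa(model, data = bas_slo, ordered = names(bas_slo), estimator = "WLSMV")
fitMeasures(fit, c("cfi", "rmsea", "srmr"))   # compare against the cut-offs above
standardizedSolution(fit)                     # factor loadings and correlations
```

The one-factor and 11-item variants reported below follow the same pattern, with the model string changed accordingly.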
Results
No items from the BAS-SLO were missing data. Two cases were missing data for the PLS-30. These two cases were not included in the correlation analysis. Table 2 presents the means, standard deviations, skewness statistics, and kurtosis statistics for the 13 items of the BAS-SLO. Most of the data were moderately skewed, but this was deemed acceptable as the robust maximum likelihood method has been shown to be robust against violations of normality (see [47]).
Three-Factor, 13-Item Model
Results from the three-factor, 13-item CFA indicated that, in our sample, the model had adequate but not good psychometric properties (see Appendix B for a summary table of goodness of fit statistics for all models). CFI was 0.944, indicating an acceptable (but not quite good) fit. RMSEA was 0.055, indicating good fit. SRMR was 0.041, indicating good fit. Factor loadings ranged from 0.44 to 0.80, with the majority above 0.60 (see Appendix C), suggesting that the factors explained most of the items reasonably (but not very) well. Factors correlated with each other from 0.40 (drive and fun-seeking) to 0.66 (reward responsiveness and fun-seeking), suggesting the discriminant validity was acceptable. Two items (R5 and F3) loaded poorly onto the reward responsiveness and fun-seeking factors (0.44 and 0.52, respectively). Appendix C presents the results of the three-factor, 13-item CFA.
One-Factor, 13-Item Model
Results from the one-factor, 13-item CFA indicated that, in our sample, the model had very poor psychometric properties. CFI was 0.709, indicating very poor fit. RMSEA was 0.121, indicating very poor fit. SRMR was 0.086, indicating poor fit. Because this model had very poor fit, we do not report further on the results.
Three-Factor, 11-Item Model
Because R5 and F3 loaded substantially lower than all the other items in the three-factor, 13-item CFA, we removed these items and ran another three-factor CFA, this time with 11 items. Results indicated that, in our sample, the three-factor, 11-item model had good psychometric properties, but loadings were not generally improved from the three-factor, 13-item CFA. CFI was 0.966, indicating good fit. RMSEA was 0.048, indicating good fit. SRMR was 0.037, indicating good fit. Factor loadings ranged from 0.55 to 0.80, with the majority above 0.60 (see Figure 1), suggesting that the factors explained most of the items reasonably (but not very) well. Factors correlated with each other from 0.40 (drive and fun-seeking) to 0.68 (reward responsiveness and fun-seeking), suggesting the discriminant validity was acceptable. Figure 1 presents the results of the three-factor, 11-item CFA.
Internal Reliability
Cronbach's alpha for the three-factor, 11-item BAS-SLO Scale was 0.725 for reward responsiveness, indicating acceptable internal reliability; 0.786 for drive, indicating acceptable internal reliability; and 0.629 for fun-seeking, indicating marginally questionable internal reliability. Cronbach's alphas for all subscales aligned closely with those of the original BAS subscales (reward responsiveness = 0.73, drive = 0.76, fun-seeking = 0.66; [36]) and with subsequent studies (e.g., [37,39]).
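A hedged sketch of this reliability check, computing Cronbach's alpha directly from its definition with placeholder column names, is shown below.

```r
# Cronbach's alpha from its definition:
# alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale).
cronbach_alpha <- function(items) {
  k <- ncol(items)
  (k / (k - 1)) * (1 - sum(apply(items, 2, var)) / var(rowSums(items)))
}
cronbach_alpha(bas_slo[, c("d1", "d2", "d3", "d4")])   # e.g. the drive subscale
```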
Convergent validity was assessed by correlating each of the BAS-SLO Scale subscales with the PLS-30. We anticipated that each BAS-SLO Scale subscale would correlate highly with the PLS-30. Table 3 presents the correlations between the BAS-SLO Scale subscales and the PLS-30. PLS-30 had a large association with reward responsiveness and a medium association with drive and fun-seeking. This suggests good convergent validity.
Discussion
Study 1 reported three CFAs of the BAS-SLO Scale. A three-factor model for the BAS-SLO Scale with 11 items that aligned with the three factors of the original BAS Scale (reward responsiveness, drive, and fun-seeking) was deemed to be an appropriate model by CFA, as well as the reliability and convergent validity analyses. This is especially the case when considered in light of the psychometric properties of the original BAS Scale and subsequent studies indicating a three-factor model of the BAS Scale utilizing confirmatory factor analysis (e.g., [42][43][44]). Indices of fit generally supported the notion of an acceptable model with good fit. Factor loadings were lower than would be ideal, suggesting the factors did not explain the data well. Correlations among the factors suggest the discriminant validity of the BAS-SLO is moderately low. Internal reliability of the subscales ranged from marginal to acceptable. This is in line with alphas for these three subscales in previous studies [42][43][44]. Correlations between subscales of the BAS-SLO Scale and the PLS-30 were roughly as expected, suggesting good convergent validity. In sum, we think the BAS-SLO is a measure that could be used in studies investigating BAS sensitivity to a loved one and romantic love, as well as a range of other related phenomena. Appendix D presents the final items of the proposed BAS-SLO Scale.
Participants
Participants were a subsample of Study 1 participants: 812 English-speaking young adults who self-identified as being in love from the Romantic Love Survey 2022 [45]. Participants who had been in love for 23 months or less and scored above 130 on the PLS were included in the analysis. Two years is a likely period of time in which individuals experience early-stage romantic love rather than long-term romantic love (see [6]). Two cases were missing one data point, and these cases were removed. One intersex participant was removed. Appendix E presents the characteristics of participants used in Study 2 and the country of residence of the participants. Our reporting of sample characteristics largely follows Bode and Kowal [7].
Measures
Behavioral Activation System sensitivity to a loved one was measured using the three subscales of the 11-item BAS-SLO Scale validated in Study 1. Intensity of romantic love was measured using the Passionate Love Scale (PLS-30; [10]; described in Study 1).Sex was measured using a simple question asking, "What is your biological sex?" Data were coded as 1 (female) or 2 (male).Some studies have suggested that females experience romantic love marginally more intensely than males [50,51].Love in romantic relationships has been thought to follow a specific trajectory of intensity related to intimacy, passion, and commitment [52].As such, the length of time an individual has been in love may be associated with the waxing or waning intensity of romantic love.Months in love was assessed by asking participants how long they had been in love with their loved one.Obsessive thinking is definitive of early-stage romantic love (see [10,53,54]) and one proposed biological component of romantic love [33].It therefore follows that it may have a direct influence on the intensity of romantic love.Percent of time thinking about a loved one (obsessive thinking) was measured by asking participants, "What percentage of your waking hours do you spend thinking about the person you love?" Responses were on a scale from 0% to 100%.Commitment was measured by using five items from the TLS-15 commitment subscale [55] but with a nine-point scoring approach.Each item records scores by assessing agreement with statements ranging from 1 (not at all) to 9 (extremely).Romantic love is believed to serve as a commitment device [56,57] and, therefore, may have a direct association with the intensity of romantic love.
Procedure
To test the hypothesis that BAS sensitivity to a loved one would predict romantic love, we undertook a hierarchical linear regression whereby the BAS-SLO Scale predicted PLS-30 scores. Step 1 included controls. Step 2 included controls and each of the three BAS-SLO Scale subscales.
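A hedged sketch of this two-step model is given below; the variable names are illustrative stand-ins for the measures described above, not the study's actual columns.

```r
# Step 1: control variables only; Step 2: controls plus the three BAS-SLO
# subscales. The change in adjusted R^2 and the F-test for Step 2 correspond
# to the quantities reported in the Results.
step1 <- lm(pls30 ~ sex + months_in_love + obsessive_thinking + commitment,
            data = dat)
step2 <- update(step1, . ~ . + bas_reward + bas_drive + bas_fun)
summary(step2)$adj.r.squared - summary(step1)$adj.r.squared  # added variance explained
anova(step1, step2)                                          # significance of Step 2
```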
Results
Table 4 reports the correlations among all variables used in Study 2 analyses and their descriptive statistics. Our hypothesis predicted that the BAS-SLO Scale would be positively associated with the intensity of romantic love. To test this hypothesis, we undertook a hierarchical linear regression whereby the BAS-SLO Scale predicted PLS-30 scores after controlling for sex, months in love, obsessive thinking, and commitment. All assumptions for linear regression were met. The hierarchical linear regression predicting the intensity of romantic love revealed that, at Step 1, control variables contributed significantly to the regression model, with F(6, 805) = 166.987 and p < 0.001, and accounted for 45.02% of the variance in intensity of romantic love. Adding the BAS-SLO Scale to the regression model (Step 2) explained an additional 8.89% of the variation in the intensity of romantic love, and this change in adjusted R² was significant; F(3, 802) = 136.519 and p < 0.001. Each individual BAS-SLO Scale subscale contributed significantly to the model (reward responsiveness, p < 0.001; drive, p < 0.001; and fun-seeking, p = 0.017). Table 5 presents the regression statistics for this analysis.
Discussion
Study 2 used the BAS-SLO Scale to examine the associations between BAS sensitivity to a loved one and the intensity of romantic love in young adults experiencing romantic love for less than two years. We hypothesized that BAS sensitivity to a loved one would be positively associated with the intensity of romantic love. Our hypothesis was confirmed: the BAS-SLO Scale explained 8.89% of the variance in the intensity of romantic love (measured by the PLS-30). This amounts to a medium effect [58] of BAS sensitivity to a loved one on the intensity of romantic love. All three subscales contributed significantly to the model, and this suggests that the BAS plays a role in romantic love. That all three subscales contributed to the model raises the question as to whether each subscale contributes to specific components that characterize the intensity of romantic love in the PLS-30.
The findings of Study 2 are important because they demonstrate that the BAS-SLO Scale may be useful in investigating romantic love and provide the first evidence that the BAS plays a role in romantic love.The findings suggest that future studies may be able to identify the unique components of romantic love caused by the BAS and its state of sensitivity to a loved one.Future studies could use the BAS-SLO Scale to predict individual features of the intensity of romantic love.The use of the BAS-SLO Scale could also potentially be extended to investigate aspects of established pair bonds and relationships characterized by pair bond maintenance and not characterized by the presence of pair bond formation and romantic love (see [33]).Further, the BAS-SLO Scale's use could be combined with fMRI analyses to identify the neurobiological components of the BAS and their contribution to romantic love.
This study is not without limitations, however.The sample is constituted entirely of young adults in the first two years of romantic love.As a result, the sample is neither representative of the entirety of the human population who experiences romantic love nor the entire spectrum of romantic love (see [7] for issues of generalizability).Further, the analysis was undertaken on a subsample of that used to validate the Scale in Study 1.This limits the implications of the findings, given that the Scale may possess different properties in a different sample.Nonetheless, the study has demonstrated the potential usefulness of the BAS-SLO Scale and provided the first evidence that the BAS plays a role in romantic love.
General Discussion
This article presents the first direct evidence of the relationships between BAS sensitivity and romantic love.Study 1 demonstrated that it is possible to measure BAS sensitivity to a loved one.Study 2 demonstrated that this means of measurement can be useful in empirical studies investigating the relationship between the BAS and romantic love.Combined, these two studies suggest that BAS sensitivity to a loved one is a real phenomenon and that the state of romantic love is probably associated with BAS sensitivity to a loved one.This has implications for understanding the mechanisms and evolutionary history of romantic love (see [59]).
The reason the BAS can be particularly sensitive to a loved one may relate to the concept of salience.Froemke and Young [32] have suggested that oxytocin acts on motivation pathways to increase the salience of specific social stimuli.In humans, this may take place in the ventral tegmental area (VTA).The VTA is consistently implicated in fMRI studies of people experiencing romantic love (see [6,60,61]).Although void of oxytocin receptors, the human VTA has been identified as the area in which oxytocin attaches salience to socially rewarding cues [62].This increased salience probably results in further up-regulation of dopamine pathways, presumably including those that characterize the activity of the BAS.This supports Bode's [33] contention that the bonding attraction system in romantic love is characterized by both oxytocin and dopamine activity, among other factors.
Several studies have shed light on the neural structures associated with BAS sensitivity (assessed with the BAS Scale) in normal samples.BAS sensitivity has been associated with activity in the VTA-nucleus accumbens pathway and the orbitofrontal cortex [21], and BAS reward responsiveness has been associated with lateral prefrontal cortex, anterior cingulate cortex, and ventral striatum [22] in healthy samples.Interestingly, BAS drive has been associated with less activity in the putamen, caudate, and thalamus, and BAS reward responsiveness has been associated with increased activity in the left precentral gyrus in response to different intensities of infant cries among mothers [23].Variation in regional gray matter volume in the ventromedial prefrontal cortex and inferior parietal lobule has also been associated with BAS Scale scores [24].There is also evidence that reward network glutamate levels contribute to individual differences in BAS reward responsiveness [63].These structures generally overlap with those found in romantic love (see [6]).
Knowledge about the neural structures associated with BAS sensitivity, their overlap with the structures associated with romantic love, and now, a means of measuring BAS sensitivity to a loved one provide the means of measuring specific bio-psychological mechanisms that likely contribute to romantic love.Functional magnetic resonance imaging studies can begin to isolate the specific contribution of the BAS to the intensity of romantic love or specific features of romantic love.The implications of the studies reported in this article extend beyond a better understanding of the mechanisms of romantic love.They also provide insights into the evolutionary history of romantic love.
The findings support the notion that romantic love evolved by using pre-existing neural mechanisms (see [6]).The BAS is evolutionarily old, and romantic love made use of this system in a novel way.Instead of increasing general sensitivity, it generates a salience of a particular social stimulus (the loved one), which in turn increases sensitivity to the loved one.This increased salience is possibly the same mechanism that results in increased sensitivity among a plethora of other mechanisms.For example, evidence that lovers have an attentional and memory bias towards loved one-related stimuli [34] suggests that the stimuli possess greater importance or value than other stimuli.This is not the result of state changes to the attentional or memory system but rather the result of increased salience of loved one-related stimuli.This is a simple and elegant way of recruiting cognitive, emotional, and behavioral efforts in response to stimuli that have been identified at the input to be of great importance.This salience presumably required an internal schema of the loved one, an assessment of stimuli to identify their concordance, and then the application of increased value or weight to those stimuli.The concepts of salience and sensitivity are fundamental to a better understanding of the mechanisms of romantic love.This process of increasing salience is probably at the core and very beginning of the evolution of romantic love and may be associated with the left VTA [61].Mutation permitted the increased valuation of particular social stimuli, and that was possibly the first step in its evolutionary history.The particular features of romantic love, and perhaps some of its functions, such as pair bonding, may have evolved long after this initial step.Courtship attraction, which is also associated with an increased salience of social stimuli (see [33]), and sexual desire probably become intertwined with romantic love over the following generations.This is in line with previous suggestions that a precursor to contemporary romantic love emerged prior to the evolution of pair bonds [33,64].
The findings of these two studies also highlight the likelihood that BAS sensitivity exists in both a trait and state manner.The traditional BAS Scale assesses dispositional trait sensitivity, whereas the BAS-SLO Scale assesses what can be considered a type of state sensitivity.Parallels with anxiety may help to guide future researchers when elucidating these distinct but related phenomena.To better understand the similarities and differences between trait and state BAS sensitivity, it will be necessary to identify the role of trait sensitivity in romantic love.
Conclusions
This article reported two studies related to the behavioral activation system and romantic love. In Study 1, the BAS-SLO Scale was validated in a sample of 1556 partnered young adults experiencing romantic love. The validation determined that the characteristics of the BAS-SLO Scale were sufficient to justify its use in future psychological and imaging studies. In Study 2, hierarchical linear regression was used to identify BAS-SLO Scale associations with the intensity of romantic love in a subsample of 812 partnered young adults experiencing romantic love for two years or less. The BAS-SLO Scale explained 8.89% of the variance in the intensity of romantic love. The findings shed light on one of the biopsychological mechanisms that contribute to romantic love and provide insights into the specific functions of regions associated with romantic love from fMRI studies. The BAS-SLO Scale should be used in future psychological and imaging studies.
Figure 1. Results of the three-factor, 11-item CFA model for the BAS-SLO Scale. Note: Survey items (for description, see Table 3) are represented by rectangles, and latent factors are represented by ovals. Reward = reward responsiveness; Drive = drive; Fun = fun-seeking. The numbers above the one-directional arrows connecting factors to items represent standardized factor loadings. The numbers to the right of the bi-directional arrows connecting factors represent correlations between the factors.
PLS-30 = Passionate Love Scale (divided by number of items); TLS-Commitment = five items from the TLS-15 Commitment subscale [55] but using a nine-point response option (divided by number of items); Obsessive thinking = Percent of time thinking about loved one; HCL-32 = Amended Hypomanic Checklist-32; AQOL-4D = Assessment of Quality of Life-4D. A small number of data points are missing among these variables.
Table 1. Original BAS Scale items and the equivalent BAS-SLO Scale items.
Table 2. Means, standard deviations, skewness statistics, and kurtosis statistics for BAS-SLO Scale items in Study 1.
Table 3. Correlations among each BAS-SLO Scale subscale and the PLS-30 in Study 1.
Table 4. Correlations among variables used in Study 2 and their descriptive statistics.
Table 5. Hierarchical regression model of intensity of romantic love.
Table A2. Country of residence of Study 1 participants.
Table A5. PLS-30 = Passionate Love Scale (divided by number of items); TLS-Commitment = five items from the TLS-15 Commitment subscale [55] but using a nine-point response option (divided by number of items); Obsessive thinking = Percent of time thinking about loved one; HCL-32 = Amended Hypomanic Checklist-32; AQOL-4D = Assessment of Quality of Life-4D. A small number of data points are missing among these variables.
Table A6. Country of residence of Study 2 participants. Rest of sample = Finland, Ireland, Israel, New Zealand, Belgium, Sweden, Switzerland, Japan, South Korea, Denmark, Luxembourg, Norway.
Observations of fast ion loss to the plasma facing wall during quiescent H-modes on DIII-D
The Quiescent H-mode exhibits H-mode levels of confinement and edge pedestal pressures, but does not exhibit ELMs. To date this mode has only been observed in tokamaks during beam heating with some or all of the beams injected counter to the direction of plasma current. During QH-mode, fast ion loss to the low field side plasma facing surfaces has been observed. Some of the fast ion loss is calculated to be the result of outwardly directed banana orbits of the energetic beam ions created in the edge region. Other fast ion loss has been observed to be associated with bursts or oscillations in broadband, high-frequency, magnetic fluctuations. The relationship of the fast ion loss to the ELM stabilization or edge particle transport during QH-mode is not yet understood.
Introduction
The quiescent high confinement mode (QH-mode) represents a clear demonstration that an edge transport barrier producing good global confinement, equal to that in standard H-mode, can be achieved in a quasistationary plasma with no edge localized modes (ELMs) [1][2][3]. This mode of operation is of interest for future burning plasma experiments, where H-mode level of confinement is required for high fusion gain, but where ELMs that typically accompany H-mode confinement are predicted to be very destructive to plasma facing surfaces. During QH-mode operation the edge barrier remains stable and provides a high pedestal pressure as required for high core confinement, yet allows good particle and impurity control without the deleterious effects of ELMs.
QH-modes are a valuable tool for pedestal stability research [4][5][6], as well as core plasma profile control [2,7,8]. Long duration QH-modes have been observed in DIII-D, ASDEX-Upgrade [6], and JT-60U [9]. QH-modes are always accompanied by continuous MHD activity, usually in the form of a saturated, coherent multi-harmonic mode with toroidal mode numbers ranging from 1 to 10. The nature of this mode, called the edge harmonic oscillation (EHO) and located near the region of the pedestal, has not yet been identified.
In all cases QH-modes are observed only in neutral beam heated discharges with some beams injected counter to the direction of plasma current. Usually the beam heating is dominantly counter. Counter injection is known to be prone to fast ion loss to the wall, and previous work has shown that the resulting wall heating can be significant [10], equal to about half of the total heat flux to the divertor region. In this paper we will focus on fast ion loss to the wall. Past work recognized the importance of prompt fast ion loss to the main wall due to the banana orbits of energetic ions born through ionization of the beam neutrals in the edge region. Recently, QH-mode has been observed in counter injected discharges designed to avoid prompt fast ion loss due to these banana excursions to the wall [5,6]. In this paper we will report on the observation of new fast ion loss mechanisms during these counter injected discharges that are associated with high frequency, broadband magnetic fluctuations.
Fast ion loss in QH-mode
In counter-injected, beam-heated discharges on DIII-D, some fast ions created by ionization of beam neutrals in the region of the pedestal are lost to the outer wall in a single banana orbit [10]. To minimize outer wall heating by the beam ions, these discharges are typically operated with a large (>10 cm) gap between the separatrix location at the midplane and the closest limiting surface. However, as shown in Fig. 1, orbits from the four beams on DIII-D that are injected nearly tangentially (Tang-beams) are still lost at the entrance to the upper divertor in a single banana orbit. Promptly lost ions may contribute to localized heating of the baffle, as reported in [10]. The beam ion losses may also contribute to a very deep (~−100 kV/m) and narrow radial electric field well in the region of the pedestal [1,5]. Another indirect indicator that prompt fast ion loss may be important to QH-mode is the fact that a database of QH-mode operation on DIII-D showed a relationship between QH-mode duration and the injection of Tang-beams [11]. With the large outer gap, the three beams on DIII-D that are injected at a more perpendicular angle (Perp-beams) are not promptly lost to the wall. Motivated by these observations, QH-modes were attempted with only Tang-beams and only Perp-beams. Time traces from two of these QH-mode discharges are shown in Fig. 2. The plasma shapes are very similar to that shown in Fig. 1. As discussed in [4,5], QH-mode is initiated and sustained for a significant duration with Perp-beams only, indicating that prompt beam ion loss is not essential.
Recent analysis has indicated that another fast ion loss mechanism exists in the Perp-beam only discharge that is not present in the Tang-beam shot. Shown in Fig. 2(e)-(f) are the time traces from a fast ion detector [12] located just outside the limiting surface at the midplane of DIII-D (Fig. 1). The signal is very quiet in the Tang-beam discharge, as might be expected since ion orbit calculations indicate that the edge beam ions are scraped off by the upper baffle (Fig. 1) before they reach the midplane wall. However, in the Perp-beam shot there is significant bursty ion loss activity seen by the midplane detector. Ion orbit calculations indicate these lost ions may originate in the outer part of the plasma, near the lower turning point of the banana orbit for the Perp-beam ions shown in Fig. 1. The lost ion bursts are highly correlated in time with broadband high-frequency magnetic fluctuations seen on Ḃ probes located on the outboard side, just below the outer midplane (the magnetic signal is much quieter in the Tang-beam discharge). There is an apparent correlation of the timing of the magnetic and fast ion bursts. Unfortunately, the digitization rate (10 kHz) of the fast ion signal was insufficient to completely characterize the bursts, and some bursts might have been missed. It is clear the majority of the bursts have a duration of <100 μs. The red stars in Fig. 4 represent the time locations where the integrated power above 200 kHz in FFTs of the Ḃ signal, taken on a 100 μs time step, exceeds ~2 times the average value. Over the period of time from 2900 to 3100 ms, approximately 272 lost ion bursts were observed, i.e. an average appearance rate of 1.4 kHz. Of those, 212 fell within 100 μs of a high-frequency magnetic burst, whereas if they had been randomly distributed in time, on average only 90 would be expected to correlate. These bursts have not been identified in any other fluctuation diagnostic, and their spatial location in the plasma has not been established. During the QH period from 2900 to 3100 ms of the Tang-beam only discharge shown in Fig. 2, only 43 lost ion bursts are observed, many of which are correlated with magnetic bursts.
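The burst-identification procedure described above can be illustrated with the sketch below; this is not the analysis code actually used, and the signal name and the 1 MHz sampling rate (consistent with the 500 kHz Nyquist limit quoted later) are assumptions.

```r
# Illustrative burst detection: integrate FFT power above 200 kHz in
# 100-microsecond windows of the B-dot signal and flag windows exceeding
# roughly twice the average high-frequency power.
fs    <- 1e6                                  # assumed sampling rate [Hz]
win   <- round(100e-6 * fs)                   # samples per 100-microsecond window
nwin  <- floor(length(bdot) / win)            # `bdot` is an assumed signal vector
freqs <- (0:(win - 1)) * fs / win
hf    <- freqs > 200e3 & freqs <= fs / 2      # 200 kHz up to the Nyquist frequency
power_hf <- sapply(seq_len(nwin), function(i) {
  seg <- bdot[((i - 1) * win + 1):(i * win)]
  sum(Mod(fft(seg - mean(seg)))[hf]^2)        # high-frequency spectral power
})
bursts <- which(power_hf > 2 * mean(power_hf))  # candidate magnetic bursts
```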
The edge radial electric field profiles measured during the QH phase in these two discharges are very similar [5].
The bursts are present during both the QH phase and the ELMing phase. During the ELMing phase, the ELMs are a source of intense high-frequency magnetic bursts as well as fast ion loss. Bursts similar to those seen during the QH phase are seen between ELMs, but are not present for a few ms after each ELM.
Examination of Langmuir probe ion saturation current at the upper baffle (Fig. 1) during a mixed beam discharge, 106999, discussed in [10], reveals a slightly different relationship of ion loss to magnetic activity. The observed time behavior of the ion saturation current over a 20 ms period of the QH phase in this discharge is shown in Fig. 5. The ion current shows intermittency, with bursts of longer duration of about 0.5 ms, a variable amplitude, and a more regular but slower appearance rate of about 800 Hz compared to the fast ion loss in the Perp-beam only shot in Fig. 4. These bursts are also associated with high-frequency MHD activity. Shown in Fig. 5 is the time derivative of the Ḃ signal, averaged over 200 μs. It is clear, without any statistical analysis, that there is a strong correlation in the frequency and phase of the oscillatory behavior of fast ion loss amplitude and the amplitude of the high frequency MHD activity. Unfortunately, the Langmuir probe was not functional during the discharges in Fig. 2, and the fast ion loss detector was not functional during 106999.
Fast Fourier transforms of the Ḃ signals indicate the bursty, high-frequency MHD activity is broadband, extending from 150 to 500 kHz (the Nyquist limit). In DIII-D no evidence has been found of the 400 kHz coherent magnetic mode seen in ASDEX-Upgrade [6].
Discussion
The radial electric field well, which exists just inside the pedestal in most discharges, increases in depth going from L-mode to H-mode, and in counter injection it increases again in going to QH-mode. In QH-mode the well can extend to a depth of −100 kV/m, ~2× deeper than in ELMing H-mode, and is very narrow with a width of ~1 cm. While the details of the E r structure are likely determined primarily by the sharp gradients of the plasma profiles in the pedestal [13], the fast ion loss would tend to enhance the depth over that found in co-injected discharges.
Beam ion orbit modeling indicated that the Tang-beams on DIII-D produce prompt beam ion loss in QH-mode, while the Perp-beams do not. A database of QH-mode performance over a period of three years indicated a dependence of the duration of QH-mode phases on the power of the Tang-beams, but no data was available in discharges with only Perp-beams. To test the hypothesis, discharges were carried out in counter injection with three Perp-beams, and with three Tang-beams. QH-modes were obtained in both cases, indicating that prompt beam ion loss is not required for QH-mode formation. It was noticed that QH-mode forms more quickly and lasts longer with Tang-beam injection. E r measurements showed the well width, depth, and location are very similar in these two cases. Recent analysis has revealed that a new mechanism for fast ion loss is active in the Perp-beam discharges. This mechanism is observed to eject ions in bursts to the midplane fast ion detector, rather than the upper divertor baffle. The bursts are strongly correlated with high frequency, broadband magnetic fluctuation activity. The observed bursty ion loss is about an order of magnitude more active in the Perp-beam discharges than in the Tang-beam discharges. Even though the Perp-beam discharges do not suffer prompt beam ion loss, this new fast ion loss mechanism may be making some contribution to the edge E r well. A reexamination of QH discharges with mixed beam injection has revealed that a similar mechanism, correlated with high-frequency, broadband magnetic activity, is making some contribution to ion loss to the upper baffle as well.
More data is needed to draw stronger conclusions concerning the role of fast ion loss and the E r well in QH-mode. Simultaneous data from the midplane fast ion loss detector, the Langmuir probe on the upper baffle, and the local heat flux measurement do not exist on a set of Perp vs. Tang beam discharges. More QH-mode experiments will be conducted this year to investigate the fast ion loss in more detail. We will also attempt to localize the position of the high frequency mode inside the plasma using the extensive spatially resolving fluctuation diagnostics on DIII-D. A better characterization of the fast ion loss mechanisms, intensity, and wall interaction is needed for the first wall design for future devices based on the QH-mode.
A smartphone application supporting patients with psoriasis improves adherence to topical treatment: a randomized controlled trial
Adherence to topical psoriasis treatments is low, which leads to unsatisfactory treatment results. Smartphone applications (apps) for patient support exist but their potential to improve adherence has not been systematically evaluated.
What does this study add?
• This randomized controlled trial investigates the effects of a supporting app on adherence to a once-daily topical calcipotriol/betamethasone dipropionate cutaneous foam preparation over a 28-day period.
• The app provided daily reminders and informed patients whether they had applied their treatment. Information on adherence was obtained with a chip attached to the dispenser that synchronized to the app.
• The app significantly improved adherence rates and reduced psoriasis severity in the short term.
Psoriasis is a chronic inflammatory disease affecting 2-4% of the Western population. 1 Psoriasis has a severe impact on quality of life 2,3 and creates a large socioeconomic burden. 4,5 Mild-to-moderate psoriasis can be treated with topical corticosteroid preparations, [6][7][8] but adherence rates to these treatments are generally low and present a barrier for treatment success. 9 Previous studies including patients with psoriasis treated with topical corticosteroids in Western dermatology outpatient clinics have reported nonadherence rates from 8 to 88%. 10 Patients tend to self-report higher adherence rates than those obtained by objective measurements, 11,12 therefore it is recommended to measure adherence objectively by using either an electronic monitor (gold standard) or medication weight. 13,14 Two studies have reported interventions improving adherence to topical corticosteroid treatment. One study tested the effects of weekly self-reporting of psoriasis status to a webpage for 1 year. 15 That intervention improved adherence to topical fluocinonide ointment in the intervention group relative to the control group. The other study did not use a control group and reported that 2 months of an individualized multifactorial patient-supporting intervention provided at dermatology clinics led to improved adherence rates relative to baseline. 16 There is a new and growing field of eHealth interventions for adherence improvement; 17 however, there is a little evidence for their effectiveness. 18 The aim of this study is to test whether the use of a studyspecific smartphone application (app, Table 1) for 4 weeks improves short-term adherence to a recommended standard topical treatment regimen with calcipotriol/betamethasone dipropionate (Cal/BD) cutaneous foam (LEO Pharma, Ballerup, Denmark). As secondary outcomes, we also evaluated (i) short (week 4) and long-term (week 8 and 26) psoriasis severity [Lattice System Physician's Global Assessment (LS-PGA) 19,20 ] and (ii) quality of life [Dermatology Life Quality Index (DLQI) 21 )].
Patients and methods
A 6-month investigator-initiated single-site, parallel-group, phase-IV superiority block randomized controlled trial (RCT) with an allocation ratio of 1 : 1 was conducted according to the principles expressed in the Declaration of Helsinki as revised in 1983, the International Conference on Harmonisation Good Clinical Practice Guideline E6 (R2), and Danish national laws (clinicaltrials.gov registration NCT02858713). The protocol was approved by the Regional Ethics Committees on Health Research Ethics for Southern Denmark and the Danish Medicines Agency (EudraCT 2016-002143-42). 22 The study was conducted between 9 January 2017 and 29 August 2017 at an outpatient clinic for dermatology at Odense University Hospital. Written informed consent was obtained from all patients at inclusion and prior to randomization.
Potential study patients were recruited at the dermatology outpatient clinic and by advertisement. We included legally competent patients between 18 and 75 years of age who owned a smartphone or had skills for the use of a smartphone provided by the investigator (if the study-specific app was not supported by the patient's smartphone's operating system), who were diagnosed with mild-to-moderate psoriasis, and who were candidates for topical treatment with Cal/BD cutaneous foam.
Individuals were excluded if they: (i) had a known sensitivity to topical Cal/BD, (ii) were unable to complete all study-related visits, (iii) had inadequate internet access or skills for use of a smartphone with an English-language app, (iv) had extensive disease not amenable to topical treatment, (v) were reluctant to be treated with a foam product, (vi) were breastfeeding or pregnant women, or (vii) were fertile women who did not use reliable contraception. Patients were block randomized in eight blocks based on sex and age, and the investigator was masked to the allocation sequence, which was computer generated in a 1 : 1 ratio. Patients were not paid for participating in the study. They received study medication free of charge (estimated market value £33 after reimbursement from the National Health Service). The medication was prescribed for once-daily application in a 28-day treatment period, excluding body sites for which treatment with topical Cal/BD cutaneous foam is contraindicated (face, axillae and genitals).
Cal/BD cutaneous foam was delivered in canisters with foam dispensers containing an electronic monitor with a chip registering the day and time the patient used the dispenser. Patients were given Cal/BD cutaneous foam in the canister with attached dispenser at the initial study visit, and the canister could be replaced whenever empty. Patients were told to bring their medication canisters and dispensers for destruction at the week 4 return visit, but were not told in advance about the use of the data obtained by the electronic monitor or that each medication canister was weighed before and after use [on a Mettler Toledo PR802 precision balance (Mettler Toledo Ltd, Leicester) with 0.01 g accuracy] until the final study visit (week 26). The appropriate quantity for each application on diseased skin was calculated by determining the involved area expressed as body surface area (BSA) and multiplying by 0.5 g foam per 1% BSA. This dosage was then multiplied by 28 for once-daily application during the 28-day treatment period. The intervention group additionally received a supporting app, which provided once-daily compulsory treatment reminders and information on the number of treatment applications and the amount of prescribed Cal/BD cutaneous foam applied. The information was obtained by the electronic monitor chip synchronized to the app via Bluetooth® (Table 1). A laboratory assistant provided guidance on how to install and synchronize the app to the electronic monitor. The patients were also entitled to telephone support provided by the laboratory assistant, who answered any questions regarding use of the supporting app and electronic monitor. The app design was informed by previous research published by members of this research team, 10,18,[22][23][24] and the tested prototypes were MyPso SmarTop™ Version 1.0 (the app, LEO Pharma, Ballerup, Denmark) (see Fig. S1 in the Supporting Information for a detailed description of the app) and SmarTop™ number 053776 (the electronic monitor, LEO Pharma) (Fig. 1). After 28 days, use of the app was terminated and no further adherence data were obtained. From week 4 to 26 all patients were provided with Cal/BD cutaneous foam to be used once daily when needed.
To make the visits similar to a normal visit, the investigator and laboratory assistant were not masked to the intervention and data. Data were reviewed by a nonmasked Good Clinical Practice-experienced person. All sociodemographic and clinical data 10 were obtained by the investigator through interviews and medical chart reviews at baseline visits prior to randomization (Table S1; see Supporting Information).
Return visits were scheduled for weeks 4, 8 and 26. The primary outcome variables for adherence rates over 28 days were collected at week 4 by the chip in the electronic monitor measuring number of treatment applications, an electronic balance at the clinic and by patient self-reporting on a study-specific scale (four-point ordinal scale). The secondary variables were collected using the validated measurements for psoriasis severity (LS-PGA, eight-point ordinal scale 25 ). The LS-PGA was chosen as a measurement of psoriasis severity, as it takes less time than, for example, the Psoriasis Area and Severity Index (PASI) score and, unlike the PASI, is consistent with the European Medicines Agency's recommendations for psoriasis scoring in clinical trials. 19 Data on the LS-PGA and DLQI (30-point ordinal scale 21 ) were obtained at baseline and weeks 4, 8 and 26. Secondary long-term variables were obtained long term after termination of the intervention, as recommended by the Cochrane Group. 26
Sample size calculation
The study was powered assuming that use of the app would increase treatment applications by at least 8% in the intervention group compared with the nonintervention group. Based on findings from Alinia et al., 15 the mean number of treatment sessions in the nonintervention group was assumed to be 63% of the recommended number of applications/28 days, the mean number of applications in the intervention group was assumed to be 71% of the recommended number of applications/28 days, and the standard deviation in the nonintervention and intervention groups was assumed to be 15% of the recommended number of applications/28 days. We required a power of 80%, a two-sided significance of 95%, 1 : 1 treatment allocation, and expected dropout of 12.5%. We applied a sample size calculation for an unpaired t-test as we modelled the mean adherence of each patient (numerically on a percentage scale, expected to be normally distributed due to the Central Limit Theorem). This calculation resulted in a planned sample size of 128 participants (Stata-script provided in File S1; see Supporting Information).
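The original calculation was done with a Stata script (File S1); an approximate reconstruction in R under the stated assumptions (8-percentage-point difference, SD of 15 percentage points, unpaired t-test, 12.5% dropout) is sketched below.

```r
# Approximate reconstruction of the sample size calculation (not the authors'
# Stata script): unpaired t-test, delta = 71% - 63% = 8, SD = 15.
pwr <- power.t.test(delta = 8, sd = 15, sig.level = 0.05, power = 0.80,
                    type = "two.sample", alternative = "two.sided")
n_per_group <- ceiling(pwr$n)                    # roughly 56-57 patients per group
n_total     <- ceiling(2 * pwr$n / (1 - 0.125))  # inflate for 12.5% dropout
n_total                                          # close to the planned 128 participants
```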
Statistical analyses
Normality assumptions were checked by quantile plots. No adjustments for baseline covariates were considered relevant in the main analyses. 27,28 P-values < 0.05 were considered statistically significant, 29 and we conducted all analyses using Stata 15 (StataCorp, College Station, TX, U.S.A.). Baseline characteristics for the two treatment groups are presented as counts and percentages.
Analyses of the primary outcome: adherence
For chip data, all registered applications within 1 h were regarded as a single treatment session. We set chip adherence as binary, defined as treated or nontreated each day, to avoid errors related to multiple treatments in 1 day. Data were analysed using an intention-to-treat approach.
For the main analysis of adherence we dichotomized adherence rates obtained by chip and medication weight with a selected cut-off of 80%, with adherence rates above 80% considered adherent (a cut-off typically used when studying adherence in other chronic diseases 30 ). We compared the dichotomized adherences by using logistic regression.
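A minimal sketch of this main analysis, with illustrative variable names, is shown below.

```r
# Dichotomize chip-measured adherence at 80% and compare groups by logistic
# regression. `chip_pct` (percentage of days treated) and `group` (factor with
# levels "nonintervention"/"intervention") are illustrative variable names.
dat$adherent <- as.integer(dat$chip_pct > 80)
fit <- glm(adherent ~ group, family = binomial, data = dat)
summary(fit)
exp(coef(fit))   # odds ratio for being adherent in the intervention group
```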
For the sensitivity analysis of adherence, the adherence measures or their natural logarithm (if necessary to ensure normality of model residuals) were compared between treatment groups using linear regression. The analyses were carried out excluding missing data and after 100 multiple imputations by multivariate normal regression on the logarithms of the three adherence measures, both without covariates and with an imputation including treatment, age, sex and smoking as covariates. 10
Analysis of secondary outcomes
Changes in LS-PGA and DLQI measurements from baseline to week 4 and from baseline to weeks 8 and 26 were compared between the two treatments by linear regression. LS-PGA and DLQI measurements including means are presented in box plots.
Results
In total, 134 patients with mild-to-moderate psoriasis and a mean age of 48 years (21-75 years) were enrolled (Table S1; see Supporting Information). The study participants were mostly men under 50 years of age, who were married, nonsmokers and employed full-time in a vocational or academic profession. The majority of patients had been diagnosed with psoriasis for more than 20 years and only a few had a history of using systemic antipsoriatic treatments (Table S1).
The included patients were randomized into nonintervention (n = 66) and intervention (n = 68) groups at the baseline visit. The two groups were comparable based on measured baseline covariates (Table S1). Smartphones were borrowed from the investigator for the intervention period by 21 of 68 (31%) of the patients in the intervention group. In total, 122 of 134 (91%) of all patients returned for the week 26 visit (Fig. 2), and the numbers of patients lost to follow-up were equally divided between the nonintervention and intervention groups. Missing data on primary outcome measurements obtained at week 4 were comparable for both groups (nonintervention vs. intervention group), whether chip-registered applications, medication weight or patient self-report (Fig. 2). Comparisons between missing data for the three adherence measurements in the two groups are provided in Tables S2-S4 (see Supporting Information) and considered missing at random. No serious adverse reactions were observed.
In the main analysis of chip adherence data (data were coded for adherent patient rates, defined as medication applied ≥ 80% of days in the treatment period), more patients in the intervention group were adherent than patients in the nonintervention group (65% vs. 38%, P = 0.004) (Table 2). The sensitivity analysis of chip adherence data revealed that patients in the intervention group were more adherent to the number of treatment sessions compared with patients in the nonintervention group (82% vs. 69%, P = 0.001) (Table 2); similar results were obtained when allowing for multiple treatment sessions on the same day (data not shown), and imputing for missing data did not change the results (Table 2).
Adherence to the amount of cutaneous foam in the main analysis showed that more patients in the intervention group were adherent compared with patients in the nonintervention group, although this did not reach statistical significance (14% vs. 8%) (Table 2). In the sensitivity analysis, adherence to the amount of cutaneous foam used was also higher among patients in the intervention group than in the nonintervention group (43% vs. 33%, P = 0.026) (Table 2); data imputed for missing values revealed similar results (Table 2).
Adherence rates reported by patients were higher than those objectively obtained by weight, but there was no significant difference between the nonintervention and intervention groups (59% vs. 67%), or when imputed for missing values (Table 2). Self-reported adherence was recorded on a study-specific ordinal scale from 0 (did not use treatment) to 4 (used all prescribed medication). LS-PGA was reduced more from baseline to week 4 in the intervention group than in the nonintervention group (mean reduction 1.86 vs. 1.46, P = 0.047) (Table 3). A similar trend was seen at weeks 8 and 26, although it did not reach statistical significance (Fig. 3). DLQI initially changed from baseline to week 4 in the nonintervention vs. intervention group (4.54 vs. 4.12) (Table 3), which is considered a reduction above the minimal clinically important difference (MCID). 31 DLQI was further reduced at week 8, followed by a minor relapse at week 26 (Fig. 4 and Table 3).
Discussion
This RCT demonstrates that an app designed to support daily topical treatment by patients with psoriasis improved treatment adherence (as measured by electronic monitors or medication canister weight) and reduced psoriasis severity (as measured by LS-PGA).
The app improved adherence rates to topical treatment during a 28-day intervention period, in agreement with one study reporting improved adherence rates when patients reported their psoriasis status weekly. 15 Another study reported improved adherence rates for use of systemic treatment in patients with psoriasis when they received daily text messages. 32 The app used in this study also improved severity of psoriasis, in agreement with reports of adherence-improving interventions for psoriasis 32 and other chronic diseases. 33,34 Inspired by previous adherence studies, we dichotomized adherence rates obtained by chip and canister weight with a cut-off of 80%, and classified adherence rates above 80% as adherent. 29 The optimal cut-off should be based on the adherence level necessary for the drug to work. 30 In this case we do not know how forgiving the drug is to missed doses, which represents a weakness of the study.
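As a minimal sketch of this coding step (the column names and example values below are hypothetical and not taken from the trial dataset), the 80% cut-off can be applied as follows:

```python
# Sketch of the adherence dichotomization described above: a patient is
# classified as adherent when medication was applied on >= 80% of the days
# in the treatment period. Column names and values are hypothetical.
import pandas as pd

chip_data = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "days_applied": [25, 14, 23],     # days with a chip-registered application
    "days_in_period": [28, 28, 28],   # length of the treatment period in days
})

chip_data["adherence_rate"] = chip_data["days_applied"] / chip_data["days_in_period"]
chip_data["adherent"] = chip_data["adherence_rate"] >= 0.80

print(chip_data[["patient_id", "adherence_rate", "adherent"]])
```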
Adherence was measured by the number of treatment sessions, and patients in the nonintervention group had a 69% adherence rate, meaning that they used medication on 69% of days. This result is in agreement with Alinia et al., 15 who measured adherence to topical fluocinonide ointment by number of treatment days in patients with psoriasis over 1 year and reported that adherence among patients receiving standard treatment of care was 63% during the first month.
Adherence was also measured by canister weight, and we found that patients in the nonintervention group used 33% of the prescribed amount of medication. This is in agreement with a report by Storm et al., 35 who found that patients seen at a dermatology clinic used 35% of the expected doses of topical treatments over a 2-week treatment period. 35 The low rate of patients who were adherent to amount of medication in both the nonintervention and intervention group (8% vs. 12%) suggest that the estimated amount of cutaneous foam used during the 4 weeks was too high. Measuring adherence by weight is challenging and requires that the prescriber first estimate the amount of topical treatment to be used during a treatment period. One limitation of the study is that we do not know the amount of medication that should be applied to get the full benefits of treatment. The majority of the patients in this study had been diagnosed with psoriasis for over 20 years and may be less inclined to follow a dosing instruction that would pose a risk of side-effects 36 (mainly pain, erythema and pruritus). 37,38 The generally low rates of adherence as measured by weight might also indicate a need for clinicians to provide patients with specific advice and motivation for the appropriate quantity of medication to be used.
A strength of the study is the collection and comparison of adherence measurements by number of treatment sessions, applied medication weight and patient self-report. 39 It is important that adherence studies reflect what is considered to be clinically relevant; that is we consider it more important for patients to apply the topical product regularly than in large amounts.
The LS-PGA and DLQI improved considerably over the study period as an effect of the topical treatment (Figs 3 and 4), in agreement with the international literature. 7,38,40 The PASI as a tool for measuring severity of psoriasis was not applied in this study, because the European Medicines Agency recommended the use of LS-PGA in clinical trials. The reduction in DLQI for both groups was caused by the Cal/BD foam treatment. 8 The DLQI measurement should be interpreted with caution: the DLQI is unidimensional and under-represents the emotional aspects of dermatological patients' lives. 41 In order to capture the full range of the quality of life aspect, we could have combined the DLQI measurement with one of the available psoriasis-specific quality of life instruments. 42 It is a limitation of the study that we did not obtain outcomes on patient-perceived severity and patient-physician relationship as reported in other adherence-improving interventions, 32 as an improved patient-physician relationship may motivate patients and improve treatment adherence and outcome. 43 We did not report patients' use of the optional diary functions or patients' satisfaction with the app, which is a limitation for interpreting the results for app designers and medical device engineers. The patients received study drugs, which may provide better results than those obtained in real-life settings, such as that reported by Storm et al. in which a third of prescriptions were never redeemed. 44 Our study patients were partly recruited by advertisement, which poses a risk of including patients who are more motivated to adhere to prescribed topical treatment than the background psoriasis population. 35,45 The local ethics committee would not approve masking patients to the fact they were in a trial until the end of the study, a method used in other adherence studies. 46 The assessors were not masked, which introduced a risk of attrition and observer bias. 47 This study was performed simultaneously with the introduction of the new Cal/BD cutaneous foam on the Danish market. The patient information session at the initial study visit was focused on the new drug reformulation 48 and to a lesser degree on the adherence measurement, which partially concealed that the primary outcome of the study was adherence.
In conclusion, this RCT demonstrated that a study-specific patient-supporting app improved adherence rates and psoriasis severity in a statistically and clinically significant manner. There is potential for implementing patient-supporting apps in the dermatology clinic.
Supporting Information
Additional Supporting Information may be found in the online version of this article at the publisher's website: Fig S1. The smartphone application and instructions for its use.
File S1 Sample size calculation, script from Stata 15.
Table S1 Baseline characteristics.
Table S2 Missing outcomes on chip-measured adherence rates.
Table S3 Missing outcomes on weight-measured adherence rates.
Table S4 Missing outcomes on patient-reported adherence rates.
Powerpoint S1 Journal Club Slide Set.
Video S1 Author video.
Nineteenth-Century Anglo-Irish Cervantine
To commemorate the fourth centenary of the publication of the first part of the Spanish masterpiece of all times Don Quixote by Miguel de Cervantes, this article approaches in an introductory manner some of the literary productions which sprang from Cervantes’s original within the Irish context. In the case of Ireland the Cervantine inspiration, albeit minor and neglected, has also been present; and, it is most probably the nineteenth century which provides the most ample and varied response to Cervantes’s masterpiece in many a different way. Our aim is to see briefly how the legacy of Don Quixote found distinct expression on the Emerald Isle. Indeed, all these Cervantine contributions from Ireland during the nineteenth century were also deeply imbued with the politics of literature and society in a country which experienced historical, social and cultural turmoil. The reference to Cervantes as a key writer in Spanish letters will not only be reduced to his masterpiece of all times; but, will also be tackled in critical pieces of importance in Ireland.
The year 2005 marks the celebration of the fourth centenary of the publication of the first part of the Spanish masterpiece of all times Don Quixote by Miguel de Cervantes. The influence of Cervantes's work has already been covered in a myriad of scholarly studies in many languages in the course of these four centuries and it would be impossible to trace, even in the age of computers nowadays, the extensive amount of interference, intertextuality, inspiration and critical approaches Cervantes's original has produced. To commemorate this literary event it is, therefore, not merely coincidental in time, but also peremptory, that the first issue of Estudios Irlandeses should approach in a brief introductory manner some of the literary productions which sprang from Cervantes's original within the Irish context at large. In the case of Ireland the Cervantine inspiration, albeit minor and neglected, has also been present; and it is most probably the nineteenth century which best provides the most ample and varied response to Cervantes's masterpiece in many a different way. Accordingly, from a number of critical responses in most of the contemporary Irish and Anglo-Irish periodicals and magazines, in which Cervantes's mastery would be linked to the very essence of the Spanish character, through a minor theatrical adaptation of one of Miguel de Cervantes's most famous independent episodes in Don Quixote -'The Novel of the Curious Impertinent'- to other, more Irish perhaps, novelistic forms of adaptation and interpretation of the Spanish masterpiece, the legacy of Don Quixote found distinct expression on the Emerald Isle. Indeed, all these Cervantine contributions from Ireland during the nineteenth century were also deeply imbued with the politics of literature and society in a country which experienced historical, social and cultural turmoil at the time: a country that viewed Spain and Spanish culture as a beacon in terms of continuity and nationhood in many respects, but also the place in which Britain's supremacy over Europe was established, reinforcing therefore the aesthetics of Anglo-Irish unionism too. Thus, we will outline, firstly, the numerous critical references to Miguel de Cervantes and his work which can be found in the principal literary periodicals of the time in Ireland, or should we say, Anglo-Ireland. These, as we will see, were undertaken by key figures -journalists, literary critics and writers- of both the Irish and Anglo-Irish discourses. After this overall and brief approach, we shall briefly pay attention to some of the works which were produced within the literary discourse in Ireland, in a minor or major way, clashing at times with the idea of an Anglo-Irish canon. This Cervantine inspiration would culminate, among many other examples, at the beginning of the twentieth century with a close-to-the-original theatrical adaptation of Miguel de Cervantes's masterpiece by Lady Augusta Gregory for The Abbey Theatre in Dublin, her intriguing Sancho's Master (1927), not much approached by contemporary or even by today's criticism.
Many critical approaches on Spain were present in periodicals and literary magazines in Ireland during the nineteenth century.For many, Spain provided a pivotal exemplar of nationhood, patriotism and a continuous cultural and literary discourse wished by Ireland.Lady Francesca Wilde in one of his reviews of the translation of Calderón by the main nineteenthcentury Hispanist in Ireland, Denis Florence M'Carthy, already advocated the defence of culture and not war conflict as the solution to many international problems and also those in Ireland.For Lady Wilde, much imbued with the Young Ireland movement, this literary epoch in Spain in which Calderón and, especially Cervantes "reigned" was "a brilliant literary era, and no doubt the world thought it impossible successors should ever rise fit to wear the laurel wreath when death lifted it from such brows" (Wilde 1854: 353).But, critical approaches to Cervantes's work would not only be tinged by words of praise, but would present a more detailed study.Interestingly, most of these critical accounts would be published by unionist The Dublin University Magazine.The importance of this literary magazine in the case of the representation of Spain has been clearly underestimated.It represented a wide-ranging analysis of many periods of Spanish history, literature and, most importantly, it included in its pages the best translations and novelistic forms of Spanish inspiration.Many of the novels with a Cervantine vein would be first published in the magazine in monthly instalments.
Another important article, which explains why Cervantes's work was of interest for The Dublin University Magazine, came out to life in 1866.Entitled 'Miguel de Cervantes y Saavedra' it traced not only the life of Cervantes; but, also the repercussion of his work in Ireland during the nineteenth century, especially at the beginning of the century when " "The Life and Adventures of Don Quixote" was found among our Irish hedge-school manuals in the beginning of the present century, but was by no means such a favourite with the youngsters as the "Nine Worthies," the "Trojan Wars," "Monteleon the Knight of the Oracle," or "Don Belians" (Dublin University Magazine 1866: 123).Indeed, it says a lot about the much-attacked school system in Ireland at the beginning of the century and the hedge-schools.In fact, the article reveals that it was the parents who favoured the reading of Miguel de Cervantes's masterpiece rather than the preferences of the students of those Irish schools: The reverse was the case with the cottage elders when they could induce the children to read aloud for them on winter evenings.Daring adventures happily terminated formed the favourite topic of the young folk who did not trouble themselves about trifling inconsistencies or infringements of probability.The parents had experienced the woeful illusiveness of youthful hopes, they had witnessed the fading away of many a bright rainbow and charming cloud landscape.So that they saw much more likelihood in the cudgellings and blanket tossings inflicted on knight and squire, than in the mysteries of enchanted castles and the wholesale massacre of giants in the ordinary books of chivalry.Valued as the deathless work is by ourselves, we have never urged its perusal on any of our young friends.(123) This lengthy article will abound on the life of Cervantes and his influence upon a Spain in turmoil at the end of the sixteenth century, although the reaction of his contemporaries was not welcoming at times, albeit the lengthy article purports, following extensively the much-abused didactic character of The Dublin University Magazine, that during the years of Cervantes's life: In his numerous works he had it in purpose to improve the state of things in his native country, and to correct this or that abuse, but he obtained no striking success till the publication of this his greatest work [Don Quixote de la Mancha].Alas!While it established his character as a master in literature it excited enmities and troubles in abundance.(126) The magazine will end the article after probably one of the best summary accounts of Miguel de Cervantes's life to come out in nineteenth-century Ireland with a very and highly canonised version of what high literature constituted for the unionist editorial board of the magazine in which authors such as William Shakespeare and Walter Scott had to be included side by side with Miguel de Cervantes.It is because of this that Cervantes's masterpiece inclusion in the curriculum of education in Ireland at the time had to be emphasised along with its political stance on how to design this literature list, because Cervantes's example had deeply influenced the character and patriotism of his nation, Spain, in what constitutes a clear exemplar of the at-times Catholic-proned didactic enterprise of the magazine within the Irish and Anglo-Irish discourse: And indeed in our meditations on the characteristics of the author and man in Cervantes, we have always mentally associated him with Shakespeare and Sir Walter Scott.We 
find in all the same versatility of genius, the same grasp and breadth of intellect, the same gifts of genial humour, and the same largeness of sympathy.The life of Cervantes will be always an interesting and edifying study in connection with the literature and the great events of his time.We find him conscientiously doing his duty in every phase of his diversified existence, and effecting all the good in his power.When he feels the need of filling a very disagreeable office in order to afford necessary support to his family, he bends the stubborn pride of the hidalgo to his irksome duties, and it is not easy for us to realize the rigidity of that quality which he inherited by birth, and which became a second nature in every gentleman of his nation.(137)(138) Let us now pay attention to theatrical and fictional works that drew heavily on Miguel de Cervantes's Don Quixote de la Mancha and constitute clearly the nineteenth-century Anglo-Irish Cervantine.The minor case of Richard Chenevix (1774-1839) deserves a small mention with respect to his Cervantine inspiration.Chenevix of French ancestry was born at Ballycommon, near Dublin.After graduating from the University of Dublin he travelled to Paris where he was imprisoned during the reign of terror and shared a cell with French chemists.It is as a chemist and mineralogist Richard Chenevix is best known.Although he was a fellow of the Royal Society Irish Academy and was acquainted with the novelist Maria Edgeworth (Usselman 2004), his interaction in Irish affairs is scarce.In 1812 he published Two Plays: Mantuan Revels, a comedy in five acts, and Henry the seventh, an historical tragedy in five acts.The first play -"inspired partly by a novel by Cinthio and partly by an episode in Don Quixote ('The Curious Impertinent')" (Rafroidi 1980: II, 103)-is Chenevix's Cervantine contribution, although none of his dramatic works seems to have been performed according to the French expert Patrick Rafroidi.The London Critical Review specified the play was clearly "a precise copy of Cervantes's novel" (Critical Review 1812: 378) and the critics of the Critical Review were "most happy to be able to say with confidence" that Richard Chenevix really possessed a "genius that might be turned to better account" (381).
The second quarter of the nineteenth century witnessed a curious Anglo-Irish genre that would encapsulate much Cervantine inspiration. Two Anglo-Irish authors turned to Don Quixote for thematic inspiration although their characterisation and setting were "more" Irish: one of the originators of the military novel, the Newry-born William Hamilton Maxwell (1792-1850), and the more successful Dublin-born Charles Lever (1806-1872). Maxwell was also a student of Trinity College Dublin, and his life changed radically when he decided to get directly involved in Wellington's European campaigns. After his service in the British forces, Maxwell was posted to the village of Ballagh in Co. Mayo, where he acted as Church of Ireland clergyman. Maxwell's fiction has not been the subject of any serious critical study to date, although some of his works do shed light not only on Wellington's achievements -in the latest re-issue of some of Maxwell's works Robert Lee Wolff believes that "Wellington remained Maxwell's hero and the Peninsular War one of his favourite subjects" (Maxwell 1979: v), and Julian Moynahan attaches "determinate values in Lever's lifelong admiration for the British uniformed service, for its code of honour, courage, and patriotism … most sharply focused in the cult the novelist made of Wellington" (Moynahan 1995: 91)- but also on stereotypical and stock characterisation and landscape description of some literary merit, very much imbued with Irish gothic features.
Even if some of Maxwell's and Lever's fictions that have references to Spain can be part of what could be called late Anglo-Irish Wellingtoniana, their works underpin a social, cultural and literary debate still much alive at the time in Ireland. We will briefly approach the case of Maxwell, as he has been widely forgotten -except for his Wild Sport of the West (1832), which has been reprinted several times and was also translated into Irish in 1933 2 (Moynahan 1995: 87). Among his other works are The Victories of the British Armies, 2 vols. (1839), his highly successful Life of Field-Marshall His Grace the Duke of Wellington, 3 vols. (1841), his clearly Cervantine masterpiece The Fortunes of Hector O'Halloran and His Man Mark Anthony O'Toole (1842-43) and Peninsular Sketches by Actors on the Scene, 2 vols. (1845). W.H. Maxwell's significant The Fortunes of Hector O'Halloran and his Man, Mark Antony O'Toole -published in the Dublin University Magazine between 1842 and 1843 while Charles Lever was editor of the unionist magazine- saw Ireland as the subject matter of his novel, as the reader is taken from the South of Ireland through Dublin and London to the Iberian Peninsula, a much popular issue and theme of Anglo-Irish fiction at the time.
In particular, the novel shows a clear influence of the Peninsular conflict, Wellington and -more importantly as the revealing title already shows-of Miguel de Cervantes's masterpiece Don Quixote.Tapping the picaresque vein, Maxwell's exemplar of rollicking Anglo-Irish fiction -he is the initiator of the style before Charles Lever, 3 also a friend of W.H. Maxwell-was imbued with an Irish background epitomised by highway life, country and village settings and above all Irish picaresque, in which Maxwell's young Quixote, Hector O'Halloran, enters a world of adventures and wanderings always followed by his Sancho Panza, Mark Antony O'Toole, in a clear Anglo-Irish exemplar of camaraderie.Accordingly, we are taken in a trip in space and time that includes one of the most detailed and precise accounts in the Anglo-Irish, and even English, fiction of the time of the Peninsular War.For McCormack, the start of Maxwell's novel is very much "equipped to inaugurate a gothic novel" (McCormack 1991: 835) -and, to my mind, it resembles in many ways Charles Robert Maturin's Melmoth the Wanderer (1820), which also has a Spanish setting in some of the stories, although it is an inquisitorial attack on Catholicism Maturin was more interested in when he published his popular work, at a time when he was starting to being ostracized by his own denominational group, most importantly his parish congregation in Dublin, a sermon to his parishioners is said to have triggered his work as he states in the preface of his novel.(See Maturin 1824 and1989) A Gothic mode that would find ample coverage in nineteenth-century Ireland, rather Anglo-Ireland, with Charles Robert Maturin and, especially, during the second half of the nineteenth century in clearly Anglo-Irish writers such as Isaac Butt, Joseph Sheridan Le Fanu and Bram Stoker later in the century, within a creatively literary atmosphere and discourse Luke Gibbons (2004) has recently denominated as "Gaelic Gothic", a clearly "Irish only" genre, enmeshed in a distinctive aura of political, philosophical, economic and racial traits in an imperially and colonially subdued nineteenthcentury Ireland.For W.J. McCormack, Gothic Irish novelists, among which some dealt with here are included, were "unable to impose the master's [Scott's] distinction between past history and present politics, and as a consequence the gothic mode endured there [Ireland] in a fugitive and discontinuous manner throughout the nineteenth century" (McCormack 1991: II, 831).
In Maxwell's The Fortunes of Hector O'Halloran the reader is introduced to the character of hector.Born in Knockloftie, the stronghold of the O'Hallorans, of an English catholic mother and a protestant father with a Gaelic surname, Hector is Colonel O'Halloran's son, who fought together with the Anglo-Irish Arthur Wellington in the Low Countries as we are told later in the novel when Hector has an interview with the very "Iron Duke".Hector's father was "'every inch' a soldier; and in all relations between landlord and tenant, it was universally admitted that he was both liberal and kind" (Maxwell 1979: 2).The house will be attacked by a secret society, the Whiteboys, enabling Maxwell, with such a short narrative strategy, to present briefly "all the ideological combatants of the epoch, so that it appears to be not just the embodiment of a political class but of all classes and creeds" (McCormack 1991: 835) in contemporary Ireland.Thus, whereas the main topic and theme will be that of adventure and rollicking experience, Maxwell leaves a stamp of the social discourse that was still much of an issue in Ireland.
However, the rush abandonment of an analysis of the Irish social, political and religious situation of the start of the novel for a more comic and remote set of adventures has produced different appreciations of Maxwell's narrative technique.Maxwell's style and accounts about the Ireland of the time could be deemed as light -or "not profound", as Wolff states (Maxwell 1979: ix)-at a time when the dire plight of pre-Famine Ireland started to grain ground in Irish and Anglo-Irish writing and an analysis of causes and effects in historical background seemed peremptory.W.J. McCormack also deems Maxwell as "third-rate novelist" and prefers to include him in "the comic side of Irish Gothic".For McCormack, Maxwell's work "identifies several anxieties that underpin Irish gothic fiction.One of these is simply the pressure that contemporary, local and actual events exercise upon an imagination seeking to represent things that are remote in time or space" (McCormack 1991: 834).
The way in which Hector O'Halloran gets suddenly involved in the Iberian Peninsular conflict is a clear instance of Maxwell's technique.For The Dublin University Magazine in 1841, Maxwell is said to unite "with the sparkling wit of his native country the caustic humour and dry sarcasm of the Scotch" (222).Maxwell is "unrivalled in the easy portraiture of the Irish gentleman" although within the unionist editorial bias characteristic of the Dublin University Magazine, it is in Maxwell's account of British military victories that the reader turns with "a proud swelling at his bosom to think that he also is a Briton" (222).Accordingly, he is more Anglo-Irish.Hector's wanderings constitute an array of soldierly life together with a descriptive analysis of the main battles in which Irish regiments and soldiers took active part.Hence Maxwell combines the approaches to battles like Ciudad Rodrigo, Salamanca, Talavera with accounts of the picaresque life in the posadas or inns and the atmosphere of the guerrillas.It is in the world of these patriotic defenders of Spain that Maxwell develops his rollicking mastery.Together with references to historical characters such as Juan Diez "el Empecinado" -the Spanish guerrillero was not unknown for the Irish and Anglo-Irish readers.In 1823 Miss Alicia Le Fanu, R.B. Sheridan's niece, had published Don Juan de las Sierras, or, El Empecinado-and his followers Jose Martinez "the Student", El Manco, or "The Maimed", "El Cura" [the priest], Maxwell offers the Cervantine Sancho in Mark Antony, strewn with stereotypical Irish wit, Hiberno-English lines and picaresque resolution.
Hector's and Mark Antony's lives start together when Mark Antony is adopted by Hector's parents on knowing that his parents are dead and his father had served in Colonel O'Halloran's battalion.To Hector Mark Antony is his "foster-brother", although from an early stage Hector will be educated to enjoy a military career; whereas, Mark will be taught by the "village pedagogue" and become a county boxer later.Maxwell's social stance is shown and will be repeated throughout the whole novel.Accordingly, whereas Hector will part riding his mare, Mark Antony, the Sancho Panza of the novel, is characterised as a hero and "true Milesian", always carrying "a few necessaries required for his journey [which] were formed into a bundle of small dimensions, and suspended from the extremity of a well-tried shillelagh" (Maxwell 1979: 65).After Hector's education he leaves for Dublin and it is on his trip to the Irish capital that the first Cervantine adventures occur: he is assailed by countrymen, as he was mistaken for a gauger, a big mistake in Ireland as Hector remarks, although his kidnappers vanish into thin air as a stranger appears and directs him to a house in which he will be introduced to a semi-Dulcinea, Isidora, who will give him a token, as a real knight and his lady.Maxwell sums up his Cervantine inspiration in Hector O'Halloran's words after his first adventure, words which could apply to a Quijote character rather than to what is a clear epitome of his down-to-earth squire: What a "whirligig world" we live in!I was but one day fairly flown upon it, and what a medley of adventure had it not produced!In the morning, starting full of "gay hope," and for the first time master of myself; in the evening, captive of a gang of ruffians, who, in drunken barbarity, would have consigned me to the bottom of the lake, with less compunction than that with which a schoolboy drowns a kitten.At night, inmate of a strange mansion, doubtfully received, half rejected afterwards, and now domesticated, as if I had been undoubted heir to every barren hill in view.All this was passing strange; and, lost "in wild conjecture," and unable to read riddles, I betook myself to sleep.( 47) The narrative strategy Maxwell develops in the novel presents Cervantine tinges as well; not only because of the use of Hector as narrator of his and Mark Antony's stories -which he did not live-but, also because the novel is intertwined with a series of stories within stories, as in Cervantes's Don Quixote, a closer influence can also be seen in William Carleton's popular Traits and Stories of the Irish Peasantry (1830), a work with which Maxwell was familiar.Thus, among many others we find: 'The Story of the Wandering Actress', 'The Sailor's Story', 'The Robbery of Tim Maley', 'The Matrimonial Adventures of Dick Macnamara', 'My Uncle's Story' (a long digression on the plight of Spanish South America) and 'The Voltigeur'.Besides, Maxwell leaves the reader in suspense with the manner in which he finishes his chapters, very much like in Don Quixote: "but we must leave the reader in temporary suspense, as, with this occurrence, we intend to commence another chapter" (108).As in Don Quixote, Hector "quitted the Emerald Isle, on the pleasant and profitable pursuit of 'the bubble reputation'" (103).But, already here Maxwell makes a difference as Hector leaves Ireland "for glory"; whereas, Mark Antony does it "for love" (104).The next step in their journey for adventure takes both protagonists to London, although the London Maxwell wants to 
portray conveys an idea of a much Irish London, as the reader is taken through streets and people that could well have been in any other place of the Ireland at the time.
In fact, Maxwell makes a statement, socially and politically, about the Irish condition at the time, and also about the condition of the Irish in London. Maxwell was much influenced by another Anglo-Irish writer, John Corry (1770-1825), born in county Louth, whose Satirical View of London (1801) would be considered by many as a "tourist guide" of the age and the metropolis, in which we find a description of all levels of society in London and, among them, especially the numerous Irish community. Corry's work was a source of the very much ingrained stock-characterisation of the Irishman that had already been extensively shown by many Anglo-Irish playwrights, such as Charles Stuart and John O'Keeffe, whose productions would also be popular on the London stage for a number of decades. Maxwell, as well as Lever, was one among many Anglo-Irish writers who made use of the stock-characterisation of the Irishman for political and social purposes, mainly those of the Anglican Anglo-Irish ascendancy. In The Irishman in Spain (1792) Charles Stuart had already offered a stereotypical and much used version of the stage-Irishman. Stuart presents a plot enmeshed with the tribulations of servants and busybodies. For Truninger, two main reasons account for the appearance of the Irish as servants far from the literary sphere, which evince the representation of Irish colonial subjugation by England. Accordingly, Truninger highlights "their excellence [and] exotic appearance"; but, most importantly, the fact that these Irishmen represent "a triumphant sign of the British mastery over the smaller island" (Truninger 1976: 23). The first presentation of Kilmainham, the Irish servant in Stuart's play, already displays all the traits expected from a picaresque stage-figure: an Irish clown at the mercy of his master's will. For Christopher Murray, "this particular version of the stage Irishman … was a continuing temptation … not to stress the 'well known humanity' of the Irish, but to depict the stage Irishman as 'vacuous'" (Murray 1991). Stuart depicts a character seen as illogical, prone to excess, with a language in which mispronunciation and wit abound. Stuart's representation of speech and behaviour as "deformed" accounts for the Irish "social and political condition as deformed" (Deane 1997: 55). Most Anglo-Irish authors of the time defended "some form of sobriety": "a rational articulation that was beyond the capacity of the [Irish] national character to produce" (55). Behind Stuart's "vacuous" representation of Kilmainham we find the belief in the need of English order and not French aid for Ireland. Indeed, Stuart's caricature of an inarticulate Kilmainham advocates the need of an English "orderly" presence in Ireland, i.e.
the imperial colonisation of the union: Irish eloquence became the index of Irish inarticulacy, speech removed from factblarney.Speech of this kind could not accurately define a condition; for Irish speech to be trusted, and for its account of the Irish experience to be acceptable, it must be subjected to the protocols of English speech and, in consequence, to the 'improving' English account of the Irish condition that accompanied the Union.(55) Charles Stuart's way of pleasing the London stage audience also corroborates his views to "normalcy" in Ireland at the end of the eighteenth century.Charles Stuart's process of normalization "depends on the success of a system of representation in which all that is extreme is brought under narrative control" (19).For Deane, the function of the author is "to communicate to an audience that shares her or his values a sense of the radical difference of the other territory or condition and … to claim that this territory and condition, once relieved of the circumstances or causes of its extraordinary condition, can be redeemed for normality "(19).
William Hamilton Maxwell's depiction of Mark Antony does not celebrate difference or otherness in the "vacuous" stage Irishman.On the contrary, he mocks and subjugates identity and characterisation through his approach to this Irish character in London and the Iberian Peninsula.
He does not formulate the principles of "equality" between Irish and English either, and if he does, these are "overwritten by the values of the dominant subject" (Smyth 1998: 16).Maxwell is directly opposing the "liberal, egalitarian and universalist strategies" -which sprang from the Enlightenment and the French revolution-in the portrayal of the Irish national character.Instead he uses a representational system "which confirm [s] the original opposition between coloniser and colonized" (16).Mark Antony O'Toole, the real Irishman in London, does not want to disappear in the "splendid model" (Memmi 1974: 120) of the coloniser.This analysis could also ignite further postcolonial criticism on these examples, although the extension of the paper does not allow us to dwell into those matters with depth and can be left open for further research.
From London they embark for the Peninsula "that scene of British glory" (256), arriving in Portugal, as it was the norm with British troops at the time, ready for the final stage of the campaign, although as McCormack suggests, both Hector and Mark Antony "set off to recolonize Spain for Maxwell's particular brand of harmless, distinctly unsatirical picaresque, immune to the mortality normally associated with violence and warfare" (McCormack 1991: 834-835).Hector's mishaps in the Peninsula commence with Wellington, as an effective way of luring the reader to the campaign.After the reference, once again, to the Anglo-Irish "Iron Duke", the reader is introduced to a more Cervantine landscape -central Spainalthough the events do recall the previous narrative line in terms of satire and comedy: the visit to a Spanish inn, where Hector meets "El Empecinado", and gives a prolific description of the inn and how the innkeepers maintain their business, amid French occupation.Both Hector and Mark Antony save a French voltigeur from a guerrilla squadron, a deed that increased the bond established since infancy.It is in Spain that William Hamilton Maxwell refers back to Cervantes's original as a way of substantiating his inspirational source.This is clearly exposed, when Hector O'Halloran and Mark Antony O'Toole see two wayfarers they are explicitly described by Hamilton Maxwell in a hint to the source of his whole idea and theme behind his rollicking novel, as Don Quixote and Sancho Panza: One seemed an hidalgo of the Quixotic school -a thin, tall, shabby half-starved looking gentleman.His gait was stiff and lofty; and at first, the unhappy man seemed to labour under a delusion that we would resign a corner in his favour.Speedily that error of opinion was removed; and he ascertained, that upon us the imprint of his dignity was lost.He therefore contented himself with taking a place before the fire, demanding, in lordly tones, attendance, and more fuel, -'but none did come, though he did call for them'.
The other was a round, stumpy, well-fed, happy-looking little man, now touching close upon the grand climacteric. The world had evidently gone well with him, to judge by what, in Ireland, they would term 'a cozey character' of countenance. He poked the fire, but complained not; talked of the wild evening, and blessed the saints he was under shelter; hoped, rather than expected, that we might obtain a supper; concluding with a Christian-like expression of resignation, that really would have done honour to a Turk. (308-309)
William Hamilton Maxwell precipitates the end of the novel. Hector's and Mark Antony's adventures in the Iberian Peninsula take place in the Basque provinces, particularly in Vitoria and San Sebastian, at a time when Napoleonic France was being defeated by the Duke of Wellington. Hector is interviewed by the very Wellington and wounded in San Sebastian before returning to England, where his Dulcinea, Isadora, awaits him in a happy ending of the novel, much in accordance with the highly-read romantic tradition at the time of Maxwell's publication.
Following the convention and as any combination between romance and comedy requires, William Hamilton Maxwell's Cervantine novel precipitates into the final romantic marriage of Hector O'Halloran and his Isadora.But, Maxwell, following his suspense technique ends his novel with a reference to Mark Antony O'Toole and his wife, both surrounded by a throng of as it was probably expected by readers both in Ireland and the English discourses during the first half of the nineteenth century.
Intertwined with social, political and cultural representation and much imbued with the features of the gothic genre the nineteenth century witnessed what could be termed as Anglo-Irish Cervantine.Taking Miguel de Cervantes's masterpiece of all times Ireland and Anglo-Ireland adapted to their purposes the main ingredients of this Spanish novel among novels.In the Irish newspapers of the time, which constantly referred to political and military turmoil in nineteenth-century Spain, Don Quixote de la Mancha clearly represented the character of Spain and her literary genius at a time of political, religious and social turmoil in Spain, which to a certain extend could find a reflection in nineteenth-century Ireland.The authors briefly analysed above, and many other minor Irish and Anglo-Irish instances left for further research, saw in Cervantes's work a serviceable inspirational source from which to expand their creativity, always having a special say -albeit minor or anecdotic-of their view to Ireland in terms of society, politics and religion.
The twentieth century, which witnessed the coming and birth -as the phoenix myth-of a new Ireland, in religious, social and political terms, much in accordance to the article on Cervantes in the unionist and Tory-proned Dublin University Magazine in 1866, to which we referred above, also saw an adaptation of Don Quixote for the stage.Lady Gregory's play Sancho's Master (Abbey Theatre 14 March 1927) also encapsulated her main ideal design for the real Ireland she lived in.Gregory's recurrent allusions to Don Quixote and Sancho Panza constituted referents in key moments of her ideological and literary production, contributing to the Irish literary scene in creative and political ways.What we could even term as Lady Augusta Gregory's Cervantine ideologymainly because she made use of this figure of Sancho-the noble and attendant squire -and Quijote-the day-dreamer but instigator of reflection, thought and commiseration owing to his much-erred philosophy and way of lifeextensively in her production and personal diaries-would lead her to state in 1916, a mythic year for Ireland, her belief in the universality of Miguel de Cervantes's masterpiece and the validity for her troubled Ireland at the time, as Lady Augusta Gregory's much-beloved country contained "tragedy and comedy, idealism and common sense, the knight errant and the squire erred, the Don Quixote and the Sancho Panza" (Gregory 1995: 290).
Notes
1.I owe great thanks to the Department for Education Eusko Jaurlaritza-Gobierno Vasco for postdoctoral fellowship support BFI03.224 that made this research and writing possible.I am also grateful to Profs Tadhg Foley and Gearóid Ó Tuathaigh at NUI-Galway, whose reading discussions on this paper provided valuable suggestions.Besides, and most importantly, I owe special thanks to Dr Pedro J. Pardo who guided me on the Cervantine inspiration, and provided me with a manuscript version of his study on Barrett, which I read before publication in the forthcoming Enciclopedia Cervantina, and used to put me on track to search further Cervantine connections in Ireland.
3. Stephen Gwynn recalls the first meetings of Lever and Maxwell: Gwynn, Stephen. 1936. Irish Literature and Drama. London: Elkin Mathews, 76. For Patrick Rafroidi, Maxwell actually launched Charles Lever as a writer: Rafroidi, Patrick. 1980. Irish Literature in English: The Romantic Period, 1789-1850. Gerrards Cross: Colin Smythe. Vol. II, 276. On who was the initiator of the rollicking style in nineteenth-century Irish fiction, McCormack even quotes almost fully Charles Gavan Duffy's analysis of the plagiarisms from Maxwell's My Life (1835) in Lever's Charles O'Malley (1841): McCormack, William John. 1991. "The Intellectual Revival". Field Day Anthology of Irish Writing, Ed. Seamus Deane. Derry: Field Day. Vol. I, 1173-1300.

-This damn'd Irish fellow I pick'd up in my travels, is always out of the way! [Enter Kilmainham.] KILMAINHAM Your honour's pleasure, my lord! [Bowing.] GUZMAN Psha! where have you been? I'm not a lord here, sirrah, but a Don: we gentlemen in Spain are all Dons. KILMAINHAM Dons in Spain! -troth, we can have many Dons in Ireland too. GUZMAN Aye? KILMAINHAM Many! we have Don-nell -we have O'Donnell -we have Mac Don-nell -we have Don-noughmore -we have Donnoughadee -we have- [Counting his fingers.] (Stuart 1791: 7-8)
DYNAMICAL BEHAVIOUR OF FRACTIONAL-ORDER PREDATOR-PREY SYSTEM OF HOLLING-TYPE
In this paper, the local derivative in time is replaced with the Caputo-Fabrizio fractional derivative of order α ∈ (0, 1). A two-step fractional version of the Adams-Bashforth method is formulated for the approximation of this derivative. To enhance the correct choice of parameters when numerically simulating the full system, we examine the stability analysis of the main equation. Two important examples are drawn to explore the dynamic richness of the predator-prey model with Holling-type response. Simulation results at different instances of α are in agreement with the theoretical findings.
1. Introduction. The study of fractional-order differential equations has been of great interest. Fractional derivatives of the Caputo, Caputo-Fabrizio, Riemann-Liouville or Atangana-Baleanu type are important mathematical tools which have been used for developing models in application areas of biology, control, economics and finance, electrical circuits/networks, rheology, nuclear physics, viscoelasticity, chemical physics, fluid flows, signal processing, and dynamical phenomena in self-similar and porous structures [5-9,20], just to mention a few.
The Caputo-case model was presented in [24,31] for fractional ordinary differential equations. The Caputo-Fabrizio model type was considered in [7,8,11,12,25,26], and the Riemann-Liouville type of fractional reaction-diffusion equations has been considered in [20-24,27-31]. The most recent Atangana-Baleanu fractional derivative, formulated with a nonlocal and nonsingular kernel and applied to model various phenomena in science and engineering, was discussed in [1-4,10,13-16,32,34-39] and references therein. To our knowledge, application of the Caputo-Fabrizio type to model the dynamic behaviour of a predator-prey system with a Holling-type functional response has not been reported. Hence, we are motivated in this work by considering the general time-fractional differential equation
$$ {}^{CF}_{0}D^{\alpha}_{t}u(t) = f(t, u(t)), \qquad u(0) = u_0, \qquad (1) $$
where ${}^{CF}_{0}D^{\alpha}_{t}$ is the Caputo-Fabrizio fractional derivative of order α defined as [7,11,12]
$$ {}^{CF}_{0}D^{\alpha}_{t}u(t) = \frac{M(\alpha)}{1-\alpha}\int_{0}^{t} u'(\tau)\exp\!\left[-\frac{\alpha(t-\tau)}{1-\alpha}\right]d\tau, \qquad (2) $$
where M(α) is a normalization function, such that M(0) = M(1) = 1. In what follows, a quick tour of some properties of fractional differentiation will be presented. The Caputo fractional derivative of order α > 0 is defined by
$$ {}^{C}_{0}D^{\alpha}_{t}u(t) = \frac{1}{\Gamma(n-\alpha)}\int_{0}^{t}\frac{u^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}}\,d\tau, \qquad n-1 < \alpha < n. $$
The Riemann-Liouville fractional derivative of order α ∈ (0, 1] for a function u(t) ∈ C¹([0, b], Rⁿ); b > 0 is given by [20,28]
$$ {}^{RL}_{0}D^{\alpha}_{t}u(t) = \frac{1}{\Gamma(n-\alpha)}\frac{d^{n}}{dt^{n}}\int_{0}^{t}\frac{u(\tau)}{(t-\tau)^{\alpha+1-n}}\,d\tau $$
for all t ∈ [0, b] and n − 1 < α < n, where n > 0 is an integer. Atangana and Baleanu [9] proposed the following derivatives in the sense of Caputo and Riemann-Liouville. Let y ∈ H¹(a, b), a < b, α ∈ [0, 1]; then the Atangana-Baleanu fractional derivative in the Caputo sense is given as [10]
$$ {}^{ABC}_{a}D^{\alpha}_{t}y(t) = \frac{M(\alpha)}{1-\alpha}\int_{a}^{t} y'(\tau)\,E_{\alpha}\!\left[-\frac{\alpha(t-\tau)^{\alpha}}{1-\alpha}\right]d\tau, $$
where M(α) has the same properties as in the case of the Caputo-Fabrizio fractional derivative and E_α denotes the Mittag-Leffler function. Let y ∈ H¹(a, b), a < b, α ∈ [0, 1]; then the Atangana-Baleanu fractional derivative in the Riemann-Liouville sense becomes [10]
$$ {}^{ABR}_{a}D^{\alpha}_{t}y(t) = \frac{M(\alpha)}{1-\alpha}\frac{d}{dt}\int_{a}^{t} y(\tau)\,E_{\alpha}\!\left[-\frac{\alpha(t-\tau)^{\alpha}}{1-\alpha}\right]d\tau. $$
The rest of this paper is structured as follows. The main model is introduced in Section 2. We proceed with the mathematical analysis of the local derivative in order to ascertain the correct choice of the parameters when numerically simulating the model. The numerical method for the approximation of the Caputo-Fabrizio fractional derivative is presented in Section 3. The simulation experiment for different α values is reported in Section 4. We finally conclude with Section 5.
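As a small illustration of definition (2) (a minimal numerical sketch under the usual convention M(α) = 1; the test function is chosen only for demonstration), the Caputo-Fabrizio derivative of a smooth function can be evaluated by simple quadrature:

```python
import numpy as np

def cf_derivative(u_prime, t, alpha, n_quad=2000, M=lambda a: 1.0):
    """Evaluate the Caputo-Fabrizio derivative (2) at time t by trapezoidal
    quadrature, given the classical derivative u_prime of the function."""
    tau = np.linspace(0.0, t, n_quad)
    kernel = np.exp(-alpha * (t - tau) / (1.0 - alpha))
    return M(alpha) / (1.0 - alpha) * np.trapz(u_prime(tau) * kernel, tau)

# Example: u(t) = t**2, so u'(t) = 2t; CF derivative at t = 1 for alpha = 0.9.
value = cf_derivative(lambda tau: 2.0 * tau, t=1.0, alpha=0.9)
print(value)
```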
2. Numerical method for fractional differential equations with the Caputo-Fabrizio fractional derivative. In this section, we introduce the newly formulated numerical scheme that is based on the Adams-Bashforth scheme for the approximation of the Caputo-Fabrizio fractional derivative. By following closely the recent idea presented in [10], we consider the general fractional differential equation (1) with the Caputo-Fabrizio fractional derivative (2). By using the fundamental theorem of calculus, equation (1) transforms into
$$ u(t) - u(0) = \frac{1-\alpha}{M(\alpha)} f(t, u(t)) + \frac{\alpha}{M(\alpha)} \int_{0}^{t} f(\tau, u(\tau))\, d\tau, $$
in such a way that, at the grid points $t_n = nh$,
$$ u(t_{n+1}) - u(0) = \frac{1-\alpha}{M(\alpha)} f(t_n, u_n) + \frac{\alpha}{M(\alpha)} \int_{0}^{t_{n+1}} f(\tau, u(\tau))\, d\tau \qquad (8) $$
and
$$ u(t_n) - u(0) = \frac{1-\alpha}{M(\alpha)} f(t_{n-1}, u_{n-1}) + \frac{\alpha}{M(\alpha)} \int_{0}^{t_n} f(\tau, u(\tau))\, d\tau. \qquad (9) $$
By subtracting (9) from (8) we have
$$ u(t_{n+1}) - u(t_n) = \frac{1-\alpha}{M(\alpha)} \big[ f(t_n, u_n) - f(t_{n-1}, u_{n-1}) \big] + \frac{\alpha}{M(\alpha)} \int_{t_n}^{t_{n+1}} f(\tau, u(\tau))\, d\tau. $$
Over $[t_n, t_{n+1}]$ the integrand is approximated by the Lagrange interpolation polynomial through $(t_{n-1}, f(t_{n-1}, u_{n-1}))$ and $(t_n, f(t_n, u_n))$,
$$ P(\tau) = \frac{f(t_n, u_n)}{h}(\tau - t_{n-1}) - \frac{f(t_{n-1}, u_{n-1})}{h}(\tau - t_n), $$
which means that
$$ \int_{t_n}^{t_{n+1}} f(\tau, u(\tau))\, d\tau \approx \frac{3h}{2} f(t_n, u_n) - \frac{h}{2} f(t_{n-1}, u_{n-1}). $$
Therefore,
$$ u_{n+1} = u_n + \left( \frac{1-\alpha}{M(\alpha)} + \frac{3\alpha h}{2M(\alpha)} \right) f(t_n, u_n) - \left( \frac{1-\alpha}{M(\alpha)} + \frac{\alpha h}{2M(\alpha)} \right) f(t_{n-1}, u_{n-1}), $$
which is the required two-step Adams-Bashforth scheme for numerical approximation of the Caputo-Fabrizio fractional derivative. It should be noted that if α = 1, we recover the classical Adams-Bashforth method. The above fractional scheme was studied completely in [10,25,26] and the convergence and stability results are summarized in the following theorems.
Theorem (stability). Let f be a continuous and bounded function; then the scheme is stable for the Caputo-Fabrizio fractional derivative, in the sense that if $\| f(t_n, u_n) - f(t_{n-1}, u_{n-1}) \|_\infty \to 0$ as $n \to \infty$, then $\| u_{n+1} - u_n \|_\infty \to 0$ as $n \to \infty$.
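A minimal sketch of how the two-step formula above can be coded is given below (M(α) = 1 is assumed, and the single starting step and the test problem are illustrative choices, not prescribed by the scheme itself):

```python
import numpy as np

def cf_adams_bashforth(f, u0, t_end, h, alpha, M=lambda a: 1.0):
    """Two-step fractional Adams-Bashforth scheme for CF_0 D^alpha_t u = f(t, u)."""
    n_steps = int(round(t_end / h))
    t = np.linspace(0.0, n_steps * h, n_steps + 1)
    u = np.zeros((n_steps + 1,) + np.shape(u0))
    u[0] = u0
    # Starting value: one explicit Euler-type step (an assumption; any
    # one-step starter could be used to obtain the first point).
    u[1] = u[0] + h * np.asarray(f(t[0], u[0]))
    c1 = (1.0 - alpha) / M(alpha) + 3.0 * alpha * h / (2.0 * M(alpha))
    c2 = (1.0 - alpha) / M(alpha) + alpha * h / (2.0 * M(alpha))
    for n in range(1, n_steps):
        u[n + 1] = u[n] + c1 * np.asarray(f(t[n], u[n])) \
                        - c2 * np.asarray(f(t[n - 1], u[n - 1]))
    return t, u

# Hypothetical scalar test problem: CF_0 D^alpha_t u = -u, u(0) = 1.
t, u = cf_adams_bashforth(lambda t, u: -u, u0=1.0, t_end=5.0, h=0.01, alpha=0.9)
```

Setting alpha = 1 reduces the coefficients to 3h/2 and h/2, i.e. the classical two-step Adams-Bashforth method.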
3. Analysis of the main equation with local derivative. The main model considered in this paper describes the dynamics of a predator-prey system with a Holling-type functional response, system (13), where u1 and u2 are functions of time that represent the species population densities of the prey and the predator, respectively. The carrying capacity of the prey is denoted by κ, the death rate of the predator is given by σ, the growth rate of the prey is denoted by ϕ > 0, and the maximum predation rate is also positive. The parameter φ > 0 stands for the half-saturation constant, and ψ is chosen in such a way that system (13) does not vanish for positive u1 and b > −2√φ. We shall study these dynamics strictly in the first quadrant, where all the parameters are biologically feasible.
4. Numerical experiments.
In this section, we explore the dynamic richness of the fractional predator-prey model with Holling type-IV functional response. The local time-derivative in system (13) is replaced with the Caputo-Fabrizio fractional derivative to obtain system (17).
We simulate system (17) with parameters φ = 0.1; ψ = 0.001; ϕ = 5; κ = 0.7; the maximum predation rate set to 1; and σ = 1, for different values of α, to obtain the results in Figures 1-4. A strange attractor of the species is reported in Figure 4 for α = 0.48. We extend our simulation experiment by considering the time-fractional reaction-diffusion version with the Caputo-Fabrizio derivative, where d1 and d2 are the diffusivity constants and ${}^{CF}_{0}D^{\alpha}_{t}u(t)$ remains as earlier defined. In this case, the solution is sought as a function of position x and time t. The second-order partial derivative is approximated with the second-order central finite difference scheme, see [19]. In the simulation, we compute with the zero-flux boundary condition and initial function given in [33]. The diffusivity coefficients are set to d1 = 0.007 and d2 = 0.1. As displayed in Figures 5 to 9, it is obvious that both species undergo a spatiotemporal oscillation in phase. It should be mentioned that as α approaches 1, there exists a stable distribution. The biological implication is that the trajectories approach the boundary (washout) equilibrium point E1(κ, 0) as t → +∞, which implies that the prey population will tend to the stable density κ, while the predator population will tend to extinction, as shown in Figures 1-3.
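A sketch of how such an experiment can be set up is shown below, reusing the cf_adams_bashforth routine from Section 2. Because the displayed form of system (17) is not reproduced here, the right-hand side uses a generic Holling type-IV (Monod-Haldane) response as an assumption, together with the parameter values quoted above; the initial densities are likewise hypothetical.

```python
import numpy as np

# Assumed Holling type-IV (Monod-Haldane) right-hand side; the exact form of
# system (17) is an assumption made only for this illustration.
phi_half, psi = 0.1, 0.001   # half-saturation constant, shape parameter
growth, kappa = 5.0, 0.7     # prey growth rate, carrying capacity
predation, sigma = 1.0, 1.0  # maximum predation rate, predator death rate

def predator_prey(t, u):
    u1, u2 = u
    response = predation * u1 / (u1 * u1 / psi + u1 + phi_half)
    du1 = growth * u1 * (1.0 - u1 / kappa) - response * u2
    du2 = response * u2 - sigma * u2
    return np.array([du1, du2])

# Simulate for several fractional orders, as in Figures 1-4 (hypothetical u0).
for alpha in (0.48, 0.70, 0.90, 1.00):
    t, u = cf_adams_bashforth(predator_prey, u0=np.array([0.5, 0.3]),
                              t_end=200.0, h=0.01, alpha=alpha)
```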
5. Conclusion. In this paper, a fractional version of the Adams-Bashforth scheme is applied to numerically approximate the Caputo-Fabrizio derivative, which was used to study the dynamic complexities of a predator-prey system with Holling-type functional response. Mathematical analysis of the local-derivative system is examined to guarantee a good choice of the parameters. Our findings on stability show that the system is globally asymptotically stable. Simulation experiment results obtained for different instances of fractional order α confirm the theoretical findings.
The Effect of Epidural Analgesia on the Delivery Outcome of Induced Labour: A Retrospective Case Series
Objective. To investigate whether the use of epidural analgesia during induced labour was a risk factor for instrumental vaginal delivery and caesarean section (CS) delivery. Study Design. This was a retrospective case series of primigravidae women being induced at term for all indications with a normal body mass index (BMI) at booking and under the age of 40 years. Results. We identified 1,046 women who fulfilled the inclusion criteria of which 31.2% had an epidural analgesia. Those with an epidural analgesia had significantly greater maternal age, higher BMI, greater percentage of oxytocin usage, and a longer first and second stage of labour. Women with an epidural analgesia had a higher instrumental delivery (37.9% versus 16.4%; p < 0.001) and CS delivery rate (26% versus 10.1%; p < 0.001). Multivariable analysis indicated that the use of an epidural was not a risk factor for a CS delivery but was a risk factor for an instrument-assisted delivery (adjusted OR = 3.63; 95% CI: 2.51–5.24; p < 0.001). Conclusion. Our study supports the literature evidence that the use of an epidural increases the instrumental delivery rates. It has also added that there is no effect on CS delivery and the observed increase is due to the presence of confounding factors.
Introduction
Epidural analgesia is a central nerve blockade technique which involves the injection of a local anaesthetic into the lower region of the spine, thus blocking the painful impulses that are generated from the nerves of the contracting uterus during labour. It is most commonly used for intrapartum pain management with approximately 20% of women in the United Kingdom [1] and 60% of women in the United States [2] utilising this technique as a form of pain relief. A recent Cochrane review in 2012 summarised the available evidence from other existing Cochrane systematic reviews on the efficacy and safety of nonpharmacological and pharmacological interventions to manage pain in labour [3]. The authors of this review reported that epidural analgesia is the most effective pain management method in comparison with other pharmacological and nonpharmacological methods [3]. However, even though the overall risk of a caesarean section (CS) delivery was not found to be increased, nevertheless epidural analgesia was found to be associated with an increased risk of assisted vaginal birth [3,4].
The primary aim of our study was to investigate the effect of epidural analgesia on the delivery outcome in women with induced labour. In order to account for the significant confounding factors of parity [5], age [6], and body mass index (BMI) [7] on the success of induced labour, we restricted the inclusion criteria of our women to those who were primigravidae and under 40 years of age and had a normal BMI at booking.
Materials and Methods
This was a retrospective case series of women induced for all indications at term (gestational age ≥37 weeks) at the Maternity Unit of the Shrewsbury and Telford Hospital (SaTH) National Health Service (NHS) Trust, between January 2007 and December 2013. Primigravidae women with a normal body mass index (BMI) at booking (<25 kg/m 2 ) and under the age of 40 years with singleton cephalic presentation deliveries were considered eligible for the study. Women induced for stillbirths and fetal congenital abnormalities and with multiple pregnancies were excluded. Data was collected from Medway5 obstetric electronic database and maternal data, labour/delivery data, and neonatal data were all recorded.
Maternal data recorded involved age, body mass index at booking, smoking status, and self-reported ethnicity (White-European, Asian, Black, or other). Labour and delivery data included route of birth (normal vaginal delivery, instrumental vaginal delivery, or caesarean section delivery), indications for instrumental delivery and CS delivery, epidural analgesia use, and liquor appearance (normal, meconium stained). In our unit, epidural catheters are placed at the L2-L3, L3-L4, or L4-L5 interspace when women have a cervical dilatation of ≥3 cm. Finally, neonatal data recorded were fetal gender (male, female), birth weight, head circumference, Apgar scores (at 1 and 5 minutes), cord gases taken at delivery (arterial/venous pH), and admission to the neonatal unit (NNU).
Quantitative variables were expressed as mean values (SD, standard deviation) and qualitative variables were expressed as absolute and relative frequencies. For the comparison of proportions Fisher's exact tests were used, and Student's t-test was computed for the comparison of mean values. Multivariable logistic regression analyses in a stepwise method (p for entry 0.05, p for removal 0.10) were used in order to determine independent factors that were associated with the odds of an instrumental and caesarean section delivery. The variables that were entered in the primary analysis were time duration of first and second stage of labour, age of the mother, smoking, ethnicity, BMI, liquor appearance, use of epidural, fetal gender, birth weight, and head circumference at birth. Our study included 1,046 women and, with the current sample size, the study had >95% power to perform a logistic regression using an alpha of 0.05, large effect sizes, and a two-tailed test. Statistical significance was set at p < 0.05 and analyses were conducted using SPSS statistical software (version 20.0).
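As an illustration of the analysis described above, the sketch below implements a p-value-driven stepwise logistic regression with the stated entry (0.05) and removal (0.10) thresholds. It is not the study's actual code: the statsmodels-based workflow, the function name and the data layout are illustrative assumptions, and adjusted odds ratios would be read off as the exponentiated coefficients of the final model.

```python
# Illustrative sketch (not the study's code): stepwise logistic regression with
# forward entry at p < 0.05 and backward removal at p > 0.10.
import numpy as np
import statsmodels.api as sm

def stepwise_logit(df, outcome, candidates, p_enter=0.05, p_remove=0.10):
    """df: pandas DataFrame; outcome: 0/1 column name; candidates: predictor column names."""
    selected = []
    while True:
        changed = False
        # Forward step: add the remaining candidate with the smallest p-value, if it qualifies.
        remaining = [c for c in candidates if c not in selected]
        pvals = {}
        for c in remaining:
            model = sm.Logit(df[outcome], sm.add_constant(df[selected + [c]])).fit(disp=0)
            pvals[c] = model.pvalues[c]
        if pvals:
            best = min(pvals, key=pvals.get)
            if pvals[best] < p_enter:
                selected.append(best)
                changed = True
        # Backward step: drop the selected variable with the largest p-value, if it fails the threshold.
        if selected:
            model = sm.Logit(df[outcome], sm.add_constant(df[selected])).fit(disp=0)
            worst = model.pvalues[selected].idxmax()
            if model.pvalues[worst] > p_remove:
                selected.remove(worst)
                changed = True
        if not changed:
            break
    final = sm.Logit(df[outcome], sm.add_constant(df[selected])).fit(disp=0)
    return final, np.exp(final.params)   # fitted model and adjusted odds ratios
```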
Ethical approval for collection and analysis of data in our study was obtained by the Research and Development Department of the Shrewsbury and Telford Hospital NHS Trust.
Results
The total sample consisted of 1,046 eligible women with a mean maternal age at delivery of 25.9 years (SD = 5.7 years). 88.2% of women were of White ethnic background, 4.1% were Asian, and 1.1% were of Black ethnic background. The mean value of BMI was 22 kg/m² (SD = 1.9 kg/m²) and 87.1% of the participants never smoked. During labour 31.2% of women had an epidural analgesia for pain relief and the instrumental delivery and overall caesarean section delivery rate were 23.1% and 15.1%, respectively. The mean birth weight was 3371 g (SD = 559 g) with 52.5% of the fetuses being male. Meconium stained liquor appearance was identified in 13.3% of the participants and 4% of all newborns were admitted to the neonatal unit (Tables 1 and 2).
Those with an epidural analgesia when compared to those without had a significantly greater maternal age, higher BMI, greater percentage of oxytocin usage, and a longer first and second stage of labour. Though all women had a normal BMI, the increasing BMI was associated with a greater use of oxytocin in labour (p = 0.01). The neonates of women with an epidural analgesia had a significantly greater birthweight and head circumference, lower Apgar scores at 1 minute but similar Apgar scores at 5 minutes, and higher values of arterial pH in their cord gases. Women with an epidural analgesia also had a significantly higher instrumental delivery (37.9% versus 16.4%; p < 0.001) and CS delivery rate (26% versus 10.1%; p < 0.001) (Tables 1 and 2).

Table 3 shows the results from multivariable stepwise logistic regression analysis with the dependent variable of presented route of birth (normal vaginal delivery versus instrumental delivery). The use of an epidural analgesia was independently associated with the odds of an instrumental vaginal delivery (OR = 3.63; 95% CI: 2.51-5.24, p < 0.001). Additionally, it was found that the increased mother's age at delivery, the increased second stage of labour, and decreasing gestational age were associated with greater odds for an instrumental delivery.

Table 4 presents the results from multivariable stepwise logistic regression analysis with the dependent variable of presented route of birth (vaginal delivery versus CS delivery). The use of an epidural analgesia was not found to be associated with the odds for a CS delivery. It was found that the increased birth weight and prolonged second stage were the two factors that increased the odds for CS delivery.
Discussion
We found that women with an epidural analgesia in comparison to those without had a significantly greater maternal age and a higher BMI. A survey conducted in 2010 showed that increasing maternal age was a significant factor associated with a woman's preference to have an epidural analgesia during labour [8]. However, a more recent large population-based study in the United States demonstrated that distributions of age were similar between epidural users and nonusers [9]. On review of the literature, there are no studies directly reporting on the finding of increased rates of epidural analgesia in women with a higher BMI. Nevertheless, there are reports that the increased BMI, due to the adipose tissue being hormonally active, predisposes to a reduced response to the induction of labour process because of the altered metabolic status of these women [10,11]. In our study we presume that women with a higher BMI may have also had a reduced response to induced labour, as we found that the increasing BMI was associated with a greater use of oxytocin in labour (p = 0.01), which could explain the higher rate of epidural usage due to a more painful labour.
Our study demonstrated that women with induced labour and an epidural analgesia as compared with those without had a significantly greater percentage of oxytocin usage and a longer first and second stage of labour. A recent Cochrane review in 2011 [4] reported that epidural analgesia was associated with an increased rate of oxytocin administration (RR = 1.19; 95% CI: 1.03-1.39). There is evidence that induced labour may be less efficient than spontaneous labour [12] and for this reason oxytocin administration may be necessary, thus rendering labour more painful and therefore requiring the use of pain relief. The Cochrane review in 2011 [4] also reported that epidural analgesia was associated with a longer second stage of labour (mean difference = 13.66 mins; 95% CI: 6.67-20.66) but showed no clear effect on the duration of first stage. On review of the literature there is conflicting evidence regarding the effect of epidural analgesia with reports of either prolonging [13] or shortening [14] the first stage of labour. In our cohort of women, both first and second stages of labour were prolonged in those women who had an epidural analgesia.
The neonates of women with epidural analgesia in our study when compared to those without had significantly lower Apgar scores at 1 minute but similar Apgar scores at 5 minutes. This is in line with the Cochrane review in 2011 [4] which reported that there were no significant differences in neonatal Apgar scores at 5 minutes in babies born to women with epidural analgesia. Our study has also shown that neonates from women with an epidural have significantly higher values of arterial pH in their cord gases. Higher cord pH values have also been reported in the past [15] and this finding could be explained by a recent immunohistochemical study [16] that demonstrated that pain-reducing anaesthesia seemed to reduce the oxidative stress in human term placenta.
We have found in our study that the use of an epidural analgesia after adjusting for multiple confounding factors was independently associated with the odds of an instrumental vaginal delivery (aOR = 3.63; 95% CI: 2.51-5.24). This is in line with the Cochrane review of 2011 [4] indicating an increased risk of assisted vaginal birth in women with an epidural during labour (RR = 1.42; 95% CI: 1.28-1.57). Previous studies however have shown that the rate of instrumental vaginal delivery depends on several other confounding factors such as the dose and concentration of the epidural solution used, the degree of analgesia during second stage, and obstetric factors [17,18]. It has been reported that the motor block which is the chief complication of labour epidural analgesia might result in prolonged labour and therefore increase the rates of instrument-assisted delivery [19].
Women with an epidural analgesia in our study when compared to those without had a significantly higher CS delivery rate (26% versus 10.1%). Nevertheless, after adjusting for multiple confounding factors, there was no significant difference noted between epidural users and nonusers. This is in line with the Cochrane review of 2011 [4] indicating that there is no significant difference in the risk of CS delivery overall. Previous studies have contemplated that the degree of motor block achieved by an epidural analgesia may result in a prolonged labour and therefore increase the rates of a CS delivery [19]. Other studies [17,20] however have demonstrated that epidural analgesia per se is unlikely to affect the chances of a normal delivery and there are many other factors that may contribute to a CS delivery such as the increased birthweight [17].
There are certain limitations to be considered about our study. First, data were retrospectively collected from an electronic database for the study period 2007-2013 where accuracy of data is dependent on the practitioner recording the information each time on the database. Second, our electronic database does not have a mandatory field for recording the epidural regimen that was used. There is literature evidence showing that different epidural analgesia formulas exhibit a different effect on the course of labour and the delivery outcome [19,20]. The main strength of our study includes its large sample size with inclusion of women who were primigravidae and under 40 years of age and had a normal BMI at booking in order to account for the significant confounding factors of parity [5], age [6], and body mass index (BMI) [7] on the success of induced labour.
In conclusion we have found that women with an epidural in our cohort have a threefold increased risk of an instrumental delivery. Our study lends support to the literature reports that an epidural analgesia is a risk factor for an assisted vaginal birth. It has also added that there is no effect on the CS delivery rates and the observed increase is due to the presence of confounding factors.
|
2017-11-03T07:34:21.409Z
|
2016-11-20T00:00:00.000
|
{
"year": 2016,
"sha1": "0ded5277524ce77caea70331a4e7f24d6e30e67c",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ogi/2016/5740534.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c78a24effdd410224a3f7ac67515668b13e502ae",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
257319919
|
pes2o/s2orc
|
v3-fos-license
|
Trigeminal neuralgia occurring after the third dose of Pfizer BioNTech COVID-19 vaccine. Complication or coincidence? An illustrative case report and literature review
The coronavirus disease 2019 pandemic is an ongoing concern for medical care worldwide. Since its emergence, multiple COVID-19 vaccines have been designed, allowing for more effective control of the pandemic. COVID-19 vaccines, like any other form of medical intervention, may cause adverse and unforeseen side effects, varying in frequency and severity. Determining a correlation between the occurring symptoms and the vaccination is often a challenging task, requiring multiple data sources and reported cases. So far, there have been multiple reports of trigeminal neuralgia developing after COVID-19 vaccination. A 36-year-old woman was admitted to the Emergency Ward due to chronic pain attacks in the left side of her face. The pain appeared two months ago, on the day following the vaccination using the third dose of the Pfizer BioNTech COVID-19 vaccine. At the Neurology Department she was diagnosed with trigeminal neuralgia. Based on the lack of any obvious causes, relation to the vaccination, and other similar reports, we assumed that the trigeminal neuralgia was a complication of the vaccination. Hospital treatment consisted of oxcarbazepine, dexamethasone and pregabalin. Treatment was successful, with transient episodes of exacerbation. Six months after the onset of the disorder the patient remains without pain. We believe that the presented case supports the possibility of trigeminal neuralgia occurring in relation to the Pfizer BioNTech COVID-19 vaccine administration. Additional reports may further contribute to establishing a certain link.
Introduction
Trigeminal neuralgia (TN) is a relatively uncommon chronic condition, affecting less than 0.5% of the general population [1]. It manifests itself as episodic attacks of sharp, electric, shock-like pain, usually unilateral, in the regions of the face subject to the fifth cranial nerve (CN V). Attacks are triggered by movements of the facial muscles, cold temperature, touch or are spontaneous in nature. TN is included in the 13th chapter of the International Headache Society classification [2]. Based on the aetiology, it is systematized there as classical (due to vascular nerve compression), secondary (evidence of clear cause) or idiopathic (no cause is apparent). Recently, TN has been noted as one of many possible neurological complications of coronavirus disease 2019 (COVID-19). Since December 2020, when COVID-19 vaccines became the primary form of pandemic control, about 13 billion doses of vaccines have been administered worldwide. Having done a literature review, we came across only a few cases of TN in total that were postulated to have developed after COVID-19 vaccination [3][4][5][6]. Herein we present a case of TN which originated shortly after the third dose of the Pfizer BioNTech COVID-19 vaccine, a discussion of its differential diagnosis, and a suggested effective treatment, based on our observations.
Patient case presentation
A 36-year-old woman was admitted to the Department of Neurology due to persistent pain in the left side of her face. The pain had first appeared two months ago, on the day following the vaccination using the third dose of the Pfizer BioNTech COVID-19 vaccine. Initially, the entirety of the left side of her face had been affected, but a few days later the pain became localized in the region of the second and third branch of the left trigeminal nerve. The pain was paroxysmal, presenting itself as attacks lasting 4-5 seconds each, triggered by movements of the mouth and jaw during activities such as eating or brushing the teeth. Attacks had been excruciatingly strong, with a score of 10 according to the numerical rating scale (NRS), and as such prevented the patient from participating in everyday functions, finally causing her to seek help at the Emergency Ward. Allodynia with painful sensation after application of cold air and hypersensitivity to stimuli were prominent during preliminary examination. Patient reported subjective, distorted sensations in the region of the second branch of the left CN V during exacerbations of pain, but no objective changes in the sense of touch examination were found. No other complaints or changes were observed in neurological evaluation. No relevant history of previous or concomitant diseases was reported. No familial history of neuralgia or similar conditions was reported by the patient. The previous two doses of the same type of vaccine were taken without any complications or adverse effects.
Standard laboratory tests showed no abnormal results. Concurrent COVID (the real-time quantitative polymerase chain reaction [RT-qPCR] test displayed negative results), as well as other potential ongoing infections, were excluded. There was no elevation in D-dimer levels, strongly suggesting a lack of pathological thrombotic processes. Magnetic resonance imaging (MRI) of the head with contrast did not show any significant pathologies that could contribute to the development of symptoms.
Prehospital, initial treatment consisted of carbamazepine in the dose of 200 mg taken twice a day, and Lignocainum hydrochloridum 5 mg per kg of body mass, used to alleviate stronger attacks. This management strategy was only partially effective and caused a decrease in frequency and intensity of pain paroxysms, but resulted in multiple side effects, mainly somnolence, which significantly impacted the daily living of the patient.
Sudden exacerbation of pain was observed after about eight weeks of the treatment described above and was associated by the patient with the onset of a concomitant viral infection of the upper respiratory tract. After admission to the Neurological Department, a second MRI with angiography (MRA) was performed (according to the European Academy of Neurology guidelines 2019) [7], which revealed no neurovascular conflict.
The new treatment was instituted with replacement of carbamazepine by oxcarbazepine 600 mg twice a day and introducing steroids based on a previous scientific report [3] encountered during our research following the patient's interview. Our case matched one description very well and, as such, with the patient's cooperation, therapy based on the reported findings commenced. Steroids were given in the form of dexamethasone 12 mg per day for two weeks and titrating doses during the following two weeks. After combined therapy employing steroids with oxcarbazepine, a reduction both in pain intensity and in frequency of attacks was observed within five days. Because the pain was still present, the treatment was further supplemented with pregabalin in the dose of 150 mg per day for two weeks and was continued for the next two months. Gradually, but with transient periods of weak exacerbation, pain alleviation was achieved and the patient made a full recovery. Treatment with oxcarbazepine and pregabalin was continued for two further weeks. Six months after the onset of the disorder the patient remains without pain. Treatment was officially declared complete in May 2022 ( Fig. 1).
Discussion of differential diagnoses
Differential diagnosis of TN includes possible presence of other similar conditions, such as glossopharyngeal neuralgia, cluster headaches, painful post-traumatic trigeminal neuropathy, persistent idiopathic facial pain, herpes zoster related neuropathy and dental pathologies. Diagnosis relies on highly specific clinical features, allowing it to be easily distinguished from the aforementioned ailments. Despite superficial similarities, such as with cluster headache symptoms overlapping somewhat with those of TN, based on different localizations, duration of the attacks and accompanying autonomic signs, a physician should be able to easily provide a proper diagnosis. What is more, in cluster headaches pain tends to migrate from one side of the face to the other, while it consistently remains limited to a single side in TN, usually in the 2nd or 3rd branch of CN V. Triggering factors other than those typically found in TN and different pain quality may indicate persistent idiopathic facial pain. Painful post-traumatic trigeminal neuropathy may resemble TN, but it is always preceded by a major traumatic injury and displays clear neurological abnormalities visible on neuroimaging. Herpes zoster neuropathy shares many common features with TN, but it is differentiated by the presence of highly distinctive skin lesions, usually in the 1st branch division of CN V. Altogether, using judicious observation and exclusions, a physician familiar with the basics of TN should be able to recognize it without much of a struggle.
Discussion of the final diagnosis
The patient described in this case report displayed a multitude of clinical features prompting us to diagnose her with a case of TN. These included specific triggering stimuli, that is cold, touch or movements of the jaw, characteristics of the pain, which was sharp, stabbing, and electric-like, affected area, limited to the 2 nd branch of CN V unilaterally, and duration of the attacks, shorter than two minutes.
Differential diagnosis requires a thorough examination to classify TN according to specific subtype. We excluded the possibility of it being a classical TN based on the lack of evidence of neurovascular conflict in MRA. Secondary TN was excluded based on absence of space-occupying tumours, a demyelinating process or other disorders. These results leave us with the possibility of either TN idiopathic or TN attributed to other causes. However, the patient did not suffer from any other conditions predisposing her to develop TN.
There have been many case reports in the literature which concerned TN as a complication of COVID-19 infection [8,9] and a few reports on TN developing after COVID-19 vaccination [3][4][5][6] as well. Neurological complications during the course of COVID-19 infection, apart from TN [8,9], include neuropathies such as facial nerve palsy [10] and sixth cranial nerve palsy [11], as well as Guillain-Barré syndrome [12]. So far, four cases of TN after COVID-19 vaccine have been described, all of which occurred after the Pfizer-BioNtech vaccine, after either the first [3,5,6] or the second dose [4]. Our case is the first case of similar symptoms developing after the third dose of the vaccine.
The pain was variously accompanied by a multitude of other neurological symptoms, including numbness in the V1, V2 or V3 branches of CN V, cervical radiculitis [5] and numbness of the left upper limb [4]. Neuroimaging studies are usually recommended to distinguish classic TN from secondary TN [13], and such studies were performed in each case we have encountered. Changes in MRI were detected in two cases as abnormal asymmetric thickening and robust perineural sheath enhancement of the V3 segment of the left CN V [5] or as hyperintensity in the right lateral dorsal pons, at a level above the CN V origin [4]. However, in the remaining cases, including the one described here, neuroimaging showed no significant changes.
Treatment of TN depends on a number of factors, such as age, general health, severity of symptoms and the cause of the condition. The first-line treatment of idiopathic TN usually is restricted to pharmacotherapy [13]. In patients who developed TN after a vaccine, administration of steroids significantly reduced the pain frequency and intensity and improved patients' condition [3][4][5]. Steroids were effective when administered both intravenously [3,4] and orally [4,5]. In one case combination of pregabalin and carbamazepine alone reduced pain and ameliorated facial numbness [6]. Pregabalin alone was insufficient to control pain attacks in all reviewed cases.
The immune-related reaction is suspected to be the underpinning pathomechanism in the described cases. Such a pathomechanism is proposed as a cause of neurological complications after defective immunization, with demyelination of the central nervous system (CNS) reported [4]. One of the currently suggested pathomechanisms of TN is local demyelination within the CN V root [14]. This process is usually triggered by compression of the root of the nerve, such as in the case of neurovascular conflict. Both peripheral and central demyelination was reported as a rare complication after COVID-19 vaccination [15].
In the cases presented so far in the literature and in our case, neuralgia symptoms developed within a few days, or sometimes even hours, after the vaccination [3][4][5][6]. The process that led to demyelination would have to develop rapidly. In addition, in the literature we can find dozens of cases of demyelination of the CNS, other than TN occurring after vaccination. It has been speculated that the mechanism for the development of this demyelination may be bystander activation [16]. Single-stranded mRNAs are able to activate TLR-7 and TLR-8 receptors, causing an increase in secretion of proinflammatory cytokines and a strong response from T and B lymphocytes, which results in activation of existing self-reactive T and B lymphocytes and development of inflammation [17]. The occurrence of this mechanism in the cases of TN discussed here is supported by the short intermediary period from the vaccination to the onset of symptoms. It is possible that the occurrence of such adverse reactions only in a relatively small number of vaccinated individuals is due to a genetic predisposition. Certain polymorphic variants of the pattern recognition receptor (PRR) may induce a stronger immune response [18]. Other mechanisms that can cause CNS demyelination include molecular mimicry and epitope spreading [16].
Our case is the first, as far as we know, occurrence of TN possibly occurring in relation to the 3 rd dose of the vaccine. However, based on the data provided by the Centers for Disease Control and Prevention (CDC) [19,20] we can conclude that the incidence rates of both local and systemic side effects after the 2 nd and 3 rd doses were comparable and only slightly higher than those occurring after the 1 st dose. Remaining on the subject of vaccinations, there is some speculation that the stronger immune response after the non-first doses may be due to the differences in the immune environment encountered by those doses. In the case of non-first doses, the vaccine affects not only naïve cells, but also primary specific antibodies and memory T and B cells formed after the first dose. Additionally, prime-induced resting trained innate cells can respond better than naïve cells to restimulation [21]. The exact molecular mechanisms of immune memory formation and maintenance after vaccination are not fully elucidated, but circulating antibody levels provided by Pfizer-BioNTech COVID-19 vaccination are greatly reduced at 6-8 months after vaccination [22]. Since our patient was vaccinated each time using the Pfizer-BioNTech COVID-19 vaccine, at the constant dose, maintaining the typical interval and employing the same route of administration, we may speculate that the above-described phenomenon might have influenced the occurrence of the adverse reaction after the booster. It still remains unclear why TN developed after the 3 rd rather than the 2 nd dose, but we can suppose it might have been influenced by other independent factors, which could have caused exaggeration of immunological response such as a subclinical infection the patient might have been suffering from, in the period when the vaccination took place. It is also worth remembering that peak antibody levels are typically reached after three vaccine doses [22]. The exacerbation of pain which was observed later during the course of TN was triggered by viral pharyngitis, an event with immunological implications. The beneficial effect after steroid treatment may indicate, again, the excessive immunological response as a cause. The improvement could not be entirely due to simultaneously introduced oxcarbazepine, because previously management with carbamazepine was partially effective. Carbamazepine and oxcarbazepine share approximately the same mechanisms and clinical efficacy, but in our case, radical improvement after switching from the former to latter, with the addition of steroids, was not only due to the alleviation of adverse effects, but also better pain control. Possibly, administration of steroids could act causally.
It is important to note that the nerve damage would derive not from the direct actions of the virus, but rather from the exaggerated and disproportionate immunological response of the organism to it. As such, it follows that the virus itself is not necessary for the nerve damage to occur, with only improper reaction to it being indispensable. Thus, the COVID-19 mRNA vaccine, which does not contain the virus proper, but generates, by its design, a response resembling that to the virus, could possibly cause similar symptoms to present themselves if this response is similarly distorted.
As was mentioned before, diagnosis of TN relies almost exclusively on the patient's history and symptoms reported and observed. Based on the clinical diagnosis, we cannot ascertain that there was an undisputed correlation between the occurrence of TN and vaccination. Neurovascular conflict was excluded, and so were the secondary causes, but it is exceedingly difficult to ensure that no idiopathic capacity was present. The question remains whether the origin of the disease was entirely spontaneous or the occurrence just coincided with the vaccination. The small number of similar cases does not allow us to confirm any categorical associations between the vaccine itself and the observed symptoms due to the lack of factual evidence supporting them, but, on the other hand, the infection itself has been definitively linked to multiple symptoms affecting the nervous system [23].
As can be seen in Table 1, delineating the so-far reported cases of TN disorders possibly related to the COVID-19 vaccine, a number of those cases [4][5][6] presented themselves with additional sensory disturbances in the trigeminal nerve territory, absent in the other cases [3], including ours. Numbness and other sensory dysfunctions are not typically seen in TN, but rather in painful trigeminal neuropathy [2]. Painful trigeminal neuropathy is recognized as a separate condition, involving damage to the trigeminal nerve causing the loss of sensations. In TN proper, the nerve damage is less pronounced and causes increased function, rather than its loss [24]. We can assume that in those cases the sensory disturbances and the sensation of numbness could have been caused by the greater degree of damage done to the trigeminal nerve. It is also worth noting that those clinical features tended to be the most persistent ones, outlasting the pain.
Conclusions
The temporal relationship, history and exclusion of other causes suggest that TN can occur in patients after the third dose of the Pfizer BioNTech COVID-19 vaccine. Therapy with steroids, oxcarbazepine and pregabalin may reduce the frequency and intensity of pain attacks of TN of such origin. Recurrences and exacerbations of pain are possible during treatment, as seen in our case. It cannot be ruled out that the association between the vaccine and TN was a coincidence and not a causal relationship, so further observation of subjects vaccinated against COVID-19 and investigation of the causes of TN must continue. The authors declare no conflict of interest.
|
2023-03-04T16:05:07.102Z
|
2023-02-27T00:00:00.000
|
{
"year": 2023,
"sha1": "f1a5efc30bcc72dfc112994767ad72cce7352b26",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-10/pdf-50183-10?filename=Trigeminal%20neuralgia.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f116bf8af16c00e9724b314657b02d1600e2ca1f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
263843996
|
pes2o/s2orc
|
v3-fos-license
|
A first-in-class dimethyl 2-acetamido terephthalate inhibitor targeting Conyza canadensis SHMT1 with a novel herbicidal mode-of-action
Introduction
Weeds are among the most harmful pests of crops, causing significant losses in yield. Herbicide application is still the most effective and direct weed management strategy. A commercial herbicide with a novel mode of action has not been discovered for decades, resulting in the rise of herbicide-resistant weeds [1,2]. Herbicides acting on novel targets have no side effects on non-target organisms and also reduce environmental pollutants [3]. Nevertheless, several target enzymes involved in amino acid biosynthesis have been validated as promising leads for herbicidal compounds with a novel mode of action. For example, dihydroxy acid dehydratase is the third enzyme in the branched-chain amino acid biosynthetic pathway and is bound by the natural product aspterric acid [4]. Dihydrodipicolinate synthase, which is the first and limiting enzyme in lysine biosynthesis, was specifically targeted in weeds by the compounds (Z)-2-(5-(2-methoxybenzylidene)-2,4-dioxothiazolidin-3-yl)acetic acid and (Z)-2-(5-(4-methoxybenzylidene)-2,4-dioxothiazolidin-3-yl)acetic acid with herbicidal activity [5]. However, the other critical enzymes involved in amino acid biosynthesis remain largely unexplored for herbicidal targeting.
SHMT is a typical α-class PLP enzyme that catalyzes the conversion of glycine into serine in the photorespiratory cycle and is essential in one-carbon metabolism [6]. Crystal structures of cytosolic SHMT in the PDB confirm their high similarity in tertiary and dimeric subunit structure, which folds into N-terminal and C-terminal domains [7][8][9]. Distinctions in the folate cofactor binding site of SHMT and the orientation of the amino-terminal arm are nevertheless ubiquitous [10]. Indeed, SHMT has been considered a potential drug target in previous reports [11]. (+)-SHIN-1 (6-amino-4-(5-(hydroxymethyl)-[1,1′-biphenyl]-3-yl)-4-isopropyl-3-methyl-1,4-dihydropyrano[2,3-c]pyrazole-5-carbonitrile) binds tightly to the loop structure of Enterococcus faecium SHMT with a strong inhibitory effect (EC50 = 10⁻¹¹ M), indicating the potential of SHMT as an antibacterial target [12]. The antidepressant sertraline inhibited SHMT, suppressing serine/glycine synthesis and mitochondrial metabolism, which is a new treatment strategy for related cancers [13]. Similarly, a patent presented a compound that inhibits the biological activity of SHMT and could be regarded as an herbicidally active inhibitor, but the nature of the compound was not disclosed [14]. In our previous study, we found that caprylic acid (CAP) is a non-selective and efficient herbicide candidate [15]. Further, we verified that CcSHMT1 is a competitive binding target of CAP (data unpublished). In brief, CcSHMT1 is a promising target for structure-based herbicide discovery.
Here, we first solved the crystal structure of CcSHMT1 and screened for compounds with high receptor energy minimization by structure-based virtual screening of the HTS compounds library [16,17]. Based on the structures of the virtually screened compounds, novel CcSHMT1 inhibitors were designed, synthesized, and subjected to bioassays. Field and safety experiments were carried out to explore potential practical applications. To verify CcSHMT1 as the specific target, an in-depth druggability evaluation of the candidate inhibitor was also presented. Overall, these findings provide a first-in-class novel CcSHMT1 inhibitor for weed management. These novel compounds may be a promising starting point for developing CcSHMT1 inhibitors.
CcSHMT1 has a typical PLP-dependent enzyme 3D structure
To use CcSHMT1 as a target protein for high-efficiency and selective herbicide discovery, we first resolved the structure of CcSHMT1 with X-ray crystallography at 2.8 Å resolution (Fig. 1). Fig. 1A shows the details of the relevant refinement statistics. The resolution range is 2.86-43.09 Å, and the completeness (98%) is over 95%. These data show that the crystallographic model is stable and reliable. The final model of CcSHMT1, containing 513 residues, is very similar to that of other SHMT structures (e.g., AtSHMT, HcSHMT, and RcSHMT) [7][8][9]. In each asymmetric unit, CcSHMT1 formed a tetramer with four identical subunits, three of which contained a PLP molecule (Fig. 1B, 1C). In brief, the monomer can be divided into three domains: the N-terminus, the large domain, and the C-terminus (Fig. 1D). The N-terminal region can also be described as comprising the small and large domains. The small N-terminus (residues 11-53) mediates inter-subunit contacts and folds into two α-helices and one β-strand. The large domain (residues 53-321) folds into an αβα-sandwich containing nine α-helices wrapped around a seven-stranded mixed β-sheet. PLP binds with Ser146, Asp253, His281, and Lys282 through hydrogen bonds in the α-helices and binds with Gly327 through hydrogen bonds in the β-strand (Fig. 1E). The C-terminal small domain (residues 322-480) folds into an αβ-sandwich. The antiparallel β-sheet has -1,-2x,1 topology. This sheet packs on one side against the large domain and is shielded from the solvent by four helices on the other side. The above data show that the 3D structure of CcSHMT1, a typical α-class PLP-dependent enzyme, is highly conserved with known SHMT structures.
Virtual screening of potential CcSHMT1 inhibitors and biological studies
To select potential inhibitors of CcSHMT1 (PDB: 7E13), we searched the HTS compounds library database (2,153 k compounds) through virtual screening. The compounds interacted with key CAP binding pocket residues, namely Ser146, Pro169, Tyr178, and Lys410. The selected compounds were filtered and ranked in the top 5% in the standard-precision (SP) scoring mode and the top 10% in the extra-precision (XP) scoring mode [18,19] (Fig. 1S). Twenty compounds (A1-A20) were selected from the HTS compounds library through four rounds of screening combining the compound's structure and docking energy (Fig. 2A, Table 1S). Of the 20 selected compounds, A1 and A20 had the lowest and highest docking energies of −9.325 kcal/mol and −7.727 kcal/mol (Table 1S). The compounds formed from two (A12 and A15) to nine (A14) hydrogen bonds with CcSHMT1. The compounds formed bonds with the main residues of CcSHMT1, including Arg426, Lys282, His256, His173, and Ser146 (Fig. 2S, 3S). For example, A8 interacted with CcSHMT1 through seven hydrogen bonds (two hydrogen bonds each with Ser146 and Lys282, and one each with His173, His256, and Arg426), a π-π interaction with His173 at 3.8 Å, and hydrophobic interactions with Leu168, Pro169, Ala419, Met420 and Pro422 (Fig. 2B). In addition, compounds A1-A20 inhibited CcSHMT1 activity in vitro. The inhibition rates of compounds A1-A20 against CcSHMT1 activity were lower than 30% at 100 mg/L, except for A4, A5, A9 and A17 (Fig. 2C). The twenty compounds were then subjected to bioassays of herbicidal activity to select the lead molecules.
Synthesis of substituted CcSHMT1 inhibitor derivatives
2-Phenoxyacetic acid is the core active skeleton of commercial chlorophenoxy acid herbicides such as 2,4-D, 2,4-D ester, and 2-methyl-4-chlorophenoxyacetic acid [21]. Compounds 9aa-9bf were designed using a piperazine linkage between dimethyl 2-acetamidoterephthalate and 2-phenoxyacetic acids to obtain potentially high-activity herbicidal compounds. The synthetic route is shown in Fig. 3. Intermediate 2 was synthesized from material 1 and 2-bromoacetyl chloride via amidation in THF solvent with high yield. Nucleophilic substitution, catalyzed by potassium carbonate, occurred by reacting intermediate 2 with tert-butyl piperazine-1-carboxylate acetate in DMF, resulting in 83% yield of intermediate 3. Intermediate 3 was reacted with trifluoroacetic acid in dichloromethane until the completion of the reaction. Then ammonium hydroxide was used to adjust the pH to 9-10 to generate intermediate 4. Subsequently, a reaction between intermediates 5aa-5bf and 2-bromoacetate was catalyzed by potassium carbonate in DMF, resulting in nucleophilic substitution to obtain 6aa-6bf. After intermediates 6aa-6bf were dissolved in ethanol, sodium hydroxide was added. The reaction was refluxed until completion, and diluted hydrochloric acid solution was used to adjust the reaction to pH 1-3 to obtain intermediates 7aa-7bf with a yield of 78-90%. Intermediates 8aa-8bf were obtained via acylating chlorination of 7aa-7bf using oxalyl chloride dissolved in tetrahydrofuran. Intermediates 8aa-8bf were dropped into a tetrahydrofuran solution of intermediate 4 and triethylamine, wherein nucleophilic substitution resulted in targeted compounds 9aa-9bf with yields from 71% to 83%. The structures of all intermediates and targeted compounds were identified using 1H and 13C NMR spectroscopy, high-resolution mass spectrometry, and melting point. The characterization data for intermediates 2, 3, and 7aa-7bf and targeted compounds 9aa-9bf are in the Supplementary Compounds List. In addition, the structure of compound 9aq was confirmed through X-ray diffraction analysis (CCDC: 2172118, Table 3S and Fig. 10S).

Fig. 2. The virtual screening process and biological activity of compounds A1-A20. A: The virtual screen-based discovery process. B: Docking diagram of SHMT (light green) and compound A8 (carmine); oxygen (red), nitrogen (blue), and hydrogen (white) are indicated. C: Inhibition ratio of compounds A1-A20 on CcSHMT1 in vitro at a dosage of 100 mg/L. D-E: Inhibitory effects of compounds A1-A20 on Amaranthus retroflexus (D) and Echinochloa crus-galli (E) at a dosage of 100 mg/L after 7 days. Data are presented as the SE of the mean (n = 3). F: Compounds A1-A11, A13-A14, and A16-A20, grouped using pharmacophore models, could be divided into N-phenylacetamide derivatives, substituted benzyl-thick heterocyclic derivatives, and furan-carboxylic acid/thiophene-carboxamides. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Compounds 9ao and 9ay exhibited the highest weed control efficiency among the CcSHMT1 inhibitors. Crop selectivity of compounds 9ao and 9ay was evaluated for maize, rice, Hibiscus cannabinus, and soybean in the greenhouse using CAP and 2,4-D as positive controls, applied to the leaves and stems with a sprayer at the dosage of 1150 g a.i./ha. The maize and rice plants were not seriously influenced by compounds 9ao and 9ay in the post-emergence application assay at a dosage of 750 g a.i./ha (Fig. 4D and Fig. 14S). However, soybean growth was significantly affected by the same dosage of compounds 9ao and 9ay, similar to the treatment with 2,4-D. H. cannabinus was seriously influenced by compounds 9ao and 9ay. H. cannabinus treated with 2,4-D and CAP had similar results to soybean at a dosage of 750 g a.i./ha (Fig. 14S). In addition, the safety of compound 9ay for honeybees was assessed. The LC50 of compound 9ay was 929.1 µg/g, which indicated that compound 9ay has low toxicity (>200 µg/g) for honeybees (Fig. 15S). Hence, compounds 9ao and 9ay had only slight phytotoxicity, decreasing plant height and fresh weight, toward the monocotyledon crops rice and maize.
Field trials were conducted with maize to further evaluate the herbicidal activity and crop safety of 9ay (Fig. 4E and Fig. 16S). Fourteen days after treatment (DAT), at a concentration of 360 g a.i./ha, the control effect of 9ay on broad-leaf weeds was 94.5%, equal to that of 375 g a.i./ha of 2,4-D. As the control treatment, the herbicidal effect of the same concentration of CAP was less than 80% (Fig. 4F). Meanwhile, compound 9ay had no obvious influence on the growth of maize, while 2,4-D and CAP had significant side effects on maize. These data revealed that compound 9ay could be developed as a potential herbicide for monocotyledon crop weed management.
The "druggability" of compound 9ay on CcSHMT1
To verify that compound 9ay selectively inhibits CcSHMT1, we checked for ultrastructural changes to the cells of C. canadensis leaves under 9ay treatment. Several studies have reported that SHMT1 genes are located in mitochondria [22][23][24]. Transmission electron micrographs of C. canadensis leaves indicated ultrastructural characteristics of mitochondria (Fig. 5A). The mitochondria were more seriously wrinkled and disintegrated 4 h after treatment with 9ay than with CAP, while 2,4-D had no significant effect on mitochondria (Fig. 14S).
To further elucidate whether compound 9ay inhibits CcSHMT1, we first expressed and purified CcSHMT1 from E. coli and performed in vitro enzyme assays. The results showed that CcSHMT1 activity decreased as 9ay increased from 0.1 to 20 mM (Fig. 5B). Similar results were observed when CAP increased from 0.63 to 94.8 µM (Fig. 5C). However, CcSHMT1 activity was not significantly influenced by treatment with 0.41 to 54.3 mM 2,4-D (Fig. 18S).
Furthermore, we constructed a pCambia2301-KY-CcSHMT1 recombinant plasmid and then transformed the plasmid into Nipponbare plants by the Agrobacterium tumefaciens method [25]. RT-qPCR analysis showed that the mRNA transcription level in CcSHMT1-overexpression lines was eight-fold higher than in the wild-type (WT) plants (Fig. 19S). After treatment with different concentrations of 9ay and CAP, CcSHMT1-overexpressing and WT plant seedlings showed different phenotypes. Upon treatment with 9ay and CAP at 1150 g a.i./ha, overexpressing lines did not show visible injury, whereas the WT plants were severely injured and did not survive beyond 7 DAT (Fig. 5D). However, at the 1150 g a.i./ha dosage of 2,4-D, the overexpressing and WT plants were severely injured to the same degree (Fig. 20S). The SHMT activity in CcSHMT1-overexpressing lines decreased only slightly compared to the WT plants after 9ay and CAP treatments (Fig. 5E). Furthermore, SHMT plays a role in scavenging H2O2 to enhance abiotic stress tolerance [24,26]. The H2O2 content in CcSHMT1-overexpressing lines also increased only slightly compared to the WT plants after 9ay and CAP treatments (Fig. 5F). The mitochondrial ultrastructure data, the in vivo and in vitro enzyme activities, and the genetic experiments verified that the herbicidal activity of compound 9ay was due to inhibition of CcSHMT1.
Discussion
SHMT is a promising herbicidal target for weed management, but high-efficiency inhibitors have not been identified. This work presented the discovery of first-in-class CcSHMT1 inhibitors through a target structure-based approach, followed by structure optimization and synthesis of related compounds (Fig. 6). The structure-based virtual screening and bioassays showed that dimethyl 2-acetamido terephthalate is essential to A1-A20. The skeleton was used as a basis to design and synthesize the 9aa-9bf dimethyl-((phenoxyacetyl) piperazin-1-yl) acetamido) terephthalate derivatives. Of the newly optimized compounds, 9ay showed the highest inhibition activity and was safe for maize. CcSHMT1 inhibition was confirmed by observation of ultrastructural changes to mitochondria, slightly (but insignificantly) increased H2O2 content, and inhibition of enzyme activity in vivo and in vitro. Furthermore, in vivo data showed that CcSHMT1 overexpression plants were more resistant to 9ay than WT plants. This study provides a class of chemical candidates with a novel mode of action for weed management.
It is well-known that effective targeting is key for agrochemicals [27,28]. Although the structure of SHMT1 is highly conserved, there are still structural differences between plant and animal species, which could lead to different efficacy of the inhibitors in different species. In E. coli SHMT, the corresponding residues His203, HisA228, and LysA229 form hydrogen bonds with PLP in the active site [10]. However, soybean Lys244 forms a covalent Schiff-base linkage (internal aldimine) with PLP and is located deep inside the obligate dimer [29]. We used the few differences in the SHMT structures to design novel targeted inhibitors. Compounds 9ao and 9ay exhibited better herbicidal activities against the tested dicot weeds than the monocot weeds at 375 g a.i./ha (Fig. 4). Similarly, a novel HPPD inhibitor was designed and synthesized based on the finding that HPPA binds the residues Ser267, Asn282, Gln307 and Gln293 through hydrogen bonds. This novel inhibitor was the first selective herbicide for weed management in sorghum fields [30]. Hence, CcSHMT1 is a promising target for herbicide discovery.
The excellent crop selectivity of herbicides is receiving considerable attention in agrochemical research [31]. Our study indicated that the monocotyledon crops corn and sorghum treated with compounds 9ao and 9ay showed only slight phytotoxicity at 750 g a.i./ha. The differences in crop safety could be attributed to the differences in the 3D structures of plant targets. A previous report revealed that the crystal-structural differences between AtGPRAT2 and its bacterial ortholog allowed the inhibitor DAS734 to specifically and directly bind with residue R264 in AtGPRAT2 [32]. Based on a strategy guided by the crystal structure of AtHPPD, 6-(2-hydroxy-6-oxocyclohex-1-ene-1-carbonyl)-1,5-dimethyl-3-(3-(trimethylsilyl)prop-2-yn-1-yl)quinazoline-2,4(1H,3H)-dione had good crop safety in peanut owing to its specific inhibitory activity against AtHPPD [33]. A novel SHMT inhibitor could be developed based on the differences in plant SHMT structures, which could alter the herbicidal selectivity of the non-selective lead candidate (e.g., CAP).
Employing multiple protein targets for herbicidal compounds reduces herbicide resistance risk [34]. Previous reports showed that, due to the absolute conservation of the protein-binding pockets in the DHPS and HPPK active sites, guanine-based inhibitors with low solubility were less likely to develop resistance within the expected range of commercial herbicides [35]. Our work demonstrated that CcSHMT1 is a typical PLP enzyme. There are many SHMT isoforms and PLP-dependent enzymes in plant species. For example, Arabidopsis thaliana has seven isoforms [36]. In addition, PLP enzymes are highly conserved in plants [37]. The key active site residues, Asp232 in serine synthase and Glu333 in cystathionine gamma-lyase, are well-known targeting residues for designing novel chemical inhibitors targeting multiple enzymes [38,39]. A study attempted to develop an herbicide against five targets, including ACCase, ALS, HPPD, PDS, and PROTOX. Although only 4% of the 394 tested compounds fell within the applicable domains, the results underlined the possibility of herbicides targeting multiple enzymes [40]. Therefore, further development of SHMT inhibitors should consider other PLP enzyme sites, which may result in high efficiency and low resistance of novel herbicides. Our current work has opened a door for herbicidal targeting of other SHMT isoforms or even all PLP-containing enzymes.
Conclusion
The 3D structure of CcSHMT1 has been resolved, and a novel class of small-molecule SHMT1 inhibitors was identified and synthesized (Fig. 6). The pre- and post-emergence herbicidal activity bioassays revealed that compound 9ay was the SHMT inhibitor with the highest activity and was the safest for maize. In addition, compound 9ay was verified to target CcSHMT1. Future efforts in resolving the structures of other SHMTs and identifying specifically binding compounds would likely contribute to developing novel herbicides to control a broad spectrum of weeds. The structure-based development of selective SHMT inhibitors with a new mode of action will be a powerful tool for noxious weed management.
Rice (Oryza sativa) plants were grown in a chamber with a 16 h photoperiod (100-120 µmol m⁻² s⁻¹), 28/22 °C (day/night temperatures) and 65% relative humidity. Plants at the 3- to 5-leaf stage were used for experiments. The primers and methods used for mutant genotyping are shown in Supplemental Table 6S.
The bees (Apis mellifera) were reared at Suzhou, Jiangsu Province, China (E120°62′, N31°32′), placed in a standard plastic cage, and held in a dark incubator at 32 °C, 40% relative humidity, for 96 h. They were fed pollen paste [50% (wt/vol) honey/50% (wt/vol) pollen] and sugar syrup [50% sucrose (wt/vol) in water], which was replaced daily. Honeybees carrying pollen were used in the experiment to ensure that the bees were the same age.
Structure determination of CcSHMT1
The SHMT1 CDS was cloned directly from C. canadensis mRNA by RT-qPCR using primer-C (Supplemental Table 6S), which was designed based on high-homology sequence data in GenBank. To obtain the full-length CDS of CcSHMT1, 5′ rapid amplification of cDNA ends (RACE) was performed using primer-R5 (Supplemental Table 6S) with the 5′ Full RACE Core kit (TAKARA, Japan). The PCR-amplified fragments obtained using primer-EX (Supplemental Table 6S) were purified with a kit (Sangon Biotechnology Co., Ltd., China) and ligated to the pET28a vector using ExnaseII (ClonExpress II One Step Cloning Kit, Vazyme Biotechnology Co., Ltd.). After sequencing, the pET28a-6×His-CcSHMT1 recombinant plasmid was transformed into E. coli BL21. Protein expression was induced with 0.1 mM IPTG at 15 °C, and cells were harvested by centrifugation (7500 g for 10 min at 4 °C) 12 h after induction. Harvested cells were resuspended and disrupted, and the supernatant was loaded onto an affinity column (12 mL) and washed with washing buffer (20 mM Tris-HCl pH 8.0, 500 mM NaCl, 10 mM imidazole) to remove non-specifically bound proteins, followed by elution buffer (20 mM Tris-HCl pH 8.0, 500 mM NaCl, 150 mM imidazole) to elute the His-tagged protein. The elution fractions were concentrated using centrifugal concentrators with a 10 kDa MW cut-off (Millipore) before size-exclusion chromatography (SEC). Purified CcSHMT1 was concentrated to 15 mg/mL.
Crystallization screening was performed using sets of commercially available kits (S3-S11, Emerald Biosystems or Hampton Research) using sitting-drop vapor diffusion at 16 °C. Initially, the obtained crystals were optimized. Diffraction-quality crystals were obtained from the condition containing 0.2 M potassium citrate tribasic monohydrate and 19% (w/v) PEG 3350. The crystal was flash-cooled in liquid nitrogen after cryoprotection in 20% glycerin. Crystal structures were solved using molecular replacement with the Phenix software suite using A. thaliana SHMT2 as the search probe (PDB ID: 6SMW). COOT was used for manual fitting in the electron density maps between rounds of model refinement in Phenix.refine. The refinement statistics are listed in Fig. 1A. After thorough validation with MolProbity, the structures were deposited in the RCSB PDB (PDB ID: 7E13). CcSHMT1 protein expression, purification, and the solved crystal structure are described in the Supplementary material.
Structure-based virtual screening

The purity and quality of the 2.16 million compounds, which had diverse structures and unique properties, were validated by NMR and HPLC in the HTS compounds library (Specs group, Netherlands). The output structures were initially docked in Glide's high-throughput virtual screening (HTVS) mode. The top 5% ranked compounds were chosen and redocked in Glide standard-precision (SP) scoring mode. The compounds that ranked in the top 5% for the standard-precision (SP) scoring mode and the top 10% for the extra-precision (XP) scoring mode were selected as the standard [6]. Ultimately, twenty compounds (A1-A20) were prioritized, based on docking scores and chemical diversity, to perform biological assays.
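For illustration only, the sketch below shows the kind of tiered percentile filtering described above (HTVS, then SP, then XP), assuming the Glide docking scores have already been computed and exported; more negative scores indicate tighter predicted binding. The file names and column names are hypothetical and not part of the published workflow.

```python
# Hypothetical sketch of the tiered score filtering: top 5% HTVS -> top 5% SP -> top 10% XP.
import pandas as pd

def top_fraction(df, score_col, frac):
    """Keep the best-scoring fraction of compounds (lowest, i.e. most negative, Glide score)."""
    cutoff = df[score_col].quantile(frac)
    return df[df[score_col] <= cutoff]

htvs = pd.read_csv("htvs_scores.csv")   # columns: compound_id, htvs_score (hypothetical file)
sp   = pd.read_csv("sp_scores.csv")     # columns: compound_id, sp_score
xp   = pd.read_csv("xp_scores.csv")     # columns: compound_id, xp_score

stage1 = top_fraction(htvs, "htvs_score", 0.05)                        # top 5% from HTVS docking
stage2 = top_fraction(sp[sp["compound_id"].isin(stage1["compound_id"])],
                      "sp_score", 0.05)                                 # top 5% after SP redocking
stage3 = top_fraction(xp[xp["compound_id"].isin(stage2["compound_id"])],
                      "xp_score", 0.10)                                 # top 10% after XP scoring
print(len(stage3), "compounds survive all three filters")
```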
Herbicidal activity studies of virtually screened compounds
The virtually screened compounds (A1-A20) were purchased from the Specs group (Netherlands). Compounds A1-A20 were tested against the broadleaf weeds A. retroflexus and L. sativa and the monocotyledon weeds E. crus-galli and L. perenne. The tested seeds were soaked separately in distilled water overnight. Twenty seeds were placed on filter paper in a Petri dish containing 9 mL of 100 mg/L test compound solution for the pre-emergence bioassay. After incubation for 5 days, the root and stem of the test plants were measured. Percent inhibition was calculated from the root or stem length of the treated groups compared to the untreated control. For the post-emergence bioassay, the tested weeds at the 3- to 5-leaf stage were sprayed with solutions of test compounds (100 mg/L) using a MATABI STYLE 1.5 portable sprayer (GOIZPER, S. COOP.; Antzuola, Spain) at 300 kPa. Treated plants were returned to the greenhouse and watered as needed. Fresh weights were measured at 7 and 14 DAT. The percentage of herbicidal activity was calculated by comparing the fresh weight of the growth-inhibited plants to that of the untreated plants. Completely inhibited growth was set to 100%, and the untreated plants were set to 0%. Each independent measurement comprised three replicates.
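A minimal sketch of the growth-inhibition calculation quoted above, in which fully inhibited growth maps to 100% and untreated growth to 0%; the example values are hypothetical, not measured data.

```python
def percent_inhibition(treated, control):
    """Percent inhibition of root/stem length (pre-emergence) or fresh weight (post-emergence)
    relative to the untreated control: 0% = untreated growth, 100% = complete inhibition."""
    return 100.0 * (control - treated) / control

# Hypothetical example: treated root length 1.2 cm vs. untreated control 4.8 cm -> 75% inhibition.
print(percent_inhibition(1.2, 4.8))
```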
SHMT enzyme activity assay in vitro of virtually screened compounds
CcSHMT1 was expressed and purified as described above. A 20 µL aliquot of CcSHMT1 protein in PBS buffer at various concentrations (0.075-0.117 mg/mL) was used in the assays. The virtually screened compounds (100 mg/L) were added to the reaction. Each reaction was composed of 100 mM PBS, pH 7.7, 50 µM PLP, 10 µM DL-threo-3-phenylserine, 1% imidazole (v/v), and 500 µL supernatant extract in a total volume of 1 mL. The reaction was carried out at 37 °C for 30 min and then centrifuged at 15,000 rpm for 10 min. One hundred µL of the supernatant was added to 100 µL of 4-amino-3-hydrazino-5-mercapto-1,2,4-triazole (40 mM). The reaction was carried out at 25 °C for 30 min. The absorbance of benzaldehyde was measured at 540 nm, and the amount of benzaldehyde produced was calculated with a standard curve (Fig. 21S). One unit of SHMT enzyme activity was defined as the production of 1 nmol of benzaldehyde per min. Each independent measurement comprised three replicates.
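The sketch below shows one way the enzyme activity could be derived from the A540 readings via a benzaldehyde standard curve, with one unit defined as 1 nmol benzaldehyde per minute over the 30 min reaction, as stated above. The slope and intercept are placeholders, not the fitted values of the paper's standard curve (Fig. 21S).

```python
def shmt_activity(a540, slope, intercept, reaction_min=30.0):
    """Convert absorbance at 540 nm into SHMT activity (units = nmol benzaldehyde per min)
    by inverting a linear standard curve A540 = slope * nmol_benzaldehyde + intercept."""
    benzaldehyde_nmol = (a540 - intercept) / slope
    return benzaldehyde_nmol / reaction_min

# Hypothetical readings: untreated control vs. a 100 mg/L compound treatment.
control = shmt_activity(0.82, slope=0.010, intercept=0.02)
treated = shmt_activity(0.31, slope=0.010, intercept=0.02)
print(f"inhibition ratio: {100.0 * (1 - treated / control):.1f}%")
```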
Analytical procedures for target synthesized compounds
Reagents and materials were purchased from commercial sources without further treatment. 1H and 13C nuclear magnetic resonance (NMR) spectra were recorded using a Bruker Avance-400 spectrometer (Bruker BioSpin AG, Fällanden, Switzerland). Deuterated chloroform (CDCl3) and deuterated dimethyl sulfoxide (DMSO-d6) were used as NMR solvents, and tetramethylsilane (TMS) was used as an internal standard. Melting points (mp) were recorded on a Hanon MP100 melting point apparatus (Hanon Instruments, Jinan, China). High-resolution mass spectral analysis was performed using a Varian 7.0 T FTICR-MS instrument (Varian IonSpec, Lake Forest, CA, United States). Single-crystal X-ray data were obtained using a Bruker Smart Apex II X-ray single-crystal diffractometer (Bruker AXS, Karlsruhe, Germany) with graphite-monochromated Mo Kα radiation (λ = 0.71073 Å); an A560 UV-VIS spectrophotometer (Aoyi Instruments Co., Ltd., Shanghai, China) was also used.
Synthesized compounds' herbicidal activity bioassay in greenhouse
The synthesized compounds (9aa-9bf) were dissolved in dimethylformamide and diluted with deionized water containing 1 g/L Tween-80 to achieve the required concentrations. The herbicidal activities of the synthesized compounds were measured with two-round pre-emergence and post-emergence bioassays. The tested weeds were treated with 100 mg/L of the synthesized compounds in the first-round pre-emergence and post-emergence bioassay. The compounds with higher control efficiency were chosen for the second-round bioassay, and a series of dosages of the compounds were set at 0.43, 1.29, 1.72, 2.15, 2.58, 3.44, 4.3, and 5.16 mM, according to the details of the bioassay described above. Each independent measurement comprised three replicates.
SHMT enzyme activity inhibitory experiments of synthesized compounds
CcSHMT1 was expressed and purified as described above. A 20 µL aliquot of CcSHMT1 protein in PBS buffer at various concentrations (0.075-0.117 mg/mL) was used in the assays. In the first-round enzyme activity assay, the synthesized compounds were tested at 100 mg/L. To the reaction, diluted CAP at eight concentrations (0, 0.63, 6.32, 15.80, 31.60, 47.40, 63.20, and 94.80 mM), 2,4-D at eight concentrations (0.41, 1.36, 2.71, 4.07, 13.57, 27.15 and 54.3 mM) or 9ay at eight concentrations (0, 0.1, 1, 1.5, 5, 10, 15 and 20 mM) were added according to the details of the enzyme activity assay described above. Each independent measurement comprised three replicates.
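The assay above records residual enzyme activity across a concentration series; one common way to summarize such dose-response data (not necessarily the authors' exact procedure) is a four-parameter logistic fit, sketched here with synthetic activity values rather than measured ones.

import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Percent enzyme activity remaining as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Synthetic dose-response data (mM vs. % activity); placeholders, not the study's measurements.
conc = np.array([0.1, 1, 1.5, 5, 10, 15, 20])
activity = np.array([98, 85, 78, 45, 25, 15, 10])

popt, _ = curve_fit(four_param_logistic, conc, activity, p0=[0, 100, 5, 1], maxfev=10000)
print(f"Estimated IC50 ~ {popt[2]:.2f} mM")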
Crop selectivity experiment of synthesized compounds
The safety experiment for rice, Hibiscus cannabinus, and maize was carried out at 750 g a.i./ha of 9ao and 9ay, with CAP and 2,4-D as controls, at the 2- to 3-leaf stage. Fresh weight and plant height of aerial parts were measured 14 DAT. Each independent measurement comprised three replicates.
The safety experiment for honeybees was conducted to determine the maximum concentration (about 100% mortality of bees) and the minimum concentration (at which the mortality rate was not significantly different from that of the untreated bee group). Compound 9ay was diluted with acetone to make a stock solution and then diluted with 50% sucrose solution (w:w) to obtain five different dosages (0, 200, 400, 800, 1600, 3200 µL/g). The acetone contents in all treatments were adjusted to the same concentration in the sugar solution (500 µL/100 g) as the control. After treatment, the survival rate of honeybees was counted according to a previous study [41]. Each independent measurement comprised three replicates.
Field bioassay of synthesized compounds
A field experiment was conducted at the Agricultural school garden, Furong District, Changsha, Hunan Province, China (loam soil, 1.6% organic matter and pH 6.8). The field was rotavated and leveled under submerged conditions, and maize was sown at 45 kg/ha. The experiment comprised fifteen treatments, including compound 9ay separately at 180 and 360 g a.i./ha, CAP (57%, EC) at 375 g a.i./ha, 2,4-D (2.2%, EC) at 375 g a.i./ha and a water control. Ridges separated each individual plot (1 m × 1 m) to prevent water channeling, with three replicates in a randomized block arrangement. The weeds (three- to four-leaf stage) consisted mainly of Lactuca sativa. The experiment was carried out from 5 July 2022 to 17 August 2022. Herbicides were applied using a MATABI SUPER GREEN-16 backpack manual sprayer.
Transmission electron microscopy analysis of Conyza canadensis leaves in response to CAP or 9ay
C. canadensis plants were treated with 375 g a.i./ha of CAP, 9ay or 2,4-D. Leaves were harvested after 4 h. For transmission electron microscopy (TEM) analysis, fresh leaves were fixed for 2 h in 0.05 M cacodylate buffer (pH 7.2) containing 2.5% glutaraldehyde and 4% paraformaldehyde. Samples were post-fixed with 1.0% OsO4 in the same buffer for 1 h, then dehydrated in acetone and embedded in epoxy resin. Ultrathin sections (70 nm) were cut with a Reichert Ultracut S ultramicrotome (Leica, Germany). The ultrathin sections were stained with uranyl acetate, followed by lead citrate, and photographed with a TEM 900 ZEISS microscope (Zeiss, Oberkochen, Germany).
Overexpression of CcSHMT1 in Nipponbare and resistance bioassay
The coding sequence of CcSHMT1 was amplified using primer-C with BamHI and XbaI cloning sites. The amplified PCR product was digested and cloned into a pCambia2301 vector. The pCambia2301-KY-SHMT1 plasmid was then transformed into the Agrobacterium tumefaciens EHA105 strain, and Nipponbare (Oryza sativa L. ssp. japonica) plants were transformed. Homozygous T2 lines were selected for further analysis. SHMT1 gene expression was analyzed in the transgenic and WT plants using RT-qPCR. Each independent measurement comprised at least three replicates. Homozygous T2 plants were sprayed on the leaves and stems with diluted CAP (316, 505.6, 632, 1264, 1896 µM) or 9ay at five concentrations (750, 900, 1150, 1300, 1750 g a.i./ha) to test CAP or 9ay sensitivity during plant growth. The bioactivity assay comprised three biological replicates, each with three technical replicates.
Statistical analysis
Data from one representative experiment each are shown. Most data are presented as mean values ± standard deviation (s.d.). Statistical analysis was performed using the analysis of variance (ANOVA) test, and differences were considered statistically significant at p-value < 0.05 according to Student's t-test.
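For reference, the kind of comparison described here can be reproduced with SciPy; the treatment and control arrays below are placeholders, not the study's data.

import numpy as np
from scipy import stats

# Hypothetical fresh-weight replicates (g) for an untreated and a treated group.
control = np.array([3.1, 2.9, 3.3])
treated = np.array([1.2, 1.0, 1.4])

f_stat, p_anova = stats.f_oneway(control, treated)   # one-way ANOVA across groups
t_stat, p_ttest = stats.ttest_ind(control, treated)  # Student's t-test, two groups

print(f"ANOVA p = {p_anova:.4f}, t-test p = {p_ttest:.4f}, significant: {p_ttest < 0.05}")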
Compliance with ethics requirements
This work did not involve studies with human or laboratory animal subjects. All experimental materials for this study were collected in China and did not cause any species to become threatened or endangered.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1. The 3D X-ray crystal structure of CcSHMT1. A: The details of relevant refinement statistics; B and C: the overall structure (the obverse, B; the reverse, C) of CcSHMT1 at 2.8 Å resolution. D: the three domains of the CcSHMT1 monomer, the N-terminus, the large domain, and the C-terminus. F: the omit map for the PLP ligand in the overall structure. PLP: pyridoxal-5-phosphate. PLP binds with Ser146, Asp253, His281, and Lys282 through hydrogen bonds in the α-helices and binds with Gly327 through hydrogen bonds in the β-strand.
CRediT authorship contribution statement Dingfeng Luo: Performed experiments, Analyzed the data, Wrote the paper, Commented on the manuscript. Zhendong Bai: Performed experiments, Commented on the manuscript. Haodong Bai: Performed experiments, Commented on the manuscript. Na Liu: Performed experiments, Commented on the manuscript. Jincai Han: Performed experiments, Commented on the manuscript. Changsheng Ma: Performed experiments, Commented on the manuscript. Di Wu: Performed experiments, Commented on the manuscript. Lianyang Bai: Designed the experiments, Analyzed the data, Commented on the manuscript. Zuren Li: Designed the experiments, Performed experiments, Analyzed the data, Wrote the paper, Commented on the manuscript.
|
2023-10-12T15:05:00.535Z
|
2023-10-01T00:00:00.000
|
{
"year": 2023,
"sha1": "41e8ad941b02c9b04557d0df3210e5aba38a85ea",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jare.2023.10.003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8acc04382c5680bc896a8e36194004bc4fca5abf",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
221346484
|
pes2o/s2orc
|
v3-fos-license
|
Binder Jetting Additive Manufacturing of High Porosity 316L Stainless Steel Metal Foams
High porosity (40% to 60%) 316L stainless steel containing well-interconnected open-cell porous structures with pore openness index of 0.87 to 1 were successfully fabricated by binder jetting and subsequent sintering processes coupled with a powder space holder technique. Mono-sized (30 µm) and 30% (by volume) spherically shaped poly(methyl methacrylate) (PMMA) powder was used as the space holder material. The effects of processing conditions such as: (1) binder saturation rates (55%, 100% and 150%), and (2) isothermal sintering temperatures (1000 °C to 1200 °C) on the porosity of 316L stainless steel parts were studied. By varying the processing conditions, porosity of 40% to 45% were achieved. To further increase the porosity values of 316L stainless steel parts, 30 vol. % (or 6 wt. %) of PMMA space holder particles were added to the 3D printing feedstock and porosity values of 57% to 61% were achieved. Mercury porosimetry results indicated pore sizes less than 40 µm for all the binder jetting processed 316L stainless steel parts. Anisotropy in linear shrinkage after the sintering process was observed for the SS316L parts with the largest linear shrinkage in the Z direction. The Young’s modulus and compression properties of 316L stainless steel parts decreased with increasing porosity and low Young’s modulus values in the range of 2 GPa to 29 GPa were able to be achieved. The parts fabricated by using pure 316L stainless steel feedstock sintered at 1200 °C with porosity of ~40% exhibited the maximum overall compressive properties with 0.2% compressive yield strength of 52.7 MPa, ultimate compressive strength of 520 MPa, fracture strain of 36.4%, and energy absorption of 116.7 MJ/m3, respectively. The Young’s modulus and compression properties of the binder jetting processed 316L stainless steel parts were found to be on par with that of the conventionally processed porous 316L stainless steel parts and even surpassed those having similar porosities, and matched to that of the cancellous bone types.
Introduction
316L stainless steel (SS316L), a quotidian austenitic steel, offers a wide range of applications in the marine, energy, aerospace, semiconductor and medical industries due to its high strength and corrosion resistance [1]. High porosity metal parts may exhibit excellent properties such as low density, high strength-to-weight ratio, high gas and liquid permeability, high thermal conductivity and excellent energy absorption properties [2]. Low modulus biomaterials with high porosity and open-cell porous structures are of particular interest for orthopedic implant applications favoring bone in-growth [3]. SS316L is one of the most commonly used biomaterials for orthopedic implants; reported compressive yield stress, elastic modulus and energy absorption properties were in the range of 5.2-10.5 MPa, 2.01-7.03 GPa, and 1.2-3.5 MJ/m³, respectively. The impregnation methods are simple; however, achieving precise control of pore size with interconnected pores and improved mechanical properties is a real challenge [15].
Due to the recent advancements in the field of metal additive manufacturing (AM), several processes such as: (a) Selective Laser Melting (SLM) [16], (b) Electron Beam Melting (EBM) [17,18], (c) Selective Laser Sintering (SLS) [19], (d) Direct Metal Laser Deposition (DMLD) [20] and (e) Ink jet 3D printing and binder jetting have been extensively explored to fabricate high porosity functional metal parts with desired pore characteristics. Among these AM processes, SLM, EBM, SLS and DMLD are energy-based processes that use high energy laser/electron beam to melt or sinter the metal powders layer-by-layer to form the 3D parts. They are more commonly used to fabricate porous metal parts with complex shapes directly from the digital CAD models by using pores-by-design approach. But, in the case of micro-porous open-cell porous structures, designing and fabricating such fine micro-pores throughout the 3D part are still in the very initial stage of research. They are difficult to achieve through pores-by-design approach due to the following reasons: (1) software limitations to design such micro-pore features throughout the 3D part, and (2) during processing, there are high chances for the loose metal powders to get trapped within the micro-pores and subsequent processing at high temperatures makes the powder removal difficult. Achieving open-cell porous structures through pores-by-processing route via the energy-based AM processes is under very initial stage of research and it is challenging to achieve the desired pore size and volume of geometrically undefined pores generated by energy-based AM processing [21].
In the additive manufacturing community, binder jetting is renowned for easy fabrication of porous parts with open-cell porous structures by using pores-by-processing approach. Contrary to the powder bed fusion energy-based technologies, binder jetting operates at ambient environment and requires no support structures. Binder jetting consists of four consecutive process steps: (1) preparation of powder bed with a spread of fine layer of powders, (2) 3D printing of part by successively adding material layer-by-layer and selectively dispensing binders from the print head every layer as per the part's cross section, (3) as-printed parts are cured at low temperatures (typically up to 200 • C for 12 h based on the binders used; usually solvent or aqueous based for metals), (4) finally, the parts are debinded and sintered similar to the MIM process. Binder jetting provides a freeform fabrication solution for creating complex shaped porous metal structures that are difficult to be fabricated using the conventional processes such as MIM without the need for expensive moulds and tools. A brief literature review of the works on the fabrication of porous metal parts with binder jetting is discussed in the Table 1. The literature search results indicate that the porous binder jetting parts are focused mostly for biomedical applications. The binder jetting original equipment manufacturers and researchers have recently found the applications for the binder jetting manufactured porous parts for the fabrication of high efficiency metal filters for air purification and protection in response to the current COVID-19 crisis. ExOne and the University of Pittsburgh reported their research in developing porous copper parts for antimicrobial filtration applications for use in the reusable and serializable respirators [22].
Due to the advantages of binder jetting technology aiding the fabrication of porous parts, in the present work, it was chosen as the additive manufacturing technique to fabricate porous SS316L parts. To further increase the porosity of binder jetting processed parts, an appropriate powder space holder material (PSH) should be added to the feedstock. Accordingly, in the present study, 30 vol. % (or 6 wt. %) PMMA with spherical morphology and size of 30 µm is proposed as the PSH material. The volume fraction of PMMA (30 vol. %) was chosen to be less than the total volume of ink containing binders utilized during the binder jetting process. The powder bed packing density can range between the apparent and tapped density of the powders. Considering ink and binders during binder jetting penetrate and fill the interstitial void spaces between the powder particles, the volume of PMMA in the feedstock was chosen to be less than the total volume of ink and binders used during binder jetting. Or else, during debinding and sintering, the coordination number between the SS316L powder particles will be very low which will affect the binder jet part integrity. The effect of isothermal sintering temperatures (1000 • C to 1200 • C), binder saturation rates, presence of PMMA PSH on the porosity, pore sizes, pore openness index, and mechanical properties of the porous SS316L parts are investigated.
Feedstock
In the current study, 3D printing of porous 316L stainless steel (SS316L) parts was accomplished by using two types of feedstock: (1) pure SS316L, and (2) SS316L + 30 vol. % PMMA. SS316L + 30 vol. % PMMA feedstock was prepared by dry mixing the required quantities of SS316L and poly (methyl methacrylate) or PMMA powders. Gas atomized SS316L powders of size range 20-53 µm with average particle size of 25.9 µm supplied by Högonäs (Bruksgatan, Sweden), was used as the base material and the powder's chemical composition are discussed in Table 2. Mono-sized PMMA powder of diameter 30 ± 0.1 µm supplied by EPRUI Nanoparticles and Microspheres Co. Ltd. (Nanjing Jiangsu, China), was used as the space holder material. SEM analysis of both SS316L and PMMA powders conducted by using a field emission scanning electron microscope (FESEM, Zeiss, Oberkochen, Germany) indicate spherically shaped powders with minimal powder satellites as shown in Figure 1. The particle size of PMMA powders was chosen to be within the powder size range of the base material (SS316L) to minimize the size effects on the powder segregation during recoating process and subsequent layered manufacturing. Table 2. Chemical composition of as received 316L stainless steel powders and sintered SS316L parts 3D printed by using two types of feedstock processed at two binder saturation rates.
Feedstock Characteristics
Feedstock density and flow properties majorly influence the powder-recoating process during additive manufacturing. Hall flow rate can be used to evaluate powder's flowability and is measured from the time taken by allowing 50 g of powders to pass through a flow funnel consisting of an orifice of size 25.4 mm. Hall flow rate measurements were performed as per ASTM B213-17 [32].
Apparent density measurements were conducted as per ASTMB212-17 [33]. Initially, the feedstock powders were let to freely flow through a flow funnel (without any force) and were filled into a nominal density cup with a standard volume of 25 cm 3 . Later, the excess powders were levelled for the powders to precisely fill the density cup volume. Finally, the apparent density of powders was computed from the mass of the powders within the nominal density cup divided by the volume of cup (25 cm 3 ). Tapped density measurements followed ASTMB527-15 [34]. For the tapping experiments, initially the accurately weighed powders were filled within a graduated funnel of known volume. Later, the graduated funnel containing the powder samples was mechanically tapped up to 3000 tap counts at a constant tap frequency of 300 taps/min. Finally, the tapped density of the powders was computed from the mass of the powders within the graduated funnel divided by the final tap volume. True density measurements followed ASTM B923-16 [35] by using an AccuPyc II 1340 helium gas displacement pycnometry system (Micromeritics, Norcross, GA, USA). Powder morphology or powder shape of the feedstock was investigated with a Zeiss field emission scanning electron microscope.
Three-Dimensional Printing of High Porosity 316L Stainless Steel Via Binder Jetting
An Innovent type binder jetting 3D printer (ExOne, North Huntingdon, PA, USA) with a proprietary aqueous based binder from ExOne (ExOne, North Huntingdon, PA, USA) was utilized for the fabrication of porous SS316L parts. The 3D part details are shown in Figure 2: (1) cubes of dimensions 10 × 10 × 10 mm³ with X, Y and Z letter markings on their faces that follow the 3D printing and sintering directions, and 1, 2, and 3 number markings on the surfaces that represent the three different binder saturation rates of 55%, 100% and 150% employed during 3D printing, respectively, and (2) cylinders of dimensions 12.5 mm diameter and 80 mm length. The feedstock powders such as pure 316L stainless steel and 316L + 30 vol. % PMMA were successively added to the powder bed with the layer thickness set to 100 µm throughout the experiments. A print head dispensed aqueous-based binder layer wise depending on the input cross-section of the parts received from the STL file by using three different binder saturation rates of 55%, 100% and 150%, respectively. During binder jetting, the powder bed consists of conditionally packed stainless-steel powders, void spaces or air, and binders. Binder saturation rate is the ratio of the volume of binders used during 3D printing to successfully fabricate a solid part to the volume of air in the powder bed and is given by Equation (1), where PR is the packing ratio of the powder bed (the tapped density of SS316L powders was used in the present study), X and Y spacing are the binder droplet spacing along the XY plane, and Z is the layer thickness (set to 100 µm). The corresponding values of X and Y droplet spacing for the set binder saturation rate are discussed in Figure 2B. The volume of binders was experimentally found by jetting the binders for the set saturation rate on a sponge and measuring its weight. From the experimentally measured binder weight and available binder density values, the volume of binders for a set saturation rate was computed. Later, the computed binder volume was used to verify the set saturation rate by using Equation (1). The as-built parts were cured at 200 °C for 12 h followed by thermal debinding at a peak debinding temperature of 800 °C (for 2 h) and sintering at three different conditions of 1000 °C, 1100 °C and 1200 °C (for 2 h each) under high vacuum (≤1 mTorr) and at partial pressure of Ar using a Solar Manufacturing high vacuum furnace (Pennsylvania, USA). A constant heating/cooling rate of 5 °C/min was employed during the sintering cycles. The sintered parts were then used for characterization studies.
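Equation (1) itself did not survive extraction; based on the definition given (binder volume divided by the air volume in a voxel of the powder bed), a plausible reconstruction and calculation is sketched below, with an assumed per-voxel droplet volume.

def binder_saturation(v_binder_pl, packing_ratio, x_um, y_um, z_um):
    """Binder saturation (%) = V_binder / V_air per voxel, following the stated definition.

    v_binder_pl: binder volume dispensed per voxel, in picolitres (assumed value).
    packing_ratio: powder-bed packing ratio (tapped density basis in this study).
    x_um, y_um: binder droplet spacing in X and Y; z_um: layer thickness, all in micrometres.
    """
    voxel_um3 = x_um * y_um * z_um                  # voxel volume, um^3
    air_um3 = (1.0 - packing_ratio) * voxel_um3     # interstitial air volume, um^3
    v_binder_um3 = v_binder_pl * 1.0e3              # 1 pL = 1e3 um^3
    return 100.0 * v_binder_um3 / air_um3

# Example: ~43 um droplet spacing, 100 um layers, 62% tapped packing, 70 pL per voxel (assumed).
print(round(binder_saturation(70, 0.62, 43, 43, 100), 1))  # ~99.6 %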
Part Characterization
Density/porosity values of the sintered SS316L parts were determined by the immersion method following the Archimedes principle with de-ionised water as the immersion medium. The total porosity (P), open porosity (Po) and pore openness index (POI) were calculated according to Equations (2)-(4), respectively [25], where ρ_th is the theoretical density of SS316L (8 g/cm³), ρ_exp is the experimental density of the 3D printed SS316L part, m1 is the dry weight of the part, m2 is the weight of the part fully infiltrated with de-ionised water, and m3 is the weight of the part in de-ionised water. The dimensional accuracy of the green parts after 3D printing and the shrinkages in the lateral (diameter) and longitudinal (length) directions after sintering were evaluated by using a Vernier caliper. The lateral (diameter) shrinkage was calculated according to Equation (5), where D0 and D denote the diameters of SS316L parts before and after sintering. The longitudinal shrinkage values were calculated similarly considering the shrinkage of the length before and after sintering. The porosity and pore size of the SS316L parts were further investigated by using AutoPore V mercury intrusion porosimetry (Micromeritics). Initially, the parts were oven dried at 105 °C (12 h). During the test, mercury invades the pores of the parts with the applied pressure and the corresponding pore information such as pore sizes and porosity is obtained. Based on the cylindrical capillary model, by assuming the pores to be cylindrical, the Washburn equation [36] was used to calculate the pore radius as shown in Equation (6), where ∆P denotes the pressure (dynes/cm²), γ denotes the surface tension of mercury (485 dynes/cm), θ is the wetting contact angle of mercury (130°) and R is the capillary radius (cm) at the given pressure. The fabricated SS316L parts were metallographically polished and were characterized for microstructural investigations with an optical microscope (Olympus, Tokyo, Japan). The ImageJ 1.52n software (NIH, MD, USA) was used to identify the pore fraction (2D porosity information) of SS316L parts (P, in %) using image analysis, and the pores in the micrographs were also identified [37]. The chemical composition of the feedstock SS316L powders and as-sintered SS316L parts fabricated by using two types of feedstock (Table 2) was analyzed by Optima 4300 DV (PerkinElmer, Waltham, MA, USA) inductively coupled plasma optical emission spectroscopy, combustion-infrared absorbance (Eltra CS800 Carbon/Sulfur Analyzer, Dusseldorf, Germany), inert gas fusion-infrared absorbance and inert gas fusion-thermal conductivity (Eltra ONH 2000 Oxygen/Nitrogen/Hydrogen analyzers) as per CSP-017 Rev. E (ICP-OES), and ASTM E 1019-18. The chemical analysis tests were repeated three times per feedstock type to ensure consistency. The dynamic Young's modulus of the porous SS316L parts was evaluated at room temperature by using the impulse excitation technique with a resonant frequency damping analyzer (ICME, Genk, Belgium) as per ASTM E1876-15 [38]. Parts of 12 mm diameter and 80 mm length (l/d > 6) were used for the characterization. Compression properties of the parts were tested by using a 5982 Universal Testing System (Instron, Norwood, MA, USA) at a strain rate of 7 × 10−4 s−1 (crosshead speed of 0.5 mm/min) according to ASTM E9-19 [39]. Parts with 12 mm diameter and 12 mm length (l/d = 1) were used for the compression test.
Porosity measurements and compression experiments were repeated at least 5 times to ensure result consistency.
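Equations (2)-(6) were lost in extraction; the helper functions below are plausible forms consistent with the variable definitions quoted above (Archimedes porosity, pore openness index, linear shrinkage and the Washburn relation), not a verbatim reproduction of the paper's equations.

import math

RHO_WATER = 1.0  # g/cm^3, immersion medium

def total_porosity(rho_exp, rho_th=8.0):
    """Total porosity P (%) from experimental vs. theoretical density."""
    return (1.0 - rho_exp / rho_th) * 100.0

def open_porosity(m1, m2, m3):
    """Open porosity Po (%) from dry (m1), infiltrated (m2) and submerged (m3) weights."""
    exterior_volume = (m2 - m3) / RHO_WATER
    open_pore_volume = (m2 - m1) / RHO_WATER
    return 100.0 * open_pore_volume / exterior_volume

def pore_openness_index(p_open, p_total):
    """POI = fraction of the total porosity that is open."""
    return p_open / p_total

def linear_shrinkage(d0, d):
    """Shrinkage (%) of a dimension from before (d0) to after (d) sintering."""
    return (d0 - d) / d0 * 100.0

def washburn_radius(delta_p, gamma=485.0, theta_deg=130.0):
    """Capillary (pore) radius in cm from the mercury intrusion pressure (dynes/cm^2)."""
    return -2.0 * gamma * math.cos(math.radians(theta_deg)) / delta_p

print(round(total_porosity(4.6), 1))  # e.g. 42.5% porosity for a hypothetical 4.6 g/cm^3 part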
Feedstock Characteristics
The density and flowability characteristics of the feedstock are discussed in Table 3. The results indicated that with the addition of 30 vol. % PMMA, the Hall flow rate and apparent density values of the SS316L powders were affected, and this is attributed to the inherent cohesive nature of the fine (30 µm) PMMA polymeric powders. SS316L + 30 vol. % PMMA exhibited a poor Hall flow rate of 28 s 11 (50 g−1), apparent density of 3.054 g/cm³, and apparent packing factor (p.f.) of 51.3% when compared to that of the pure SS316L feedstock with a Hall flow rate of 18 s 18 (50 g−1), apparent density of 4.601 g/cm³, and apparent p.f. of 58.02%, respectively. The apparent density of powders drops along with the growth of interparticle friction forces, and this is due to the prevailing high resistance of SS316L particles containing PMMA to re-arrange during their apparent flow, leading to poor powder packing and flowability characteristics [40]. Upon tapping, the density of SS316L + 30 vol. % PMMA was found to improve, exhibiting a tapped density of ~3.720 g/cm³ and tapped p.f. of 62.48%, which is slightly greater than that of the pure SS316L feedstock (62.15%), indicating the possible re-arrangement of low density (1.18 g/cm³) and fine PMMA powder particles (30 µm) filling the interstitial powder spaces. Hausner ratio (HR) is the ratio of tapped density to apparent density of powders [40]. The significant decrease in the apparent density of SS316L + 30 vol. % PMMA feedstock led to an increase in the HR value to ~1.2. Powders with HR > 1.5 are classified as non-freely flowing with fluidization problems [41]. Both the pure SS316L and SS316L + 30 vol. % PMMA feedstock are classified as freely flowing based on their HR values (HR < 1.5, Table 3), and the pure SS316L feedstock exhibited an HR value as low as ~1, indicating excellent flowability.
Figure 3 shows the dimensional accuracy results of the as-printed green SS316L parts (10 × 10 × 10 mm³) measured right after the 3D printing process fabricated by using different binder saturation rates and at a constant layer thickness value set to 100 µm. The dimensions of the as-printed green parts were found to be higher than the 3D model dimensions used during printing irrespective of the binder saturation rates. Further, the printing directions influence the dimensional accuracy of the green parts. The dimensions of the parts along the X and Y printing directions are majorly controlled by the binder droplet spacing, and their corresponding values for the set binder saturation rates are discussed in Figure 2. Low binder saturation rates lead to insufficient binders to firmly join or bond the metal powders together, causing pre-mature failure of the as-printed green parts during depowdering and subsequent handling for post-processing steps. In the present study, all the SS316L green parts maintained good structural integrity and did not fail during handling and subsequent sintering steps, indicating sufficiently bound SS316L powder particles even at a low binder saturation rate of 55%.
*X, Y and Z directions denote the 3D printing directions. The standard deviation of average linear dimensional error along the X and Y directions was found to be ±0.15% (equivalent to ~±0.03-0.05 mm), and along the Z direction, it is 0.35% (equivalent to ~±0.07-0.09 mm) for both the feedstock types.
The linear dimensional error along the Z direction was found to be the maximum irrespective of the feedstock type and binder saturation rates. This is predominantly due to the combined effects of: (1) selection of layer thickness value of 100 µm, and (2) different capillary mediated binder infiltration rates along the X, Y, and Z directions of the part due to the heterogeneous porosity within the packed powders arising during powder layering and subsequent printing due to differences in the binder drop spacing, layer thickness and powder size. Further, an increase in the binder saturation rates increases the dimensional error or decreases the dimensional accuracy of the parts along the printing directions. Similar observations w.r.t poor dimensional accuracy with increase in the binder saturation rates and along the Z printing direction of the binder jet parts was previously reported by Xia et al. [42]. Poor dimensional accuracy at higher binder saturation rates is due to the bleeding or unintended spread of binders outside the print area that bond excess or unnecessary powders to the part surfaces or migrate the part surface slightly outwards affecting its dimensional accuracy [42]. Low binder droplet spacing will cause over-saturation and excessive adhesion between the powders [43]. There exists an optimum binder droplet spacing under which the printed lines will be smooth, narrow and more uniform, and the representative 3D printed green parts exhibit smallest dimensional error [44]. In the present study, the green parts printed at 55% binder saturation rate exhibited relatively better dimensional accuracy for both the feedstock types.
Results of Porosity Measurements
The porosity values of binder jet SS316L parts were measured by using the Archimedes principle (water immersion method) and further confirmed by mercury intrusion porosimetry and image analysis of optical micrographs, respectively. A theoretical pure SS316L stainless steel density of 8 g/cm³ was used for the porosity calculations. Several factors such as: (1) sintering parameters (isothermal sintering temperature, holding time, and heating rate), (2) binder volume controlled by the set binder saturation rates, (3) volume of PMMA space holder particles in the feedstock, and (4) feedstock characteristics, together affect the porosity values of SS316L parts. In the present study, the sintering conditions such as holding time of 2 h, heating and cooling rates of 5 °C/min and sintering atmosphere of high vacuum with partial pressure of argon were kept constant throughout the experiments. Isothermal sintering temperature effects on the porosity values of SS316L parts fabricated at a constant binder saturation rate (set to 55%) using pure SS316L and SS316L + 30 vol. % PMMA feedstock are shown in Figure 4.
Figure 4. Porosity results of SS316L parts sintered at 1000 °C, 1100 °C and 1200 °C using pure SS316L, and SS316L + 30 vol. % PMMA feedstock fabricated at 55% binder saturation rate. Note: Porosity of SS316L parts was measured by using the Archimedes (water immersion) method.
With increasing sintering temperatures, the interstitial void spaces between the SS316L powder particles decrease and thereby decrease the pore sizes and porosity of the parts but affect their pore openness index values with presence of possible pore closure within the parts.
For the sintering temperatures between 1000 °C and 1200 °C, porosity values of 40-45% were observed for the parts fabricated by using pure SS316L feedstock and ~57-61% for the parts fabricated by using SS316L + 30 vol. % PMMA feedstock, respectively. The SS316L parts sintered up to 1100 °C exhibited POI of ~1 indicating all the pores to be open and well interconnected. At the sintering temperature of 1200 °C, SS316L parts exhibited POI of ~0.87-0.91 indicating most of the pores to be open. The reduced POI value at 1200 °C is due to the enhanced SS316L powder consolidation during sintering at high isothermal sintering temperature forming strong neck connections and subsequent densification. Figure 5 shows the combined influence of different binder saturations rates, isothermal sintering temperatures and presence of PMMA space holders in the 3D printing feedstock on the porosity values of SS316L parts. The binder volume did not contribute much to the porosity values of SS316L parts and with increase in the binder saturation rates (up to 150%), there was only a feeble change (by ± 2%) in the porosity of the parts. Porosity changes (by ±2%) are attributed to possible changes in the powder packing during 3D printing as a result of rearrangement of powder particles on the powder bed during powder recoating and subsequent binder jetting with changes in the X and Y binder droplet spacing and thereby causing changes in the binder penetration behavior into the packed powder bed (Figure 2). Lighter and mono-sized PMMA particles with density of~1.18 g/cm 3 and size of 30 µm are highly prone to become rearranged due to powder segregation effects during the powder recoating process and infiltration of binders into packed powders every layer [45]. The density of PMMA is closer to the density of aqueous binder (~0.9-1 g/cm 3 ), but there is a strong mismatch in the density values between PMMA and SS316L (~8 g/cm 3 ).
The results of porosity and average pore size of SS316L parts measured by using the mercury intrusion method are discussed in Table 4 and Figure 6. For comparison purpose, the parts fabricated by using the lowest (55%) and the highest (150%) binder saturation rates were studied. Pore sizes of parts fabricated by using pure SS316L feedstock were found to be in the range of 10-20 µm, whereas parts fabricated by using SS316L + 30 vol. % PMMA feedstock exhibited a bigger pore size range of 20-40 µm, respectively. This increase in the pore size is attributed due to the decomposition of 30 µm PMMA powder particles used as space holder material leaving behind bigger pores of size ≥30 µm. No pore size greater than 40 µm was observed.
Figure 6. Representative Mercury intrusion porosimetry results of SS316L parts indicating the average pore sizes. *Note: part name format follows sintering temperature_feedstock_binder saturation rate. For example: 1100_316L_55% denotes SS316L parts sintered at 1100 °C fabricated using 55% binder saturation rate using pure SS316L feedstock.
Optical micrographs of SS316L parts revealing the relative 2D porosity information and the results of pore fraction measured by image analysis are shown in Figure 7. Further, the microstructure images revealed the presence of big voids within the porous SS316L parts fabricated by using pure SS316L feedstock sintered at 1000 °C and for the other SS316L parts fabricated by using SS316L + 30 vol. % PMMA feedstock sintered at 1000 °C and 1100 °C, respectively. Binder saturation rate for all the parts was set to 55%. The images were sampled from the cylindrical coupons (Figure 2) at locations closer to their center. The arrow marks indicate the presence of big voids within the parts leading to insufficient particle necking.
The porosity values measured by using the mercury intrusion method (Table 4) and pore fraction values by image analysis of optical micrographs (Figure 7) were found to be in consensus with those measured by using the Archimedes method. The binder saturation rate was found to have no significant influence on the porosity of parts fabricated using pure SS316L feedstock. The presence of PMMA in the feedstock led to decrease in the porosity with increasing binder saturation rates and this behavior was found to be in consensus with the previous study on binder jetting of PMMA which is due to the interaction between the binder phase and PMMA [46]. Table 5 discusses the porosity values of SS316L and austenitic steel parts fabricated by conventional and selective laser sintering processes. The results indicate that finer pores with controlled pore size and pore interconnectivity are able to be achieved by binder jetting with feedstock containing space holders proposed in the present study.
Results of Chemical Analysis
Keeping the carbon (C), hydrogen (H) and oxygen (O) contents at the lowest levels throughout the binder jetting and subsequent sintering processes is of paramount importance, especially for the successful processing of low carbon austenitic stainless steel SS316L, to ensure its superior corrosion and mechanical properties. Table 2 presents the chemical composition results of the as-received SS316L starting powders and final sintered SS316L parts fabricated using different binder saturation rates with two types of feedstock.
The results indicated that the chemical composition of the final sintered SS316L parts does not change throughout the processing and exactly matches that of the starting SS316L powders at the 55% binder saturation rate. At 150% binder saturation, there is likewise no change in the chemical composition of the final parts fabricated with pure 316L feedstock. Both the binder phase and PMMA consist of C and H as the major constituents, which is why the parts fabricated using SS316L + 30 vol. % PMMA (at 150% binder saturation) show an increase in the C content to 0.07 wt. %, which still matches the composition of SS316. The absence of change in the C, H and O composition confirms the binder jetting processing route used in the present study to be contamination-free and suitable for fabricating porous 316L stainless steel; an optimum binder saturation rate (for example, 55% in the present study), together with the right selection of binder phase and space holder materials, will further help to avoid contamination, especially for C-sensitive materials such as SS316L stainless steel.
Results of Shrinkage Measurements
For the SS316L porous parts to exhibit good mechanical properties, strong interparticle necking between the powder particles should initiate during sintering, for which SS316L atoms can transport from the interior of the part (bulk transport phenomenon or volumetric diffusion) and from the surface (surface phenomenon) to fill the vacant pore sites around the particle contact points to form necks that subsequently shrink the part. Figure 8 shows representative evidence of interparticle necking for high porosity SS316L parts fabricated using SS316L + 30 vol. % PMMA feedstock and sintered at 1200 °C. The effects of isothermal sintering temperatures on the volumetric shrinkage values of SS316L parts are shown in Figure 9A. As expected, the shrinkage grows with increasing isothermal sintering temperature, and maximum volumetric shrinkage values of ~9.66% and ~12% were observed for the SS316L parts sintered at 1200 °C fabricated by using pure SS316L and SS316L + 30 vol. % PMMA feedstock types, respectively. The shrinkage in the parts is predominantly due to the SS316L powder consolidation during sintering at higher isothermal sintering temperatures and partially due to the decomposition of binders and PMMA [6]. After the decomposition of 30 vol. % PMMA from the SS316L parts, fewer SS316L powders surrounded the spaces (or bigger voids were present), which inhibits the network formation between the powders. Apart from the isothermal sintering temperature, the shrinkage of the SS316L sintered parts also depends on their initial powder packing before sintering, which affects the part porosity, as the reduction of micro-pore sizes within the parts during sintering contributes the most to the shrinkage of the high porosity parts [47]. Feedstock density characteristics affect the powder packing. By using pure SS316L feedstock exhibiting higher density characteristics (Table 3), the powder packing within the parts can be substantially improved and shrinkage can thereby be mitigated; similar behavior was observed for SS316L parts fabricated by metal injection moulding with feedstock containing SS316L nanoparticles without space holders, which contributed to their particle packing density and thereby exhibited low shrinkage values [47]. For the same sintering conditions, SS316L parts (sintered at 1200 °C) exhibited higher shrinkage values at the high binder saturation rate (150%) when compared to the parts fabricated at the low binder saturation rate (55%) for all the X, Y and Z directions (Figure 9). Further, the shrinkage values of SS316L parts fabricated with SS316L + 30 vol. % PMMA feedstock were found to be significantly higher when compared to those of the parts fabricated with pure SS316L feedstock, and the results are consistent with the previous work by Ziaee et al. [48] confirming that parts containing fewer pore formers possess lower shrinkage.
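Since the paper quotes both linear (per-axis) and volumetric shrinkage, it may help to note how the two are related when each axis shrinks by its own fraction; the sketch below uses illustrative fractions, not the measured values.

def volumetric_shrinkage(sx, sy, sz):
    """Volumetric shrinkage (%) from fractional linear shrinkages along X, Y, Z."""
    remaining = (1.0 - sx) * (1.0 - sy) * (1.0 - sz)
    return (1.0 - remaining) * 100.0

# Illustrative anisotropic case: ~3% in X and Y, ~4% in Z (fractions, not percent).
print(round(volumetric_shrinkage(0.03, 0.03, 0.04), 2))  # ~9.67 %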
The mismatch or differences in the linear shrinkage values along the X, Y and Z directions of the part, i.e., the presence of anisotropic shrinkage, is predominantly due to: (1) non-uniform binder droplet spacing along the X, Y directions arising during 3D printing that is majorly controlled by the binder saturation values (Figure 2), and (2) the set layer thickness value (100 µm) that alters the spacing in the Z direction, affecting the powder packing within the green part. In the present study, shrinkage anisotropy along the Z direction was found to be the maximum. The shrinkage values along the X and Y directions of the part were found to be relatively more uniform, especially at 100% binder saturation rate, and this is attributed to the almost equal-sized X and Y binder droplet spacing (~43 µm). Further, the linear shrinkage values were found to be less than 5%, indicating surface diffusion as the predominant mechanism. This is also supported by Figure 8 and the theory suggesting that the particle necking should be greater than one-third of the particle diameter to realize volumetric diffusion [49,50].
Results of Dynamic Young's Modulus and Compression Properties
Table 5 presents the dynamic Young's modulus and compression property results of the binder jetting-processed porous SS316L parts and Figure 10 shows the representative stress-strain curves under compression loading, respectively. The dynamic Young's modulus of the SS316L stainless steel parts decreased with increasing porosity values, and Young's modulus values in the range of 2-29 GPa were able to be achieved with changes in the porosity of the parts. The stress-strain curves observed during compression loading of porous parts can be generally categorized into three distinct regions [51]: (1) within the elastic regime, stress increases linearly with strain, (2) followed by a long deformation plateau with a small increase of flow stress to large strain, and (3) a final densification stage where the flow stresses rapidly increase, resulting in fracture. At low stress values, all the stress-strain curves of binder jetting processed porous SS316L parts exhibited a very similar behavior under compression, where the stresses rose almost linearly with strain (elastic deformation). The SS316L parts fabricated by using pure SS316L feedstock sintered at 1200 °C exhibited the maximum 0.2% compressive yield strength (0.2% CYS) of 52.7 MPa, which is almost 50% (34.7 MPa) and 100% (26.2 MPa) greater than that of the other binder jet SS316L parts fabricated by using pure SS316L feedstock sintered at 1100 °C and 1000 °C, respectively. The 0.2% CYS was found to be significantly affected for the parts containing high porosity values (~60%) fabricated by using SS316L + 30 vol. % PMMA feedstock, exhibiting 0.2% CYS values of only 12.6 MPa and 16.2 MPa when sintered at 1100 °C and 1200 °C, respectively. Specific compressive strength is the ratio of 0.2% CYS to the density of the material. Specific compressive strength decreased with the increasing porosity of the parts. A high specific compressive strength of ~11 MPa/(g/cm³) was observed for the parts fabricated with pure SS316L feedstock sintered at 1200 °C.
Beyond the elastic regime, the deformation plateau varied significantly with porosity. The parts fabricated using pure SS316L feedstock sintered at 1100 °C and 1200 °C, with porosities of 44% and 40%, respectively, exhibited a long deformation plateau followed by densification, in which the flow stresses increased rapidly, achieving significant ultimate compressive strength (UCS) values of 172 MPa (1100 °C) and 520 MPa (1200 °C); the corresponding fracture strain (FS) values were ~24% (1100 °C) and ~36.4% (1200 °C). The work of fracture, or energy absorption, of a material is obtained from the area under the stress-strain curve; it reached a maximum of ~116.7 MJ/m³ for the SS316L parts fabricated with pure SS316L feedstock sintered at 1200 °C, indicating a higher capability to absorb energy until fracture under compressive loading. In contrast, the SS316L parts sintered at 1000 °C (porosity of 45.3%) fabricated using pure SS316L feedstock failed with UCS and FS values of only ~47 MPa and ~5.1%, respectively, indicating poor consolidation of the SS316L powders during sintering and insufficient or weak particle necking, as revealed by the microstructural characterization showing the presence of large voids (Figure 7). Similarly weak behavior was observed for the high-porosity (~60%) SS316L parts fabricated using the SS316L + 30 vol. % PMMA feedstock, which exhibited UCS values of only 35 MPa and 75.4 MPa when sintered at 1100 °C and 1200 °C, respectively, with corresponding FS values of 13.2% (1100 °C) and 27.3% (1200 °C). The SS316L parts fabricated using SS316L + 30 vol. % PMMA and sintered at 1000 °C were very fragile and failed at very low compressive stresses. The compression properties of high-porosity metals follow the Gibson and Ashby model [52]. The relative density of porous metals is the most significant structural property influencing the stresses upon loading and is given by ρ_exp/ρ_Solid, where ρ_exp is the experimental density of the porous SS316L parts and ρ_Solid is the theoretical density of solid, fully dense SS316L stainless steel (8 g/cm³). The relationships between relative stress, Young's modulus and relative density are calculated according to Equations (7) and (8) [53], in which C1 and C2 are positive constants that mainly depend on the pore structure [19]. The relationships between 0.2% CYS_Exp, E_Exp, and the relative densities of the binder jetting-processed SS316L parts are shown in Figure 11. Both the 0.2% CYS and E of the porous parts increased with increasing relative density (or decreasing porosity), as observed in other studies on porous metals [54]. The significant increase in the overall compression properties of the parts fabricated using pure SS316L feedstock sintered at 1100 °C and 1200 °C led to a sudden upward shift in the E and 0.2% CYS versus relative density curves (Figure 11).
This upward shift, or sudden increase in the slope of the curve, indicates significant powder consolidation and the formation of strong interparticle necking between SS316L powders during sintering, producing a sudden increase in the 0.2% CYS and Young's modulus values compared to the other SS316L parts. These findings are consistent with the compressive stress-strain curves (showing a long deformation plateau) and with the microstructural investigations (Figure 7) indicating an absence of voids for the SS316L parts sintered at 1100 °C and 1200 °C, unlike the other SS316L samples. The change in slope of the curves (Figure 11) indicates that the constants C1 and C2 from Equations (7) and (8) rely significantly on the interparticle necking between 316L powders during sintering, but the dependency of these constants on the SS316L porous structures is not well understood, especially for high-porosity parts, as reported in a previous study [19].
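Equations (7) and (8) appear earlier in the paper and are not reproduced in this excerpt. For orientation, in the standard Gibson-Ashby formulation for open-cell porous metals they typically take the following form (a sketch of the usual scaling relations, not necessarily the exact expressions used by the authors):

\[
\frac{\sigma^{*}}{\sigma_{ys}} = C_1\left(\frac{\rho_{exp}}{\rho_{Solid}}\right)^{3/2},
\qquad
\frac{E^{*}}{E_{s}} = C_2\left(\frac{\rho_{exp}}{\rho_{Solid}}\right)^{2}
\]

where \(\sigma^{*}\) and \(E^{*}\) are the yield strength and Young's modulus of the porous part, \(\sigma_{ys}\) and \(E_{s}\) are the corresponding properties of fully dense SS316L, and \(C_1\) and \(C_2\) are the pore-structure-dependent constants discussed above.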
The compression properties of the fabricated porous SS316L parts are compared to those of cancellous bone types such as the femoral head, femoral condyle and vertebra [55]. The results (Table 5) indicate that the compression properties of the SS316L parts are close to those of the cancellous bone types and, in particular, match their Young's modulus values. Table 5 also lists the compression properties of several porous SS316L parts fabricated by conventional processes. The properties achieved in the present study are comparable to those of the conventionally processed porous SS316L parts and even surpass those with similar porosities.
The present work offers further insight into the correlation between porosity and the corresponding Young's modulus and compression properties of binder jetting-processed SS316L stainless steel parts, and it provides a range of properties to target different applications as required, with parts having open pores and a controlled pore size of <40 µm. In future work, corrosion studies and the effect of varying layer thicknesses and different X and Y binder droplet spacings on the porosity and shrinkage anisotropy of binder jet-processed SS316L parts will be investigated.
Conclusions
High-porosity 316L stainless steel (SS316L) parts with a total porosity of ~40-60% and a pore openness index of 0.87 to 1 were successfully fabricated by binder jetting and subsequent sintering (up to 1200 °C) coupled with the powder space holder (PSH) technique, using 30 µm equal-sized PMMA powders as the PSH. Two approaches were systematically studied to understand their effects on the porosity of binder jet parts: (1) a pores-by-processing approach, varying the isothermal sintering temperatures (1000 °C, 1100 °C and 1200 °C for 2 h each) and the binder volumes at different binder saturation rates (55%, 100% and 150%), and (2) a pores-by-feedstock-modification approach, adding the PSH (30 vol. % PMMA) to the pure SS316L feedstock.
The following are the primary conclusions of the present study:
• Isothermal sintering temperature plays a vital role in controlling the porosity of SS316L parts; porosity increased with decreasing sintering temperature, whereas varying the binder saturation rates affected the porosity values by only ±2%. Through the pores-by-processing approach (present study), a porosity of 40%-45% was achieved.
• With the addition of 30 vol. % PMMA powders to the SS316L feedstock, the porosity values of parts sintered up to 1200 °C (2 h each) increased significantly to 57%-61%.
• All the parts exhibited anisotropic shrinkage, especially along the Z direction, predominantly due to the mismatch between the set layer thickness (100 µm) and the X and Y binder droplet spacing, which varies with the set binder saturation rate.
• The dynamic Young's modulus and compression properties of the SS316L stainless steel parts increased with increasing relative density (or decreasing porosity). The SS316L parts fabricated using pure SS316L feedstock sintered at 1200 °C exhibited the maximum overall compressive properties, with a 0.2% compressive yield strength of 52.7 MPa, an ultimate compressive strength of 520 MPa, a fracture strain of 36.4%, and an energy absorption of 116.7 MJ/m³.
• Low Young's modulus values in the range of 2-29 GPa could be achieved. The Young's modulus and compression properties of the binder jet SS316L parts were found to be on par with those of conventionally processed SS316L parts, even surpassed those with similar porosities, and matched those of the cancellous bone types.
• The final chemical composition of the sintered SS316L parts exactly matched that of the starting SS316L powders, with no C, H or O contamination, confirming the binder jetting process route to be contamination-free and ideal for fabricating porous austenitic 316L stainless steel. An optimum binder saturation rate of 55% was found to be most favourable for fabricating contamination-free SS316L parts with high dimensional accuracy.
Baseline Analysis of Endophytic Fungal Associates of Solenopsis invicta Buren from Mounds across Five Counties of Guangdong Province, China
Red imported fire ant mounds have been suggested as a potential reservoir for beneficial entomopathogenic fungal species that play more complex roles in the ecosystem beyond infecting insects. In the current study, the assemblage of fungal symbionts of the red imported fire ant (RIFA) was obtained across five cities in Guangdong Province, China. The sampling areas were selected because of the high occurrence of fire ant mounds in these regions. Mound soils, plant debris within mounds, and ants were collected from three sampling locations in each city for the potential isolation of entomopathogenic fungal associates of RIFA. All samples were collected during the spring of 2021. Following successful isolation from the substrates, the patterns of fungal species composition and richness were evaluated. In total, 843 isolates were recovered, and based on their phenotypic distinctiveness and molecular characterization using DNA sequences of multiple loci, including the ITS, SSU, and LSU regions, 46 fungal taxa were obtained, including 12 that remained unidentified. Species richness and abundance were highest in the mound soils, while the lowest values were recorded from the ant body. Among the different locations, the highest abundance level was recorded in Zhuhai, where 15 fungal taxa were cultivated. The most common taxon across all substrates and locations was Talaromyces diversus. A baseline analysis of the fungal community composition of RIFA will improve our understanding of the interactions between these social ants and their associated microbial organisms, and this knowledge in turn will be important for the successful management of the RIFA.
Introduction
The microbial communities inhabiting the soil have rich diversity, and their distribution is based on the soil type, climatic conditions, and soil use (i.e., whether the soil is used for agricultural purposes or not) [1]. According to available data, a potential 1.5 million fungal species are contained in the soil, while only about 10% of these abundant microorganisms have been studied until recently [2]. The examined species include entomopathogenic, endophytic, saprophytic, and some edible fungi [3]. For the entomopathogenic species, approximately 90 genera and over 700 species have been reported so far [4][5][6].
Red imported fire ants, Solenopsis invicta Buren (Hymenoptera: Formicidae), are difficult to control due to their aggressiveness, effective foraging, and their ability to mobilize rapidly and actively attack intruders when their mounds are disturbed [7]. These social ants are notorious for invading other exotic and native ant species, which can eventually result in the displacement or elimination of essential native species. It is therefore important to keep a close watch on the activities of these social insects, especially by monitoring their rate of dispersal and evaluating the influence of environmental conditions or microbial associates on their development, behavior, survival, etc. These efforts could be of great importance for the successful management of the red imported fire ant (RIFA). Insect-associated microorganisms have widely demonstrated the capacity to infect their hosts. Notably, fungi, bacteria, viruses, or virus-like organisms have been reported to cause visible infections in RIFA [8,9].
Baseline analysis of fire ant mounds and plant debris would reveal unique or distinct naturally occurring fungal associates of RIFA. As the microbial species within a microbial population may be beneficial or pathogenic to the hosts, isolation and identification of the associated microbes could help identify potential biological control options for these noxious pests. Arguably, more research should focus on the diversity of fungi associated with S. invicta, as the currently available data are limited. Tellingly, a previous study suggested RIFA mound soils as a more desirable source of soil-inhabiting fungi: the collected mound soils exhibited a significantly higher abundance of fungi, with roughly 19 times more colony-forming units recorded than in non-mound soils [10]. However, most previous studies have focused on investigating various entomopathogenic fungal species as potential biocontrol agents for RIFA, while only a few studies have conducted surveys to explore the associated microorganisms of RIFA [11]. Survey studies on RIFA microbial associates conducted in China are still limited, while a few studies conducted in the United States and some other countries are available [7,10,12-17].
The extraction of novel fungal isolates from different soils and other environmental samples has been accomplished using multiple isolation methods. Fungal isolation using selective media is apparently the most common, while several insect-baiting methods have also been widely reported as effective for the isolation of entomopathogenic fungal species [4,18-22]. Some of these studies, as well as a few other RIFA-related studies, have revealed that several generalist and entomopathogenic fungi occur in fire ant mounds [7,10,12,13]. However, Woolfolk et al. [14] argued that the data available from most of these studies are restricted to only a few locations, leaving the identity of the generalist and entomopathogenic fungal associates of the fire ants still relatively unclear. This has raised the need to expand the available data on fire ant mound-associated fungal microbes. In this vein, an extensive survey of the fungal microbes associated with the red imported fire ant was conducted in the current study. The study assessed the species richness, diversity, and densities of the culturable fungi associated with RIFA, plant debris within mounds, and mound soils collected from various cities across Guangdong Province of China. It is generally believed that isolating and accurately identifying the RIFA microbial associates would be vital for the effective management of the red imported fire ant.
Soil and Ants Sampling
During the spring of 2021, soil samples were collected from fire ant mounds from selected locations across five cities, namely Dongguan, Guangzhou, Huizhou, Jiangmen, and Zhuhai, located within Guangdong Province of China ( Figure 1).
From each of the five cities, soil samples were collected from three randomly selected mounds. For individual sampling, approximately 500 g of soil was collected from each mound, taken at about 10-15 cm below the surface using a sterilized hand shovel. The soil samples were transported to the laboratory for analysis in sealed plastic bags.
Procedure for Fungi Isolation from Samples
Media preparation: The selective media method was deployed for the isolation of fungal isolates from collected samples. First, 40.1 g of potato dextrose agar (PDA; Guangdong Huankai Microbial Sci. and Tech. Co., Ltd., Guangzhou, China) was dissolved in 1 L of distilled water and was amended with 100 mg/L tetracycline hydrochloride (Sangon) and 300 mg/L streptomycin sulfate (Sangon) to inhibit bacteria growth.
Isolation from soil samples: Prior to microbial isolation, existing clumps and stones were carefully removed from soils using a 2 mm pore sieve. Following the procedure of Dhar et al. [23], about 50 g of soil per individual mound was suspended in 500 mL sterile distilled water and vortexed at 200 rpm for approximately 25-30 min on a rotary shaker at room temperature, enabling the fungal spores present in the soil to be dislodged. This procedure was followed by allowing the soil particles to settle for about 15-20 min, and from the third serial dilution of the supernatant, 100 µL of solution was evenly spread on PDA solid media using a sterile disposable cell spreader. Inoculated plates were transferred into a BOD incubator (BS-1E, China) and incubated at 25 °C for 5 days.
Isolation from mound plant debris: Plant debris were removed from soil, and remnant soils were carefully removed using a brush. The remaining soil clogs were removed from the plant tissues by washing them in sterile distilled water for approximately 1 min. The procedures described by Woolfolk et al. [14] were followed for fungi isolation from plant samples. Plates were supplemented with antibiotics to minimize contamination and incubated for 5 days, as previously described.
Isolation from ant bodies: The isolation procedure was carried out following the guidelines of Baird et al. [7] with slight modifications. In the current study, 24 ants were selected per mound, while each individual media plate received 4 ants (a total of 6 plates per mound). Plates were also supplemented with antibiotics and incubated under similar conditions as previously described. All fungal mycelium emerging from the ant tissues were subsequently sub-cultured in fresh growth medium for up to four weeks until monocultures for all fungal isolates were cultivated.
Morphological Characterization
Following multiple sub-culturing, the monocultures were identified on the basis of their phenotypic distinctiveness prior to phylogenetic characterization. Morphological characterization of fungal strains followed the protocols as described by Humber [24]. Further characterization was performed following the guidelines of Meyer et al. [25], where an optical microscope system equipped with a digital camera was used to analyze the mycelia, conidia, and sporulation structures of individual fungal isolates.
Molecular Identification
Extraction of genomic DNA from fungal cultures (about 7 days old) was completed with the help of a genomic DNA extraction kit (Rapid Fungi Genomic DNA Isolation Kit) provided by the manufacturer (Sangon, Shanghai, China). Extracted DNA was amplified using primers specific to the targeted regions (Table 1).
Table 1. List of primer pairs selected for fungal DNA fragment amplification. [Table columns: Targeted Region, Primer Used; the row entries, including the internal transcribed spacer (ITS) region, are not recoverable from this excerpt.]
The PCR procedure used a 50 µL reaction mix containing 25.0 µL of 2× High-Fidelity PCR MasterMix (Tiangen Biotech, Shanghai, China), 2.0 µL of each primer, 3.0 µL of DNA template, and 18.0 µL of PCR-grade water. The PCR conditions set for fungal DNA amplification were strictly in accordance with the guidelines of the manufacturer. Visualization of the amplified DNA was completed on a 1.0% m/v agarose gel, and Sanger sequencing was conducted by Sangon Biotech Co., Ltd., Guangzhou, China.
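As a quick consistency check of the reaction-mix composition above (assuming "2.0 µL of each primer" means 2.0 µL of both the forward and the reverse primer, which the text does not state explicitly), the listed volumes sum to the stated 50 µL total; a minimal sketch:

```python
# Hypothetical sanity check of the 50 µL PCR reaction mix described above.
# Component labels and the forward/reverse primer split are illustrative
# assumptions, not taken verbatim from the paper.
reaction_mix_ul = {
    "2x High-Fidelity PCR MasterMix": 25.0,
    "forward primer": 2.0,
    "reverse primer": 2.0,
    "DNA template": 3.0,
    "PCR-grade water": 18.0,
}
total = sum(reaction_mix_ul.values())
assert total == 50.0, f"Expected 50 µL, got {total} µL"
print(f"Total reaction volume: {total} µL")
```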
Phylogenetic Analysis
BioEdit v 7.1.9 [26] was utilized to manually edit the obtained fungal sequence traces, while the multiple loci were edited and aligned using Clustal W [27]. With the help of BLASTn, reference sequences were downloaded from the GenBank database of the National Center for Biotechnology Information (NCBI) (http://www.ncbi.nlm.nih.gov/, accessed on 25 February 2023). In addition, we performed phylogenetic analysis using the sequences produced for the present study together with the reference sequences. The neighbor-joining method, based on the maximum composite likelihood distance, was deployed for the analysis of the sequences in MEGA v. 11.
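For readers who wish to reproduce a comparable distance-based tree outside MEGA, the sketch below shows one possible neighbor-joining workflow in Biopython. It is not the authors' pipeline: the input file name is a hypothetical placeholder, and the simple 'identity' distance is used in place of the maximum composite likelihood model.

```python
# Minimal neighbor-joining sketch with Biopython (not the authors' actual pipeline).
# "concatenated_SSU_LSU_ITS.fasta" is a hypothetical, pre-aligned multi-locus file.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("concatenated_SSU_LSU_ITS.fasta", "fasta")

# Pairwise distance matrix; 'identity' is a simple stand-in for the
# maximum composite likelihood distances computed in MEGA.
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# Build and display the neighbor-joining tree.
constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)
Phylo.draw_ascii(nj_tree)
```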
Statistical Analysis
Using the frequencies of isolation, the biodiversity indices of the fungal samples were determined in accordance with previously described procedures [7,28,29]. Statistical analysis was performed to determine the index of diversity (H'), species richness (n), and evenness (J'). The diversity of fungal species was calculated via the Shannon-Wiener index (H'), computed as H' = −∑ Pi ln Pi, with Pi = Ni/Nt, where Ni denotes the number of isolates belonging to the i-th genus and Nt represents the total number of isolates in the group of interest (i.e., sampling site or substrate). In addition, the community coefficient (CC) was computed using the following formula: CC = C/(S1 + S2 − C), where C represents the number of fungal species common to both substrates/locations under study, while S1 and S2 denote the numbers of fungal species in the individual communities, i.e., substrate/location 1 and substrate/location 2, respectively.
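The three indices above can be computed directly from the isolation counts; a minimal sketch (the example numbers and taxon sets below are illustrative, not the study's data):

```python
import math

def shannon_index(counts):
    """Shannon-Wiener diversity H' = -sum(p_i * ln p_i) over taxa with counts > 0."""
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

def evenness(counts):
    """Pielou's evenness J' = H' / ln(S), where S is the number of taxa present."""
    s = sum(1 for n in counts if n > 0)
    return shannon_index(counts) / math.log(s) if s > 1 else 0.0

def community_coefficient(taxa_a, taxa_b):
    """CC = C / (S1 + S2 - C), with C the number of taxa shared by the two communities."""
    a, b = set(taxa_a), set(taxa_b)
    shared = len(a & b)
    return shared / (len(a) + len(b) - shared)

# Illustrative usage with hypothetical isolation counts per taxon
counts = [223, 108, 70, 54]
print(shannon_index(counts), evenness(counts))
print(community_coefficient({"T. diversus", "T. pinophilus"},
                            {"T. diversus", "A. flavus", "T. minioluteus"}))
```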
When required, one-way analysis of variance (ANOVA) was used for data analysis. The least significant difference (LSD) test at p < 0.05 was employed to perform multiple comparisons among treatment means.
Morphological Characterization of Fungal Isolates
Following multiple sub-culturing until unique cultures were cultivated for all fungal isolates, morphological identification was conducted to determine their distinctive phenotypic features ( Figure 2).
Molecular Identification and Phylogenetic Placement of Isolates
In addition to morphological characterization, the successful classification of isolates into specific fungal taxa was achieved by molecular identification based on DNA sequences of multiple loci and phylogenetic characterization. In total, 34 fungal taxa were identified, while 12 isolates are yet to be identified. Phylogenetic characterization was carried out using the combined dataset of three loci (SSU + LSU + ITS) while obtaining supplementary sequences available in the database of NCBI (Figure 3). The DNA sequences derived in the current study can be found in GenBank, and the accession numbers are provided (Table 2).
Assessment of Fungal Species Richness, Diversity, and Densities
For the assessment of fungal species richness, diversity, and densities, fungal isolates were classified into two sub-groups, i.e., sampling sources and locations. In general, the isolation percentage across the fungal taxa varied from 3.9% to 26.5%, and about 24.1% were unidentified. Across various cities and substrates, T. diversus was the most commonly extracted taxa at 26.5%. Other taxa with the greatest percent isolation frequencies include T. pinophilus (12.8%), T. minioluteus (8.3%), and A. flavus (6.4%). The identified taxa were unevenly distributed across the three examined environmental samples: soil (71.7%), plant debris (21.7%), and ant bodies (6.5%) (Figure 4).
Among the fungal species, only T. diversus (62.5%) and T. pinophilus (37.5%) were successfully isolated from the ant body, while the predominant fungal species across plant debris was T. diversus. All fungal species were successfully isolated from soil samples, where A. flavus and T. minioluteus were only obtained from soils and not from any other substrates. Only T. diversus and T. pinophilus were recorded across all substrates. Following the analysis of sampling locations and environmental samples, the results show that the highest taxa abundance values were from mound soils (33) and Zhuhai (15), respectively (Figure 5).
For the assessment of species diversity using Shannon's diversity index, the fungal species diversity values were 1.49 (H') across cities and 0.75 (H') across substrates. The evenness of fungal species was 0.92 (J') across cities and 0.68 (J') across substrates (Table 3). With regard to the coefficient of community values obtained for fungal species across the five locations and the three substrates examined, the values ranged from 0.29 to 0.57 (Table 4). For locations such as Zhuhai, Dongguan, and Huizhou, the computed CC values for all paired substrates, namely ant body vs. plant debris, ant body vs. mound soil, and plant debris vs. mound soil, were 0.0, as no fungal species were shared between the different substrates. For the other locations, namely Jiangmen and Guangzhou, CC values ranged from 0.33 to 1.0 and from 0.00 to 0.33, respectively (Table 5).
Discussion
The current study conducted a baseline analysis of the fungal community assemblage in RIFA mounds across five cities located within Guangdong Province, China. The quest to expand the available data on potential biological control agents of RIFA has been one of the major motivations for conducting scientific studies of insect host-microbe interactions. The diversity, richness, and densities of fungal associates of RIFA mounds were examined across the ant body, mound soils, and plant debris within the mounds.
The results revealed unevenness in the distribution of fungal species within the substrates and across the various locations examined. The highest species richness and abundance (total isolations) were recorded in the mound soils, while lower values were recorded for the ant body and mound plant debris. With regard to the isolation sites, the highest fungal species richness was recorded in the samples collected from Dongguan and Guangzhou, while the fungal taxa abundance level was highest in the samples collected from Zhuhai.
The overall high species richness of the soil samples is in line with the study of Woolfolk et al. [14], which provided evidence that fungal species richness in mound soils is significantly higher than in plant debris within the mound and in the ant body. Similarly, another study conducted by Baird et al. [7] also reported the highest total species richness values in the mound soils, which were significantly higher than the values recorded for ant bodies and plant debris within mound soils.
The current study reveals the successful isolation of some beneficial ant-fungal associates, with the most commonly isolated taxa across all substrates and locations being T. diversus, T. pinophilus, A. flavus, and T. asperellum. Most of the identified fungal species could be classified as generalists, while only a few have previously been reported as insect pathogenic fungi, for instance, A. flavus in Aphis fabae [30], P. citrinum in Spodoptera frugiperda [31], and T. asperellum [32]. The possibility of some species of entomopathogenic fungi existing within the RIFA mound has been reported. For instance, in a related study, the cosmopolitan insect pathogenic fungus Beauveria bassiana (Balsamo) Vuillemin was successfully isolated from mound soils, plant debris within mound soils, and ant bodies [7]. Similarly, two other common insect pathogenic fungi, i.e., Purpureocillium lilacinum and Metarhizium anisopliae (Metschnikoff) Sorokin, were isolated from mound soils, mound plant debris, and ant bodies by Woolfolk et al. [14]. These two studies are similar to several other related studies in which a larger percentage of the cultured fungi belonged to the artificial assemblage fungi imperfecti (Deuteromycetes), while the majority were documented as non-pathogenic fungal species [10,13,33].
The findings of the present study are similar to those of numerous other studies that have revealed the possibility of fire ant mounds serving as a good source of important or beneficial microbial symbionts of RIFA [7,14,34]. However, the specific roles played by the fungal species characterized in the current study in regulating RIFA populations and in other environmental processes are yet to be clearly defined. For example, the definite interactions between RIFA and T. diversus, which was the most abundant species across all examined substrates and locations, have not been adequately documented in the literature, whereas a number of fungal species that exist as microbial associates of ant mounds have been reported to have the potential of naturally regulating colony populations. This possibility was reported for Paecilomyces lilacinus [7]. This fungus draws advantage from its ability to survive in a wide range of agricultural ecosystems as a saprophyte, an entomopathogen, and a nematophagous fungus [35]. Similarly, a number of Fusarium species are also known for their saprophytic lifestyles, where they can exist as opportunists on plant debris within mound soils or as plant parasites in many living plant species, although their effects or benefits on S. invicta or their mounds have not been well documented [14]. As fungal species are able to survive in various habitats, their existence in fire ant mounds has been suggested to be secondary [7]. This is similar to many fungal species that have been classified as rhizosphere-competent, hence their ability to colonize and survive in a variety of soils across different regions [19]. It has also been found that certain fungal associates of the RIFA could display a host-protective mechanism against foreign insect pathogenic fungi. This was evident in RIFA colonies' association with Hypocrea lixii, which appears to protect the ant colonies from being colonized by other entomopathogenic fungi such as B. bassiana or M. anisopliae [14].
Most of the fungal species classified in the current study have previously been isolated from various soil types (including mound and non-mound soils and cultivated and non-agricultural soils), regions, or habitats (including agricultural and forest systems).
Moreover, a few species have been reported as phytopathogens of cultivated crops in many agricultural ecosystems. For instance, P. citrinum and A. flavus have both been reported as saprophytes with the ability to colonize plant debris within the soil as well as to exist as plant parasites, serving as causal organisms of several plant infections [36][37][38][39]. Although the cultured fungal species have demonstrated the ability to survive in different soil types and habitats, some previous studies have provided evidence of ant mound soils having greater abundance or richness of fungal species in comparison to non-mound soils. For instance, Zettler et al. [10] found about 19 times more colony-forming units in mound soils, although lower fungal species richness and diversity were reported in the mound soils. On the other hand, Woolfolk et al. [14] suggested that the habitat or geographical location of the isolation sites could exert much more influence on fungal species abundance than the ant colonies. In addition, temperature, pH, rainfall, and several other environmental conditions have been reported to greatly affect fungal diversity within RIFA mounds [10].
Conclusions
The current findings reveal the existence of diverse fungal species within the RIFA mound soils, the plant remnants deposited within the mound, and the body of the ants. The identified fungal species were found to be unevenly distributed across the substrates and locations examined. Several entomopathogenic fungal species such as B. bassiana, M. anisopliae, and P. lilacinum have previously been found within RIFA mounds, where they could act as natural regulators of colony populations. However, none of these three insect pathogenic species was successfully recovered in the current study. Notably, a few of the species recorded in this study have been reported as insect pathogenic fungi in some insects. The specific roles played by the cultured fungal species in RIFA survival, growth, invasion, and other ecological functions are still relatively unknown. Future research should focus on this direction.
Hijacking Host Immunity by the Human T-Cell Leukemia Virus Type-1: Implications for Therapeutic and Preventive Vaccines
Human T-cell Leukemia virus type-1 (HTLV-1) causes adult T-cell leukemia/lymphoma (ATLL), HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP) and other inflammatory diseases. High viral DNA burden (VL) in peripheral blood mononuclear cells is a documented risk factor for ATLL and HAM/TSP, and patients with HAM/TSP have a higher VL in cerebrospinal fluid than in peripheral blood. VL alone is not sufficient to differentiate symptomatic patients from healthy carriers, suggesting the importance of other factors, including host immune response. HTLV-1 infection is life-long; CD4+-infected cells are not eradicated by the immune response because HTLV-1 inhibits the function of dendritic cells, monocytes, Natural Killer cells, and adaptive cytotoxic CD8+ responses. Although the majority of infected CD4+ T-cells adopt a resting phenotype, antigen stimulation may result in bursts of viral expression. The antigen-dependent “on-off” viral expression creates “conditional latency” that when combined with ineffective host responses precludes virus eradication. Epidemiological and clinical data suggest that the continuous attempt of the host immunity to eliminate infected cells results in chronic immune activation that can be further exacerbated by co-morbidities, resulting in the development of severe disease. We review cell and animal model studies that uncovered mechanisms used by HTLV-1 to usurp and/or counteract host immunity.
Introduction
Human T-cell leukemia/lymphoma virus type-1 (HTLV-1) is the first pathogenic retrovirus discovered in humans [1,2]. Its current prevalence is unknown, with estimates ranging from 10 to 20 million people worldwide [3]. While the majority of HTLV-1-infected individuals remain asymptomatic, after a long period of clinical latency a low percentage of patients develop either adult T-cell leukemia/lymphoma (ATLL), a disease characterized by malignant proliferation of CD4 + T-lymphocytes, or HTLV-1-associated myelopathy/ tropical spastic paraparesis (HAM/TSP), a neurodegenerative condition of possible autoimmune nature [4][5][6][7][8][9][10][11][12][13]. HTLV-1 is also associated with other clinical disorders including HTLV-1-associated arthropathy, HTLV-1-associated uveitis, infective dermatitis, polymyositis, and bronchiolitis [14][15][16]. To date, no disease-specific differences in viral strains have been identified, and it appears that the chronic inflammation associated with HTLV-1 infection may be at the basis of diseases manifesting as lymphoproliferation and degenerative inflammatory diseases. Although some progress has been made in therapies for these diseases, the prognosis for ATLL is still dismal, and HAM/TSP remains an intractable disease. The aim of this review is to provide an overview of the current state of knowledge of the interplay between HTLV-1 and host immunity.
HTLV-1 Transmission
The genomic organization and nucleotide sequence of HTLV-1 isolates are highly conserved. To classify HTLV-1 into different subtypes (subtypes A-G) with characteristic geographic distributions, variations in the sequence of the HTLV-1 long terminal repeat (LTR) sections have been used [17]. The predominant subtype in central Australia is HTLV-1C. In some regions of Australia, HTLV-1C has an extremely high prevalence of approximately 30% infection among indigenous populations, representing a public health emergency. As well as a risk of developing ATLL and HAM/TSP, HTLV-1C-infected individuals have elevated mortality and develop lung inflammation, bronchiectasis and infectious diseases at an increased frequency [16]. Sequence analysis found that HTLV-1C is most divergent from the other HTLV-1 subtypes at the 3′ region of its genome. Whether these differences truly contribute to differences in viral pathogenicity or are due to virus-host co-evolution is not yet known [18].
HTLV-1 infection occurs primarily through cell-to-cell contact between the virusinfected CD4 + T-cell and uninfected cells. The most common routes of transmission are mother-to-infant, sexual intercourse (mainly male-to-female), and, rarely, blood transfusion (whole blood products and sharing syringes) and organ transplants [19,20]. Cell-free HTLV-1 infection has not been documented.
Risk factors of HTLV-1 vertical transmission are mainly associated with breastfeeding and additional factors, such as vulnerable socioeconomic position, with the rate of vertical transmission ranging from 3.9% to 22% in endemic areas [21]. Of note, vertical transmission has been associated with diseases such as uveitis and ATLL.
Another major route of transmission is sexual intercourse in both genders, as HTLV-1-infected cells are present in genital secretions, such as vaginal mucus or secretions and semen [22]. Many infected cells are found in semen, perhaps accounting for more effective male-to-female and male-to-male transmission [23]. Studies from Japan showed that the male-to-female transmission rate of HTLV-1 was 60.8%, but female-to-male transmission was only around 0.4% [24].
Increased HTLV-1 transmission may occur in individuals infected with other sexually transmitted diseases because these infections induce inflammatory reactions that recruit lymphocytes, which have a high proportion of CD4 + T-cells facilitating HTLV-1 transmission [25]. In addition, several factors such as age over 45 years old, menopause, and a high number of HTLV-1-infected cells can increase the number of HTLV-1 positive cells in the seminal and vaginal fluids [23], increasing HTLV-1 transmission risk. Increased risk of viral transmission has been associated with the presence of neutralizing antibodies against Tax. A 1991 study demonstrated that 75% of HTLV-1-infected males had antibodies against Tax [26]. It is possible that the level of Tax neutralizing Abs reflects a more active viral replication in vivo in males favoring virus transmission to females.
HTLV-1 transmission also may occur during allograft transplantation [36]. Despite low incidence, myelopathy cases have been reported following organ transplantation in HTLV-1 positive subjects in non-endemic countries [37]. In 2000, three patients who received organ transplants from the same donor, who was determined to be an HTLV-1 healthy carrier, then presented with clinical manifestations of myelopathy [38]. The development of HAM/TSP in recipients from HTLV-1 healthy carriers has been reported and can be rapid and progressive [30,39,40]. In the HTLV-1 endemic regions, more cases of HAM/TSP and ATLL subjects have been reported after allograft transplantation from HTLV-1 carriers [41,42]. It appears that the immunosuppression used to avoid organ rejection is a primary factor in frequent and rapid disease onset [20]. Alternative explanations include high doses of virus exposure due to the large numbers of infected cells in the contaminated organs.
Although HTLV-1 can infect various cell types, such as dendritic cells, B cells, macrophages and T-cells, the virus preferentially induces clonal expansion of CD4+ T-cells [43,44] and has an impact on T-cell function, contributing to disease progression [45][46][47]. Although in vitro cell-free virus transmission has been demonstrated for DCs and monocytic cell lines [48,49], HTLV-1 is believed to be transmitted to T-cells and myeloid cells primarily by cell-to-cell contact through the virological synapse, biofilm-like extracellular viral assemblies or cellular conduits [50][51][52]. Viral genes are responsible for clonal proliferation of infected cells, de novo infection, and infected cell survival. Importantly, viral gene expression is also critical for the virus's ability to evade the host immune response.
Immune Deregulation in HTLV-1 Infection
HTLV-1 infection is associated with diseases that are often accompanied by changes in immune responses [4-16]. ATLL is often associated with severe immune suppression, while HAM/TSP is accompanied by chronic inflammation. CD4 cells regulate immune responses, but due to viral infection, their function is altered, causing changes in inflammatory responses and immune tolerance. Increased Treg cell function and production of IL-10 and TGF-β trigger the immunosuppressive phenotype observed in patients [53,54]. In HAM/TSP patients, unlike in ATLL, investigators found decreased FoxP3 expression and reduced IL-10 and TGF-β [55]. The loss of suppressive function may cause chronic inflammation and T-cell and Natural Killer (NK) cell exhaustion, and may exacerbate the disease process. HTLV-1-infected CD4+ cells of HAM/TSP patients exhibit spontaneous proliferation with an increased production of proinflammatory cytokines such as interferon (IFN)-γ, TNF-α, IL-1 and IL-16, and the neurotoxic cytokines IFN-γ and TNF-α are found in high concentrations in the spinal fluid of HAM/TSP patients [20,56,57]. Disruption of cytokine homeostasis and of the balance between inflammatory and anti-inflammatory responses is thought to lead to loss of tolerance and the development of autoimmunity.
The type-I interferon response is induced by viral infection [58][59][60][61][62]. Culture of HTLV-1-infected cells with IFNs suppresses HTLV-1 expression [58]. HTLV-1 mRNA and protein expression are markedly decreased when infected cells are co-cultured with stromal cells, through type-I IFN responses [62]. Furthermore, it was shown that HTLV-1 infection reduces the phosphorylation of factors in the IFN signaling cascade [59], and that the viral proteins Tax and p30 can regulate cellular transcription factors such as SOCS1 and PU.1, which inhibit the interferon response [48,60,[63][64][65]. Interestingly, the combination of the antiviral drugs zidovudine (AZT) and IFN-α has become the standard treatment of some forms of ATLL and significantly improves survival for patients diagnosed with the chronic or smoldering subtypes, or for a portion of acute cases carrying wild-type p53 [66][67][68][69]. These data suggest that the antiviral effect of both drugs may target an ongoing, low level of viral infection/replication.
Genomic Organization
After entry of the virus into a host cell, the viral RNA is reverse transcribed into double-stranded DNA, which integrates into the host chromosomal DNA and results in lifelong infection. The HTLV-1 integrated genome (provirus) contains the characteristic retroviral structural and enzymatic genes gag, pro, pol, and env [7]. In addition, a region located between env and the 3′ long terminal repeat (LTR) contains four partially overlapping open reading frames (orfs) expressing regulatory proteins [7] that are produced via alternatively spliced mRNAs and by internal initiation codons [70][71][72][73]. Orf-I produces the p12 protein, which is proteolytically cleaved at the amino terminus to generate the p8 protein, while differential splicing of mRNA from orf-II results in production of the p13 and p30 proteins [71,[73][74][75]. The HTLV-1 regulatory genes p12, p8, p30, and p13 are not absolutely required for virus replication or immortalization of human primary T-cells in vitro [76][77][78]. Nevertheless, several studies have shown that primary human T-cells immortalized with molecular clones lacking p12 or p30 grew less efficiently than those immortalized with the wild-type molecular clone and are more dependent on IL-2 [78][79][80].
Interestingly, it was shown that the HTLV-1C subtype does not encode the orf-I gene [81]. In HTLV-1C proviral sequences from 22 Australian isolates, a mutation at position 6840 changes the start codon of orf-I from methionine to threonine [82]. Although the p12/p8 protein would not be expressed, a bicistronic mRNA, rex-orf-I, uses an initiation codon in exon 2 and the acceptor splice site at position 6383 to encode a protein of 152 amino acids, referred to as the Rex-orf-I protein of 17 kDa. In this mRNA, the first coding exon of the Rex protein is joined in frame to p12/p8. The distinct functional motifs implicated in p12 function are conserved in the amino acid sequence of the putative rex-orf-I protein, which could thus possibly compensate for the role of p12 in viral persistence and immune dysregulation.
Orf-III and orf-IV encode the Rex and Tax proteins, which are essential for viral expression and production, respectively, and an antisense mRNA transcribed from the 3′ LTR generates the HTLV-1 basic leucine zipper (HBZ) protein [83][84][85][86]. All regulatory proteins interfere with cellular pathways, but only Tax and Rex are essential for virus expression and production in vitro and likely in vivo. The regulatory proteins p12/p8, p30, and HBZ are dispensable for viral replication in vitro but essential for viral persistence in vivo (see next sections). To date, there is no disease-specific difference in viral strains, and it is unclear how infection results in asymptomatic carriage, cancer, or neurodegenerative or inflammatory disease. It is thought that the viral regulatory proteins play an important role in pathogenesis.
Tax and HBZ-Specific Cytotoxic Response and Viral Burden
The prognosis for ATLL is still bleak and HAM/TSP remains an intractable disease. The two regulatory proteins of HTLV-1, Tax and the HTLV-1 bZIP factor, HBZ, have been shown to have pleiotropic functions connected to viral pathogenesis. Many early studies focused on the viral transcriptional activator, Tax. In addition to being required for induction of the 5′ viral long terminal repeat, and thus the expression of viral sense-strand genes, Tax has been shown to regulate the expression of NF-κB- and CREB-responsive genes, cellular pathways central to immunity [87,88]. Tax has also been shown to have cell-dependent pro- or anti-apoptotic activity and to affect DNA repair [89][90][91][92][93][94]. Therefore, Tax is thought to play a major role in the proliferation of infected cells, as well as in inducing genomic instability, thereby contributing to viral oncogenesis. NF-κB regulates physiological processes such as proliferation, cell death, inflammation, and immunity [87], and has been shown to be constitutively activated in HTLV-1-infected cells. Therefore, it is believed that NF-κB activation is central to HTLV-1-associated inflammation and cancer.
However, while Tax expression is high in early infection, it is often suppressed at later timepoints, likely because Tax is highly immunogenic and renders infected cells vulnerable to cytotoxic T-cells [93,[95][96][97]. Tax expression is suppressed transcriptionally by HBZ and post-transcriptionally by both p13 and p30 [98,99]. p30 has been found to regulate Tax and Rex expression and viral production by sequestering the common Tax/Rex doubly-spliced RNA in the nucleus [100], whereas p13 binds Tax and interferes with its activity [101]. Other mechanisms identified to inhibit Tax expression include mutations in the tax gene [102], methylation or deletion of the 5′ LTR, and the host restriction factor CIITA [103][104][105]. The transcription factor CIITA, which regulates major histocompatibility complex (MHC) class II expression, was shown to bind Tax and reduce its activation of viral transcription [105]. Interestingly, Tax has been shown to increase MHC-II basal expression by interacting with NF-YB [106]. However, more studies are needed to determine the possible interplay between Tax and CIITA on MHC-II expression and its impact on peptide presentation.
Recent ex vivo studies in T-cell clones showed that Tax can be expressed in bursts [107] that can be triggered by cellular stress [108] and can toggle between an on and off state [100,101,[109][110][111]. While Tax is often silenced in the later stages of infection, HBZ, encoded by the minus strand HTLV-1 RNA, is constitutively expressed at very low levels in vivo throughout infection [85]. HBZ has been shown to have a variety of functions that are thought to play a role in viral persistence and pathogenesis [47,90,110]. Interestingly, HBZ has been shown to counter many of the activities of Tax. Recently, the distribution of the HBZ protein in peripheral blood mononuclear cells, specifically its cytoplasmic versus nuclear localization, has been shown to differ in asymptomatic carriers and HAM/TSP patients compared to ATLL patients [112]. It has been shown that not only the HBZ protein but also HBZ mRNA, which is retained in the nucleus, may be involved in HTLV-1-mediated cell proliferation and anti-apoptosis [85,113]. Unlike Tax, the cytotoxic T lymphocyte (CTL) response to HBZ is very low [114,115]. It remains unclear, however, whether the low immunogenicity is intrinsic to HBZ or is linked to its low expression in vivo.
The CTL response is a critical component of the host immune response against viral infection. CTLs directed toward HTLV-1 predominantly recognize the Tax antigen, and anti-Tax CTLs have been suggested to contribute to the control of expansion of infected cells [95,[116][117][118][119][120][121]. Similarly, even if the immunogenicity of HBZ is low [115], correlative analyses suggest that CTL responses to HBZ may contribute to the control of virus burden [114,122,123]. However, all these studies are correlative and performed either on ex vivo tetramer stained cells or stimulated cells apart from their natural micro-environment. Several studies have demonstrated a functional impairment of ex vivo CTL in HAM/TSP linked to the exhaustion associated with chronic immune activation. While direct evidence that CTL controls the HTLV-1 viral burden is lacking in humans, CD8 + T-cell depletion, as a means to demonstrate their importance in non-human primates, has demonstrated that their decrease accelerates primary HTLV-1 infection [124].
HTLV-1 Regulatory Genes
The Pleiotropic orf-I-Encoded p12/p8 Proteins
HTLV-1 orf-I encodes a 99 amino acid p12 protein which can be proteolytically cleaved at the amino terminus to generate the p8 protein [74]. The two protein isoforms localize to different cellular compartments and are associated with infected cell proliferation, as well as the ability of the virus to evade several arms of immunity such as cytotoxic T-cells, NK cells, and monocyte efferocytosis. Orf-I mRNA is expressed early after virus entry and is critical for establishing and maintaining viral infection in vivo [78,[125][126][127].
T-Cell Proliferation
HTLV-1 persists primarily through the proliferation of infected cells. The viral p12 protein localizes to the endoplasmic reticulum (ER) through a noncanonical ER retention signal [75]. In the ER, p12, through its interaction with the calcium binding proteins calnexin and calreticulin, increases cytosolic calcium [128]. In T-lymphocytes, the increased ER calcium release is mediated by inositol triphosphate receptors. In response to the lower level of calcium in the ER, calcium enters through calcium channels in the plasma membrane [129,130]. By depleting ER calcium stores and increasing cytosolic calcium, p12 modulates a variety of processes including T-cell proliferation, viral replication, and viral spread. Early studies demonstrated that overexpression of orf-I influenced T-cell proliferation by activating the nuclear factor of activated T-cells (NFAT), which is dependent on calcium-binding proteins for its dephosphorylation and nuclear import, to increase T-cell proliferation [129][130][131]. During the immune response, NFAT activation is controlled by calcium influx upon T-cell activation. Recognition of specific peptide-bound MHC molecules by the T-cell receptor (TCR) activates a cascade of events that lead to NFAT activation. Upon ligand binding, the protein tyrosine kinases Lck and Fyn phosphorylate the TCRζ and CD3 subunits, allowing ZAP70 docking and activation. ZAP70 then phosphorylates the linker of activation of T-cells (LAT) that, in turn, binds and activates phospholipase C-γ-1 (PLCγ1). This leads to the production of inositol-1,4,5-trisphosphate and the release of ER calcium stores. The increase in intracellular calcium stimulates NFAT dephosphorylation by the Ca2+/calmodulin-dependent phosphatase calcineurin, triggering NFAT's nuclear import. Because p12 can modulate cytosolic calcium levels, it can also activate NFAT independent of TCR signaling [129]. NFAT is known to bind to and activate transcription of the IL-2 promoter, and thus p12 can increase the production of IL-2 in T-cells in a calcium-dependent process [130]. The expression of p12 can also modulate other calcium-regulated proteins such as p300, a transcriptional coactivator [132]. Since p300 is known to play a role in Tax-mediated LTR activation, this suggests that p12 may aid in viral gene expression [133]. In a calcium-dependent manner, p12 may enhance intercellular viral transmission by inducing cellular adhesion through the clustering of Lymphocyte Function Associated Antigen 1 (LFA-1) on the surface of T-cells, which is known to promote cell-to-cell contacts [134].
In addition, early studies demonstrated another function of p12 in the ER. p12 binds to the IL-2R β chain in a region critical for JAK1 and JAK3 recruitment, and the interaction of p12/p8 with the immature IL-2R leads to an increase in Signal Transducer and Activator of Transcription 5 (STAT5) phosphorylation and DNA binding activity and decreases the cellular requirement for IL-2 [79]. Furthermore, the binding of p12 to IL-2R allows T-cells to proliferate not only with a lesser amount of IL-2, but also with suboptimal antigen stimulation, providing a proliferative advantage to HTLV-1-infected cells [79].
MHC-Class I and Cytotoxic T-Cells
The presentation of antigens via the MHC class I (MHC-I) processing pathway plays a critical role in the development of host immunity against pathogens. All nucleated cells express MHC-I on their cell surface. MHC-I molecules present antigen peptides to the TCRs on effector CD8 + T-cells, also called cytotoxic T lymphocytes (CTLs). Because CTLs recognize viral peptide:MHC-I complexes on target cells, many viruses have evolved proteins to interfere with this pathway [135]. The MHC-I molecule is composed of a heavy chain (Hc) that is non-covalently bound to a nonglycosylated β2 microglobulin protein (β2M). The affinity of the MHC-I heavy chain is increased in the presence of peptide and folds to assemble the peptide:MHC-I-Hc: β2M complex in the ER lumen [136]. Early work showed that, prior to association with β2M, the p12 protein binds to newly synthesized MHC-I-Hc, preventing its maturation [137]. These improperly assembled protein complexes are cleared from the ER by degradation [138]. Immature MHC-I-Hc:p12 complexes are ubiquitinated, retro-translocated to the cytoplasm, and degraded by the proteasome, resulting in decreased MHC-I surface expression [137]. Although the viral p8 protein was also able to bind MHC-I, its biological importance has not been investigated. Interestingly, a study comparing MHC-I expression on the surface of primary CD4 + T-cells infected with HTLV-1 mutant viruses (HTLV-1 WT , HTLV-1 G29S , HTLV-1 N26 , HTLV-1 p12KO ) demonstrated that a decrease in surface MHC-I was seen only in cells infected with virus that predominantly expresses the p12 protein HTLV-1 G29S [139]. This same study showed that expression of p12 and p8 (HTLV-1 WT ) was necessary for the protection of infected CD4 + cells from CTL lysis [139]. By preventing the presentation of viral antigens through the MHC-I presentation pathway, p12/p8 may contribute to the expansion of infected T-cell clones by allowing the evasion of the adaptive immune surveillance in vivo.
ICAM-1 and ICAM-2 and NK
NK cells detect and destroy cells expressing low surface MHC-I levels. Thus, reduced MHC-I cell-surface expression enables infected cells to evade CTL killing but makes them targets for NK cells. NK cells directly kill target cells by delivering cytotoxic proteins (perforin and granzyme B) to their targets. When NK cells recognize a target, a lytic immune synapse is established through integrins like LFA-1 on the NK cell, and its ligand intercellular adhesion molecule 1 (ICAM-1) on the target cell [140]. Early studies demonstrated that overexpression of Tax induced surface expression of the adhesion molecules LFA-3 and ICAM-1 [141,142]. Although ICAM-1 levels were high on Tax-expressing HTLV-1 transformed cell lines, it was found that the expression of its ligand LFA-1 was independent of HTLV-1 infection, and was low in three of four ATL cell lines [142]. Later studies found that the surface expression of MHC-I, ICAM-1, and ICAM-2, but not ICAM-3, was significantly reduced in HTLV-1-infected primary CD4 + T-cells, making them resistant to autologous NK cell killing [143]. Pretreatment of the NK cells with IL-2 only marginally increased their ability to kill infected cells. In addition to reduced MHC-I and ICAM-1/2, HTLV-1-infected CD4 + T-cells did not express ligands for NK cell activating receptors NCR and NKG2D, further contributing to the reduced adherence of NK cells to HTLV-1-infected cells [143]. This study went on to show that expression of p12 I in primary CD4 + T-cells was sufficient to cause downregulation of surface ICAM-1 and ICAM-2.
The immunomodulatory drug Pomalidomide (Pom), used as part of the standard treatment for multiple myeloma and recently approved for the treatment of Kaposi Sarcoma [144,145], increased both MHC-I and ICAM-1 on Tax-expressing cells. The treatment of HTLV-1-infected cells with Pom increased surface expression of MHC-I, ICAM-1, and B7-2 and significantly increased the susceptibility of infected cells to NK cell killing. Furthermore, the effect of Pom was dependent on orf-I expression, as the surface expression of both MHC-I and ICAM-1 increased following Pom treatment in primary CD4+ cells infected with wild type HTLV-1 but not primary CD4+ cells infected with a mutant orf-I knockout HTLV-1 virus [146]. Additional studies demonstrated that the thalidomide analogues Pom and lenalidomide (Len) directly affected HTLV-1-infected cell proliferation by reducing the transcription factors involved in cell signaling and survival: IRF4, STAT3, EZH2, Aiolos and Ikaros [146][147][148]. Thus, Pom treatment could potentially reduce the viral burden in HTLV-1-infected individuals by rendering them susceptible to CTL and NK cell killing. Indeed, the importance of NK and CTL cells in controlling infection is underscored by macaque studies in which the depletion of CD8+ cells greatly enhanced the infection of both wild type and orf-I knockout virus [124]. Although Pom treatment of HTLV-1-infected macaques did result in the activation of T-cells, this immune activation was transient and viral activation was also found [149]. While a phase II trial of lenalidomide in the United States of four patients with refractory/relapse ATLL had no clinical activity, Len did have tolerable toxicity and provided significant anti-cancer activity in a phase II clinical trial in Japan of 26 relapsed/recurrent patients (15 acute and four chronic cases of ATL and seven cases of lymphoma) [150,151]. These results have led to the approval of Len for the treatment of refractory/relapse ATLL in Japan [152].

The recognition of peptide-bound major histocompatibility complex II (MHC-II) on antigen-presenting cells via the TCR induces TCR ligation and recruitment of the complex to lipid rafts and the immunological synapse (IS). The p8 protein also localizes to the IS upon TCR ligation, causing a LAT-dependent decrease in phosphorylation of LAT, VAV and PLCγ1, downregulating NFAT activation [74,152]. Thus, p8 is able to impair antigen-specific T-cell responses to immunologic stimuli, a state called T-cell anergy. Induction of T-cell anergy by p8 was shown to result in decreased Tax activity and thus decreased viral replication [152]. However, because p8 is known to be transferred to target cells through cellular conduits, p8-induced T-cell anergy in neighboring cells may increase viral transmission [51,153].
The p8 Protein and Viral Transmission
It is well-documented that HTLV-1 is transmitted via cell-to-cell contact and that cell-free virus is poorly infectious and rarely detected in the blood plasma of HTLV-1-infected individuals [49, [154][155][156]. Three modes of cell-to-cell viral transmission have been identified: the virological synapse, biofilm-like extracellular viral assemblies, and cellular conduits [50][51][52]157]. Virus transmission through the virological synapse depends on the polarization of cytoskeletal and adhesion molecules to the cellular contact [50]. Cellular surface adhesion molecules are also important for viral transmission. The HTLV-1 p8 protein enhances LFA-1 clustering on the cell surface, increasing cell-to-cell contacts and polysynapse formation, which promotes viral transfer [51,134]. The p8 protein also promotes the formation of cellular conduits, thin membranous protrusions used by several different cell types for intercellular communication [51, 158,159]. Immune cells such as macrophages, B cells, NK cells and T-cells are known to use tunneling nanotubes (TNTs) for intercellular communication [160,161]. TNTs are filamentous actin containing structures that function as long cytoplasmic bridges connecting adjacent or distant cells for efficient cell-to-cell communication. The p8 protein was shown to induce TNT formation, increasing their quantity and length, and allowing the transfer of HTLV-1 proteins such as Tax, Gag, Envelope, and p8 itself [51]. Other viruses have been shown to induce TNTs to enhance viral spread and avoid immune recognition [162][163][164][165][166]. When HTLV-1-infected T-cells are treated with Cytarabine, a molecule shown to reduce TNT formation [167], virus transmission is decreased by 30% [168]. Furthermore, using a quantitative flow cytometry method, the p8 protein was shown to be transferred to approximately 5% of recipient T-cells after 5 min of co-culture in a process dependent on actin polymerization [51,168,169].
The p8 Protein and VASP
Interestingly, the vasodilator-stimulated phosphoprotein (VASP), which promotes actin filament elongation, co-immunoprecipitated with p8, and imaging showed partial areas of co-localization of VASP and p8 on the plasma membrane and in membrane protrusions [153]. The knockdown of VASP expression by RNA interference or CRISPR/Cas9 reduced p8 and Gag transfer to target cells, but virus release was unaffected [169]. Since VASP is associated with filamentous actin formation, it likely plays a widespread role in cell adhesion and motility, and contributes to intracellular signaling pathways that regulate integrin-extracellular matrix interactions, as well as processes dependent on cytoskeleton remodeling and cell polarity such as T-cell activation and phagocytosis [170].
The p8 Protein and Monocytes
The role of p12/p8 in monocyte function is unclear. It was shown that HTLV-1 virus knocked out for orf-I protein expression was severely impaired in its ability to replicate in dendritic cells [126]. Furthermore, when mutant viruses were used to infect the monocytic cell line THP-1, we found that the p8-expressing virus (HTLV-1 N26) infected monocytes similarly to wild type virus, with a proviral load of three to four copies per cell and high supernatant p19 levels. In contrast, mutant viruses expressing only p12 (HTLV-1 G29S) or no p12/p8 (HTLV-1 p12KO) had lower proviral loads of less than one copy per cell and no detectable supernatant p19 produced [139]. This is similar to what we found in the rhesus macaque model, where HTLV-1 G29S and HTLV-1 p12KO did not establish persistent infection, while HTLV-1 WT and HTLV-1 N26 did [139]. Orf-I also alters the engulfment of infected cells by monocytes. In vitro experiments in human primary monocytes or THP-1 cells demonstrated that orf-I expression is associated with the inhibition of inflammasome activation, with increased CD47 "don't-eat-me" signal surface expression in virus-infected cells and decreased monocyte engulfment of infected cells [124].
p12/p8 and Vacuolar ATPase
Similar to the E5 protein of the bovine papilloma virus, both p12 and p8 can bind to the proton pump V-ATPase through the 16 kilodalton subunit [171][172][173][174]. V-ATPase localizes to and regulates the acidification of intracellular vesicles such as clathrin coated vesicles, endosomes, lysosomes, Golgi vesicles, endoplasmic reticula, and synaptic vesicles [175]. The binding of the V-ATPase with the HTLV-1 p12 and p8 proteins may potentially interfere with functions such as protein trafficking within the lysosomal/endosomal vesicles or the dissociation of receptor-ligand complexes, but acidification of intermediates between early and late endosomes or endosome carrier vesicles remains essential [176,177]. HTLV-1 is known to infect dendritic cells and monocytes/macrophages where acidification of lysosomes may regulate virus entry or egress [49, 178,179], and monocyte functions such as phagocytosis and efferocytosis. Of note, the knocking out of orf-I expression impairs HTLV-1 persistence in dendritic cells [126] and affects efferocytosis.
The Pleiotropic orf-II Encoded p30 and p13 Proteins
The orf-II gene encodes two proteins: p30, a 241-residue nuclear/nucleolar protein expressed from a doubly-spliced mRNA, and p13, an 87-residue protein coded by a singly-spliced mRNA corresponding to the carboxy-terminal portion of p30 [71,73,75]. HTLV-1 can infect monocytes/macrophages and dendritic cells [49, [180][181][182][183][184][185][186][187], but their role in viral pathogenesis is not fully understood. While the majority of viral DNA in infected individuals is found in CD4+ and CD8+ T-cells, a small percentage is observed in all three monocyte subsets defined by CD14 and CD16 expression [44], suggesting that they might be involved in the pathogenesis of the virus.
p30 Protein Modulates the Interferon Response
Type I interferons (IFN-α and IFN-β) play a critical role in mediating innate and adaptive antiviral immunity. This is accomplished predominantly through their impact on cell activation, cell proliferation, and apoptosis. Activation of the IFN response increases the expression of over 300 genes encoding antiviral and immunoregulatory proteins [186,[188][189][190][191]. IFNs are primarily produced by dendritic cells, fibroblasts, and macrophages. Dendritic cells isolated from HTLV-infected individuals were found to have reduced IFN secretion, suggesting that the virus has strategies to escape the interferon response [186]. Consistent with impaired IFN responses, reduced phosphorylation of members of the IFN cascade (TYK2 and STAT2) was observed in HTLV-1 positive cells [92,[192][193][194][195]. In addition, STAT1 phosphorylation, most likely mediated through a STAT1 negative regulator, was suppressed in ex vivo CD4+ T-cells isolated from HTLV-1-infected patients [64,196].
Early studies demonstrated that the HTLV-1 p30 protein could work as a latency factor by retaining newly transcribed tax/rex mRNA in the nucleus, as well as by repressing LTR-mediated transcription [100,111]. It was later demonstrated that in monocytic cells, p30 affects Toll-like receptor signaling and cytokine release [48,63]. TLRs are an important defense against microbial pathogens. Because TLR activation is crucial for dendritic cell maturation, TLRs link innate and pathogen-specific adaptive responses. TLR3, TLR4, TLR7, TLR8, and TLR9 activation can induce an antiviral response by inducing type I IFNs [197][198][199]. The p30 protein, through direct interaction with the transcription factor PU.1, was shown to reduce cell surface expression of TLR4 [63]. In addition, it was further shown that p30 decreases PU.1 recruitment to IFN-responsive gene promoters following stimulation by either lipopolysaccharide (LPS) or poly(IC), which respectively activate the toll-like receptors TLR4 and TLR3 [48]. Following LPS stimulation of monocytes/macrophages, reduced TLR4 expression resulted in the reduced release of MCP1, TNF-α, and IL-8 (proinflammatory cytokines), and an increased release of the anti-inflammatory cytokine, IL-10 [63]. Consistent with p30 affecting cytokine release, high levels of IL-10 secretion from HTLV-1-infected cell lines and in the plasma of patients with ATLL have been documented [200,201]. The inhibitory effect of p30 on the IFN innate response likely favors viral persistence in immune competent hosts.
The p13 Protein
The viral protein p13 is produced from orf-II by a singly-spliced mRNA corresponding to the carboxy-terminal portion of p30 [71,73,75]. Using confocal microscopy and co-localization analyses with cellular compartment markers, electron microscopy, and biochemical fractionation, p13 was determined to localize predominantly to the inner mitochondrial membrane [202][203][204]. Several studies have shown that p13 alters mitochondrial function by increasing potassium influx, which in turn activates the electron transport chain favoring reactive oxygen species (ROS) production [203,205,206]. ROS are powerful second messengers that regulate multiple signal transduction pathways. Depending on their levels, ROS may favor cell proliferation, neoplastic transformation, or cell death. Observations made in isolated mitochondria found that p13 increased ROS production in several cell models, suggesting that p13 might contribute to an expansion of the pool of infected T-cells, but could possibly also trigger the apoptosis of transformed cells [207].
The effect of p13 on mitochondrial function could also affect the host immune response to the virus. Several studies have revealed important roles for mitochondria in immune responses [208]. By inducing cell death through mitochondrial pathways, p13 may trigger inflammatory responses in the host through the cyclic GMP-AMP synthase (cGAS)-stimulator of interferon genes (STING) signaling pathway [209]. Mitochondrial size and shape is controlled by the balance between mitochondrial fusion and fission [210]. This dynamic is connected to immune cell differentiation and activation. Naïve CD4+ T-cell activation induces a synchronized program of mitochondrial biogenesis and remodeling [211]. HTLV-1 infects myeloid cells altering the host innate immune responses [43,44,184,187,212,213]. It would be interesting to investigate the role p13 plays in affecting monocyte/macrophage and dendritic cell function.

Unlike Tax, Rex and HBZ, the HTLV-1 regulatory genes p12, p8, p30, and p13 are not absolutely required for virus replication or immortalization of human primary T-cells in vitro [76][77][78]. The viral regulatory proteins are known to be expressed in infected individuals, as antibodies and cytotoxic T-lymphocytes directed against p12, p30, and p13 have been detected in patients [214][215][216]. The importance of the regulatory proteins to viral infection, dissemination, persistence, and clinical status has also been suggested in sequence analysis of the orf-I and orf-II regions in HTLV-1-infected individuals [74,139,206,217,218].
Several studies demonstrated that primary human T-cells immortalized with molecular clones lacking p12 or p30 grew less efficiently than the wild type molecular clone and are more dependent on IL-2 [78][79][80]. Early studies in the rabbit model suggested that p12, p13, and p30 might be important for viral infectivity [219][220][221]; however, it was recognized that these clones also have mutations in HBZ [222]. Subsequent studies re-investigating the role of p12 and p30 in molecular clones not affecting HBZ demonstrated that while HBZ, p12, and p30 were not essential for persistent infection in rabbits, these viral genes were critical for persistence in non-human primates [126,139]. The expression of orf-I is essential for infectivity in the macaque model and the requirement of orf-I for viral infectivity in macaques parallels HTLV-1 infectivity of dendritic cells in vitro [126]. No reversion of the single point mutation was observed in macaques, suggesting that virus-infected cells are eliminated very early following infection, precluding a sufficient round of viral replication to allow for the selection of virus revertant. Our further studies using HTLV-1 orf-I mutant viruses support the importance of p12/p8 expression and CD8 + cells in viral persistence [139]. In a humanized mouse model, we found that infection with wild type HTLV-1 virus resulted in polyclonal expansion of CD4 + CD25 + T-cells. However, when mice were infected with virus ablated for orf-I expression, HTLV-1 p12KO infection only occurred after reversion of HTLV-1 p12KO back to wild type [127]. Similarly, using HTLV-2 in the rabbit model, the authors found that sequences in HTLV-2 corresponding to the p12 region in HTLV-1 are not necessary for infection, but confer increased replicative capacity in vivo [223]. In addition to orf-I, species specific requirements of orf-II and hbz for viral infectivity [126] suggest that non-human primates are the species of choice to test preventive vaccines for HTLV-1 that engage cellular immunity.
Role of NK, CD8, and Monocytes in HTLV-1 Infection
Increases in the HTLV-1 proviral load and persistent infection are likely linked with the virus's ability to evade the host immune response. As stated above, p8 and p12 are dispensable for viral replication in vitro [76,77,126,224], but are essential for viral infectivity/persistence in vivo [126,139]. The p12 and p8 proteins counteract NK cell [143] and CD8+ cytotoxic T-cell (CTL) [139] responses in vitro and augment T-cell proliferation [79,225] and virus transmission [51, 152,168]. The importance of orf-I expression for counteracting NK and CTL responses was validated in macaques by the depletion of either CD8 and NK cells (CD8/NK) or CD8 cells alone prior to virus exposure. HTLV-1 orf-I knockout virus is not infectious in macaques, but following the depletion of CD8/NK cells, viral infectivity was restored and all animals were persistently infected with detectable mutated viral DNA in tissues [124]. Similarly, CD8/NK depletion accelerated virus infection after exposure to HTLV-1 wild type. While CD8 depletion alone accelerated the infectivity of HTLV-1 wild type, CD8 depletion, without the concomitant removal of NK cells, only incompletely restored the infectivity of orf-I knockout HTLV-1 [124]. These data suggest that the innate function of NK cells is central for the immune control of HTLV-1 infectivity. Indeed, the frequency and function of NK cells is altered in HTLV-1 infection [226]. The frequency of spontaneous proliferation of NK cells correlates with proviral load in infected individuals [227]. Interestingly, NK cells may also play a role in chronic infection, as passive transfer of amplified NK cells to an HTLV-1 patient with smoldering ATL resulted in complete remission [228].
Monocyte/macrophage depletion by clodronate prior to viral exposure to HTLV-1 wild type was associated with a faster seroconversion in all macaques, but antibody levels were not sustained, suggesting a possible role of monocytes in persistent infection [124]. The infectivity of orf-I knockout HTLV-1 was not restored by clodronate treatment prior to virus exposure. Interestingly, orf-I expression was associated with defective efferocytosis in part linked to its upregulation of CD47, the "don't-eat-me" signal on infected cells [124]. These findings raise the possibility that orf-I expression by transiently protecting engulfed cells from degradation may facilitate the spread of virus by migratory efferocytosis to tissues. In addition, defective efferocytosis could create a durable and vicious inflammatory response that is unable to clear the virus by inducing further inflammation [229] and regulatory T-cell differentiation via the production of IL-10 and TGF-β [230]. Indeed, high levels of IL-10 and TGF-β and increased regulatory T-cell counts are hallmarks of HTLV-1 infection and may contribute to viral pathogenesis [46]. This study suggests that monocytes play a role early in infection by clearing infected cells. Alternatively, monocytes may provide an early viral reservoir important for maintaining viral persistence. Experiments which simultaneously deplete NK cells, CTLs, and monocytes in vivo are necessary to determine the role of monocytes in the early stages of infection.
Humoral Immunity
While the function of the viral regulatory proteins in modulating the T-cell response is actively being studied, little is known about the role these proteins play in modulating the HTLV-1 humoral response. In a study looking at a cohort of HTLV-1 exposed transfusion recipients, it was noted that antibodies to core, envelope and tax protein appeared within 30-60 days following primary HTLV-1 infection [231]. In most cases, the serum antibody titers correlate with the proviral load, but it is not known if high antibody titers contribute to protection or controlling the viral load [232,233].
Many viral vaccines are directed toward blocking virus entry into target cells. The HTLV-1 envelope (Env) protein is necessary for infection, highly immunogenic and the primary target of neutralizing antibodies [234]. Results from studies using passive immunization in animal models indicate that neutralizing antibodies could be protective. The administration of purified anti-HTLV-1 immunoglobulin from the plasma of seropositive individuals 24 h before HTLV-1 challenge protected cynomolgus monkeys from infection [235]. In addition, anti-HTLV-1 antibodies prevented viral transmission in NOD-SCID/γc-null mice [236] and rabbit models [237]. Furthermore, at birth, infants born to HTLV-1 positive mothers have detectable anti-HTLV-1 antibodies which decrease exponentially until most babies become seronegative by about nine months of age [238]. Interestingly, the duration of breastfeeding is an important risk factor associated with mother-to-child transmission, where a longer duration of breastfeeding is associated with an increased risk of viral transmission [238]. However, whether this is due to neutralizing antibodies or to increased repeated viral exposure remains unclear. In a study of four litters from an HTLV-1-infected rabbit, neonates that were given anti-HTLV-1 hyperimmunoglobulin had a decreased risk of infection compared to untreated litters [239]. In rats, the infection of offspring by HTLV-1 positive mothers occurred at a higher rate and correlated with the proviral load; however, in this same model, passive administration of neutralizing antibodies did not prevent oral transmission [240]. Another complicating factor to consider is that although HTLV-1 Env is required for infection, viral cell-to-cell transmission through the VS, biofilms and cellular conduits is thought to shield the virus from antibodies [241].
As discussed above, NK cells play an important role in controlling viral persistence. Thus, eliciting anti-HTLV-1 antibodies may be important for clearance by antibody-dependent cellular cytotoxicity (ADCC). An early study examining ADCC and NK cell activity from newborns, infants and adults suggests that these activities can protect against mother-to-child transmission [242]. A more recent study found that a neutralizing anti-Env antibody, LAT-27, induced ADCC, eliminated Tax-positive cells, and can contribute to the control of infection [243]. A second study looking at NK cell activity in healthy carriers and HAM/TSP patients found that HAM/TSP patients had decreased frequencies of NK cells expressing CD16, the main receptor in the Fc-mediated antibody effector function inducing ADCC. This suggests that NK cells may prevent progression to HAM/TSP [226]. These results are consistent with the findings that ADCC activity was significantly reduced in HAM/TSP patients compared to asymptomatic carriers, due in part to a reduction in ADCC effector activity but not to a lack of anti-HTLV-1 ADCC antibodies [244].
Conclusions
HTLV-1 counteracts host NK and CTL activity and usurps monocyte and dendritic cell immunity [43]. The continuous engagement of immune cells that fail to eradicate infection likely underlies the damaging chronic inflammation that ensues in a portion of HTLV-1-infected individuals (Figure 1). HTLV-1 infection has been reported to significantly alter dendritic cell function, increase the frequency of intermediate and non-classical (pro-inflammatory) monocytes, and decrease the frequency of classical monocytes that mediate the clearance of apoptotic cells and maintain tissue homeostasis [44]. The continuous but ineffective attempts of the immune system to clear the virus may result in exhaustion of both NK and CD8 + cells, as observed in infected individuals with high virus burdens [46,116,226,243,[245][246][247][248].
Although an HTLV-1 preventative vaccine is feasible, no candidate vaccine has ever proceeded to clinical trial. Vaccine development efforts have used recombinant vaccinia virus vectors, protein immunization, DNA vaccine vectors, and peptide vaccines [249][250][251][252][253][254][255][256][257][258][259][260][261]. Collectively, these data suggest that an immune-based intervention based on vaccination alone is unlikely to be effective in the context of chronic HTLV-1 infection. With the current knowledge of HTLV-1 regulatory proteins, investigators should now consider targeting these pathways. For example, we recently showed in the rhesus macaque model that treatment of infected animals with the immunomodulator pomalidomide to target orf-I-mediated immune dysregulation caused reactivation of the virus, allowing its recognition by the host immune system [149]. Unfortunately, this response was short-lived, indicating that pomalidomide may not work as a single agent but could rather be used in combination therapy or in combination with vaccines. In addition, when HTLV-1-infected cells were treated in vitro with cytarabine, a therapeutic used in relapse/refractory AML [262], there was a reduction in tunneling nanotubes induced by the viral p8 protein, reduced virus production, and reduced virus transmission [167]. Integrase inhibitors are another potential avenue to explore. Studies have shown that the integrase strand transfer inhibitors (INSTIs) raltegravir, bictegravir, and cabotegravir (FDA approved treatments for HIV-1) inhibited cell-free and cell-to-cell transmission of HTLV-1 in vitro [263][264][265][266]. Thus, INSTIs should be considered in the treatment of HTLV-1, particularly for pre-exposure prophylaxis and in the prevention of mother-to-child transmission.
Figure 1. HTLV-1 transmission occurs primarily through cell-to-cell contact. Three modes of transmission have been demonstrated: virological synapse, cellular conduits called tunneling nanotubes, and biofilm matrices. HTLV-1 viral proteins enable the evasion of host immunity and contribute to alterations in the innate and adaptive immune responses. Altered responses to chronic HTLV-1 infection lead to inflammation and T-cell exhaustion, and allow clonal expansion of infected cells. While the majority of individuals remain asymptomatic, a subset of infected individuals will progress to diseases such as Adult T-cell Leukemia/Lymphoma, HTLV-1-associated myelopathy/tropical spastic paraparesis, HTLV-1-associated uveitis, bronchiectasis, rheumatoid arthritis, and infective dermatitis.
In addition, the data suggest that a preventive HTLV-1 vaccine should either prevent infection upfront or eliminate the virus very early on to avoid the establishment of a reservoir that host immunity is unable to clear. Given the HTLV-1 modes of transmission, virus vulnerability to neutralizing antibodies is uncertain. The engagement of less canonical host responses such as ADCC and efferocytosis, based on the ability of NK and monocytes to recognize and effectively dispose of infected cells, may be necessary for an HTLV-1 vaccine to prevent the establishment of infection.
Charmoniumlike resonant explanation on the newly observed $X(3960)$
Stimulated by the observation of the $X(3960)$ newly reported by the LHCb collaboration, we adopt the one-boson-exchange model and consider the $S-D$ wave mixing effects to study the $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ interactions with $I(J^{PC})=0(0^{++})$. After producing the phase shifts of this coupled channel system, our results show that there can exist a charmoniumlike resonance, whose obtained mass and width can both match well with the experimental data for the newly observed $X(3960)$. We also find that the $D^*\bar{D}^*$ system plays an important role in the formation of the newly observed $X(3960)$ as a charmoniumlike resonance, and the $D_s^*\bar{D}_s^*$ system makes a significant contribution to the resonant width. As a byproduct, we perform a coupled channel analysis of the $D^*\bar{D}^*/D_s\bar{D}_s^*/D_s^*\bar{D}_s^*$ interactions with $I(J^{PC})=0(1^{+-})$; our results predict the existence of the $D_s\bar{D}_s^*$ molecule with $1^{+-}$ and the $D_s^*\bar{D}_s^*$ molecule with $1^{+-}$. Their widths are around several and several to several tens of MeV, respectively. Experimental searches for these two possible charmoniumlike molecular candidates can be helpful to verify our proposal.
I. INTRODUCTION
Recently, in a talk given at CERN, the LHCb collaboration reported the observation of three new states in B decay processes [1]. In this talk, apart from the two charged states, i.e., the first pentaquark with s quark content observed in the $J/\psi\Lambda$ invariant mass spectrum of the $B^- \to J/\psi\Lambda\bar{p}$ process and the $T_{c\bar{s}}^{a++(0)}(2900)$ seen in the $D_s^+\pi^{+(-)}$ invariant mass spectrum of the $B^{0(+)} \to D^{0(-)} D_s^+\pi^{+(-)}$ process, one more neutral state, namely the X(3960), was also observed by the LHCb Collaboration in the $D_s^+ D_s^-$ invariant mass spectrum of the B decay process $B^+ \to D_s^+ D_s^- K^+$ [1]. Usually, when a new neutral state is observed in an invariant mass spectrum composed of a pair of heavy and antiheavy mesons, our first consideration is often whether this new state can be treated as a conventional charmonium. Here, since the quantum number of the X(3960) is reported as $0^{++}$, our first idea is to ask whether it is a new $\chi_{c0}$ state. Checking the theoretical results of the potential model [2][3][4], the $\chi_{c0}(2P)$ state has been identified with the $\chi_{c0}(3915)$ [5], and the position of the $\chi_{c0}(3P)$ is around 4.2 GeV. For the $\chi_{c0}(3915)$, although its mass is close to that of the X(3960), whose mass and width are measured as $M = 3955 \pm 6 \pm 22$ MeV and $\Gamma = 48 \pm 17 \pm 10$ MeV respectively, its mass lies below the $D_s\bar{D}_s$ threshold of about 3938 MeV, so it is puzzling that it could be observed in the $D_s\bar{D}_s$ invariant mass spectrum. For the $\chi_{c0}(3P)$, its predicted mass is too far away from the X(3960); thus, identifying the X(3960) as the $\chi_{c0}(3P)$ may not be appropriate.
Another reason that X(3960) may not be a good candidate of charmonia is that its decay property is a little different.
In the talk [1], the LHCb Collaboration compared its decay widths into $D^+D^-$ and $D_s^+D_s^-$, and the measurement gave [1]
$$\frac{\Gamma(X(3960) \to D^+D^-)}{\Gamma(X(3960) \to D_s^+D_s^-)} = 0.29 \pm 0.09 \pm 0.10 \pm 0.08, \qquad (1)$$
which means it is easier for the X(3960) to decay into $D_s^+D_s^-$ than into $D^+D^-$. Since it is usually harder to excite an $s\bar{s}$ pair from the vacuum than a $u\bar{u}$ ($d\bar{d}$) pair, conventional charmonia predominantly decay into a pair of D mesons, which implies the exotic nature of this new state X(3960) [1]. Thus, the next thing to do is naturally to see if this X(3960) can really be assigned as an exotic state, in which a state composed of four valence quarks may be the easiest generalization. Since the position of the X(3960) is close to the $D_s\bar{D}_s$ threshold, the consideration that it is related to some molecular state arises naturally. Actually, studies on the $D_s\bar{D}_s$ molecular states had already been carried out before the observation of the X(3960) [6][7][8][9][10][11], and it turns out that although a $0^{++}$ bound state that couples strongly to $D_s^+D_s^-$ and weakly to $D^+D^-$ is found just below the $D_s^+D_s^-$ threshold, this bound state disappears after carrying out a dynamical study of $D\bar{D}$ and $D_s^+D_s^-$ in coupled channels. Thus, recently, Ref. [12] reanalyzed this situation and found that if the strength of the $D\bar{D} \to D_s^+D_s^-$ transition is slightly reduced, the missing state reappears and behaves in the $D_s^+D_s^-$ invariant mass spectrum similarly to the experimental observation [1,12].
The molecular state interpretation of the X(3960) is also supported by Refs. [13,14]. In addition, apart from the bound state interpretation, Ref. [14] pointed out that a virtual state explanation is also valid. Then, Ref. [15] used the effective Lagrangian approach to calculate the production rate of the X(3960) in B decays utilizing triangle diagrams, and the results showed that both the bound and virtual state interpretations can match the relevant experimental data.
Thus, for the X(3960) appearing in the $D_s^+D_s^-$ invariant mass spectrum, Refs. [6][7][8][9][10][11][12][13][14][15] explain it as an effect caused by a molecular state located below the $D_s^+D_s^-$ threshold. However, considering the fact that its measured mass is above the $D_s^+D_s^-$ threshold [1], the resonant state explanation, in our view, is also possible, and this work studies this possibility.
In general, resonances can be divided into two types, i.e., shape-type resonances and Feshbach-type resonances, and the generation of these two types of resonances is controlled by the potential barriers [16]. In addition, we want to emphasize here that the coupled channel effect plays a very important role in producing the Feshbach-type resonances, since the mass gaps between the relevant channels give additional contributions to the potential barriers.
Thus, in this work, considering the measured mass and quantum numbers of the X(3960), we perform an analysis that includes the coupled channel effect and the S − D wave mixing effect to see if the newly observed X(3960) can be interpreted as a resonance; the included channels are $D_s\bar{D}_s$, $D^*\bar{D}^*$, and $D_s^*\bar{D}_s^*$.
II. INTERACTIONS
In the OBE model, the relevant effective potentials can be deduced as follows. Firstly, we write down the scattering amplitude by adopting the effective Lagrangian approach. Then, one can derive the effective potentials in momentum space from the approximation relation
$$\mathcal{V}^{AB\to CD}(\vec{q}\,) = -\frac{\mathcal{M}(AB \to CD)}{\sqrt{\prod_i 2M_i \prod_f 2M_f}},$$
where $\mathcal{M}(AB \to CD)$ denotes the scattering amplitude for the $AB \to CD$ process in the t-channel, and $M_i$ and $M_f$ are the masses of the initial and final states, respectively. Then we can finally obtain the effective potentials in coordinate space, $\mathcal{V}(r)$, by performing the Fourier transformation, i.e.,
$$\mathcal{V}(r) = \int \frac{d^3\vec{q}}{(2\pi)^3}\, e^{i\vec{q}\cdot\vec{r}}\, \mathcal{V}(\vec{q}\,)\, \mathcal{F}^2(q^2, m_E^2).$$
Here, $\mathcal{F}(q^2, m_E^2)$ is the form factor; it is introduced at every interaction vertex to compensate for the off-shell effect of the exchanged meson. In this work, we take the monopole-type form factor $\mathcal{F}(q^2, m_E^2) = (\Lambda^2 - m_E^2)/(\Lambda^2 - q^2)$, where $\Lambda$, $m_E$ and $q$ are the cutoff, the mass and the four-momentum of the exchanged meson, respectively.
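As a minimal illustration of what such a Fourier transform yields in practice: for a Yukawa-type propagator dressed with the monopole form factor at both vertices, the coordinate-space result is the standard regularized Yukawa function commonly quoted in one-boson-exchange studies. The sketch below only evaluates that closed form; the function name and all numerical values are placeholders rather than quantities taken from this work.

```python
import numpy as np

def regularized_yukawa(r, m, Lam):
    """Closed form of the 3D Fourier transform of F^2 / (q^2 + m^2), where
    F = (Lam^2 - m^2) / (Lam^2 + q^2) and q is the exchanged three-momentum.
    Masses and the cutoff are in GeV, r is in GeV^-1."""
    return (np.exp(-m * r) - np.exp(-Lam * r)) / (4.0 * np.pi * r) \
        - (Lam**2 - m**2) * np.exp(-Lam * r) / (8.0 * np.pi * Lam)

# Illustrative numbers only: a 0.6 GeV exchanged mass and a 1.0 GeV cutoff
r = np.linspace(0.5, 10.0, 5)    # r in GeV^-1 (1 fm is about 5.07 GeV^-1)
print(regularized_yukawa(r, m=0.6, Lam=1.0))
```

The second exponential term and the contact-like piece proportional to $(\Lambda^2 - m_E^2)$ are what the form factor adds to the plain Yukawa behavior, which is why the resulting potential stays finite at short distances.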
Based on the heavy quark symmetry and chiral symmetry [19][20][21][22][23], the relevant effective Lagrangians are constructed in terms of super-fields, which are expressed as combinations of the S-wave charmed (anti-charmed) mesons with $J^P = 0^-$ and $1^-$, since these states belong to the same doublet in the heavy quark limit. The conjugate field reads as $\bar{H} = \gamma^0 H^\dagger \gamma^0$. $\mathcal{P}_Q^{(*)}$ stands for the pseudoscalar (vector) meson fields $\mathcal{P}_Q^{(*)} = (D^{(*)+}, D^{(*)0}, D_s^{(*)+})^T$, and $v^\mu$ is the four-velocity, taken in the nonrelativistic approximation. The vector and axial currents are built from $\mathbb{P}$ and $\mathbb{V}$, the light pseudoscalar and light vector meson matrices, respectively. Expanding the effective Lagrangians in Eq. (4), we obtain the explicit coupling terms. In these Lagrangians, the σ meson coupling $g_s = 2.82$ is estimated from the quark model [24,25]. For the π-exchange coupling, $g = 0.59$ is extracted from the decay width of $D^* \to D\pi$ [22]. Using vector meson dominance [26], β is fixed as β = 0.9, and λ = 0.56 GeV$^{-1}$. In the isoscalar $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ coupled channel analysis, the OBE effective potentials can be expressed in terms of a set of subpotentials built from several useful functions. In the effective potentials (10)-(15), $\mathcal{D}_{ij}$, $\mathcal{E}_{ij}$, and $\mathcal{F}_{ij}$ stand for the operators for the spin-spin interactions and the tensor forces, respectively. In the numerical calculations, these operators $\mathcal{O}$ are replaced by their nonzero matrix elements $\langle f|\mathcal{O}|i\rangle$, where $|i\rangle$ and $\langle f|$ stand for the spin-orbit wave functions of the initial and final states, respectively. After preparing the OBE effective potentials, we produce the scattering energy $\sqrt{s}$ dependence of the phase shifts $\delta(\sqrt{s})$ for the investigated coupled channel systems by varying the cutoff in the range from 0.80 GeV to 3.00 GeV. Here, the cutoff value Λ in our OBE effective potentials is the only free parameter; it is related to the typical hadronic scale or the intrinsic size of hadrons. According to experience with the nucleon-nucleon interactions [17,18], reasonable values of the cutoff are taken around 1.00 GeV. These values are often adopted in the study of the interactions between heavy hadrons.
With these obtained phase shifts, we can search for possible resonances: a resonance generally emerges when the phase shift satisfies $\delta(\sqrt{s_0}) = (2n + 1)\pi/2$ with $n = 0, 1, 2, \ldots$. Here, $\sqrt{s_0}$ corresponds to the position of the obtained resonance, and its decay width can be estimated by $\Gamma = 2\left[d\delta(\sqrt{s})/d\sqrt{s}\right]^{-1}\big|_{\sqrt{s}=\sqrt{s_0}}$. Meanwhile, we also present the scattering energy $\sqrt{s}$ dependence of the scattering cross section $\sigma(\sqrt{s})$. By these efforts, we can further check the resonant shapes. In Figure 1, we present the cutoff dependence of the obtained resonant mass for the isoscalar $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ coupled systems with $J^{PC} = 0^{++}$. Here, we find that the resonance emerges at the cutoff Λ = 1.55 GeV and is located below the $D^*\bar{D}^*$ threshold. With increasing cutoff value, the OBE effective potentials become more strongly attractive; consequently, the resonance binds deeper and deeper. In particular, when the cutoff increases to 1.65 GeV, the mass of this obtained resonance happens to overlap with that of the newly observed X(3960) within the experimental uncertainty. In addition, we identify a resonant width Γ of around 10 MeV at Λ = 1.55 GeV. As the cutoff value increases, the decay width becomes much larger. In the cutoff region from 1.57 GeV to 1.68 GeV, our result for the decay width varies from 21 MeV to 70 MeV, which is consistent with the experimental data for the newly observed X(3960) within the experimental uncertainties.
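A minimal numerical sketch of this criterion, assuming the phase shift has been tabulated on a grid of scattering energies: the resonance position is taken where δ crosses (2n+1)π/2, and the width is estimated from the slope there via Γ ≈ 2 (dδ/d√s)^(-1). The toy Breit-Wigner input and all numbers below are illustrative only, not results of this analysis.

```python
import numpy as np

def locate_resonance(sqrt_s, delta, n=0):
    """Locate the energy where delta(sqrt_s) crosses (2n+1)*pi/2 and estimate
    the width from the slope there, Gamma ~ 2 / (d delta / d sqrt_s)."""
    target = (2 * n + 1) * np.pi / 2
    above = delta >= target
    if not above.any() or above[0]:
        return None                        # no crossing inside the grid
    i = int(np.argmax(above))              # first index at/above the target
    # linear interpolation for the crossing position
    f = (target - delta[i - 1]) / (delta[i] - delta[i - 1])
    s0 = sqrt_s[i - 1] + f * (sqrt_s[i] - sqrt_s[i - 1])
    slope = np.gradient(delta, sqrt_s)[i]
    return s0, 2.0 / slope                 # (resonance position, estimated width)

# Toy Breit-Wigner phase shift centered at 3.97 GeV with Gamma = 0.04 GeV
E = np.linspace(3.90, 4.05, 400)
delta = np.arctan2(0.02, 3.97 - E)         # rises from ~0 through pi/2 to ~pi
print(locate_resonance(E, delta))          # approximately (3.97, 0.04)
```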
The most important thing is that we can reproduce the mass and width of the newly observed X(3960) simultaneously in the cutoff region Λ ≥ 1.65 GeV. In Figure 2, we present the scattering energy $\sqrt{s}$ dependence of the phase shifts for all the investigated channels of the isoscalar $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ coupled systems with $J^{PC} = 0^{++}$, together with the scattering cross section for the $D_s\bar{D}_s$ channel, at the cutoff Λ = 1.65 GeV. Here, we can identify a resonance at the position $\sqrt{s} = 3.97$ GeV as the phase shift of the $D_s\bar{D}_s(^1S_0)$ channel crosses π/2. We find a maximum of the cross section at the resonance energy, and the width is 38.13 MeV. To summarize, since the cutoff is close to the reasonable value [17,18], we can conclude that the newly observed X(3960) can be explained as an isoscalar charmoniumlike resonance with $J^P = 0^{++}$.
In this work, we further explore the roles of the $D^*\bar{D}^*$ and $D_s^*\bar{D}_s^*$ channels in generating the X(3960) resonance. We produce the phase shifts for the $D_s\bar{D}_s/D^*\bar{D}^*$ coupled systems with $J^P = 0^{++}$ and the $D_s\bar{D}_s/D_s^*\bar{D}_s^*$ coupled systems with $J^P = 0^{++}$, respectively. Our results indicate that resonant behavior can exist for these two coupled channel systems in the cutoff region 1.00 < Λ < 3.00 GeV.
In Figure 3, we present the dependence of the obtained resonant mass on the cutoff value Λ for the $D_s\bar{D}_s/D^*\bar{D}^*$ coupled systems with $J^P = 0^{++}$ and the $D_s\bar{D}_s/D_s^*\bar{D}_s^*$ coupled systems with $J^P = 0^{++}$, respectively. Here, we can see that for the $D_s\bar{D}_s/D^*\bar{D}^*$ coupled systems with $J^P = 0^{++}$, the resonance appears for cutoffs Λ larger than 1.60 GeV. In particular, when the cutoff increases to 1.85 GeV, the obtained resonant mass is 3957.03 MeV, which is close to the central mass of the X(3960).

In this section, we extend our study to the isoscalar $D^*\bar{D}^*/D_s\bar{D}_s^*/D_s^*\bar{D}_s^*$ interactions with $J^{PC} = 1^{+-}$ by using the same model. After considering the S − D wave mixing effects, the corresponding wave functions can be expanded accordingly, and their OBE effective potentials and the corresponding subpotentials can be derived in the same way as in Sec. II. In Table I, we summarize the corresponding matrix elements $\langle ^{2s'+1}L'_{J'}|\mathcal{O}|^{2s+1}L_J\rangle$ for the spin-spin interaction and tensor force operators in Eqs. (22)-(24).

TABLE I: Nonzero matrix elements $\langle ^{2s'+1}L'_{J'}|\mathcal{O}|^{2s+1}L_J\rangle$ in various channels for the spin-spin interaction and tensor force operators in Eqs. (22)-(24).
After that, we produce the phase shifts for the isoscalar $D^*\bar{D}^*/D_s\bar{D}_s^*/D_s^*\bar{D}_s^*$ coupled systems with $J^{PC} = 1^{+-}$. As shown in Figure 4, the width of the $R_2$ resonance is larger than that of the $R_1$ resonance. In particular, in the cutoff region 1.70 ≤ Λ ≤ 2.00 GeV, the width of the $R_2$ resonance can reach around 20 MeV, while the decay width of the $R_1$ resonance remains less than 3.00 MeV.
Here, we also find that the cutoff values are close to those in the case of the newly observed X(3960) as an isoscalar charmoniumlike resonance with $J^P = 0^{++}$, as shown in Figure 1. Therefore, if the X(3960) can be assigned as a charmoniumlike resonance, $R_1$ and $R_2$ can also be possible charmoniumlike resonant candidates. Their masses are very close to the $D_s\bar{D}_s^*$ and $D_s^*\bar{D}_s^*$ thresholds, respectively. These near-threshold properties remind us of the predictions of our previous paper [27]: when we systematically studied the interactions between a charmed (charmed-strange) meson and an anti-charmed (anti-charmed-strange) meson by using the OBE model, we found that the $D_s\bar{D}_s^*$ state with $J^{PC} = 1^{+-}$ and the $D_s^*\bar{D}_s^*$ state with $J^{PC} = 1^{+-}$ can be good molecular candidates.
V. SUMMARY
Very recently, the LHCb Collaboration observed a neutral state, the X(3960), in the $D_s^+D_s^-$ invariant mass spectrum of the $B^+ \to D_s^+D_s^-K^+$ process [1]. According to its mass and decay properties, the X(3960) is very likely to be a charmoniumlike exotic state. Up to now, the inner structure of the newly observed X(3960) is still open to discussion. In this work, we propose the X(3960) as an isoscalar $\mathcal{D}\bar{\mathcal{D}}$-type charmoniumlike resonance with $J^P = 0^{++}$, where $\mathcal{D}$ stands for the S-wave charmed and charmed-strange mesons.
In order to examine our proposal, we analyze the phase shifts for the relevant coupled channel systems, as summarized below. The experimental progress, especially the improvement of experimental techniques and the accumulation of experimental data, will provide us a good chance to explore the underlying mechanism and inner structures of the new exotic states. We look forward to further experiments to verify our proposal.
This paper is organized as follows. After this introduction, we deduce the coupled $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ interactions with $I(J^{PC}) = 0(0^{++})$ by using the OBE model in Sec. II. In Sec. III, we present the corresponding numerical results by producing the phase shifts, and in Sec. IV we predict possible charmoniumlike structures from the isoscalar $D^*\bar{D}^*/D_s\bar{D}_s^*/D_s^*\bar{D}_s^*$ interactions with $J^{PC} = 1^{+-}$. The paper ends with a summary in Sec. V.
The coupling λ = 0.56 GeV$^{-1}$ is determined through a comparison of the form factor between the theoretical result and lattice QCD. The flavor wave functions $|I = 0, I_3 = 0\rangle$ for the isoscalar $D_s^{(*)}\bar{D}_s^{(*)}$ and $D^*\bar{D}^*$ systems are constructed as $|D_s^{(*)+}D_s^{(*)-}\rangle$ and $(|D^{*0}\bar{D}^{*0}\rangle + |D^{*+}D^{*-}\rangle)/\sqrt{2}$, respectively. When we consider the S − D wave mixing effects, the spin-orbit wave functions for the $D_{(s)}^*\bar{D}_{(s)}^*$ systems with $0^{++}$ are $|^1S_0\rangle$ and $|^5D_0\rangle$.

III. THE X(3960) AS THE $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ COUPLED RESONANCE WITH $J^{PC} = 0^{++}$
FIG. 1: The cutoff Λ dependence of the obtained resonant mass M for the isoscalar $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ coupled systems with $J^{PC} = 0^{++}$. Here, the shaded area corresponds to the reported experimental mass of the newly observed X(3960), including the experimental uncertainty.
FIG. 2: The scattering energy $\sqrt{s}$ dependence of the phase shifts for all the investigated channels of the isoscalar $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ coupled systems with $J^{PC} = 0^{++}$, and the scattering cross section for the $D_s\bar{D}_s$ channel. Here, the cutoff is taken as Λ = 1.65 GeV.
Compared to the $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ coupled systems with $J^{PC} = 0^{++}$, the cutoff here is slightly larger; therefore, the OBE interactions in the $D_s\bar{D}_s/D^*\bar{D}^*$ coupled channel systems are slightly less attractive than the $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ interactions. Because the cutoff values still fall in the reasonable region, we can conclude that the $D_s\bar{D}_s/D^*\bar{D}^*$ coupled channel systems provide attraction strong enough to form a resonance, and that the coupled channel effects originating from the $D_s^*\bar{D}_s^*$ system play a minor and positive role. For the $D_s\bar{D}_s/D_s^*\bar{D}_s^*$ coupled systems with $J^P = 0^{++}$, the resonance emerges at cutoffs Λ larger than 2.29 GeV. Obviously, this cutoff value Λ is far from the cutoff value in the $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ coupled systems with $J^{PC} = 0^{++}$, which shows that the OBE interactions in the $D_s\bar{D}_s/D_s^*\bar{D}_s^*$ coupled channel systems are more weakly attractive. Thus, the coupled channel effects originating from the $D^*\bar{D}^*$ system, which is discarded here, play an important role in forming the X(3960) as a resonance. In the second subfigure of Figure 3, we also present the obtained resonant mass M and width Γ for the $D_s\bar{D}_s/D^*\bar{D}^*$ coupled systems with $J^P = 0^{++}$ and the $D_s\bar{D}_s/D_s^*\bar{D}_s^*$ coupled systems with $J^P = 0^{++}$, respectively. For the $D_s\bar{D}_s/D^*\bar{D}^*$ coupled systems with $J^P = 0^{++}$, the obtained width is less than 1.00 MeV. This is too small compared to the experimental width of the newly observed X(3960). Therefore, we can conclude that the $D_s^*\bar{D}_s^*$ channel plays a very important role in generating the width of the X(3960) as a charmoniumlike resonance. However, for the $D_s\bar{D}_s/D_s^*\bar{D}_s^*$ coupled systems with $J^P = 0^{++}$, the width varies from several MeV to seventy MeV in the mass region 3940 < M < 4018 MeV. When we align the resonant mass with the X(3960), our theoretical result for the resonant width falls into the experimental region for the X(3960). In comparison with the results from the $D_s\bar{D}_s/D^*\bar{D}^*$ coupled systems with $J^P = 0^{++}$, we find that the $D_s^*\bar{D}_s^*$ channel affects the width of the X(3960) considerably, which may be caused by the larger phase space for the $D_s^*\bar{D}_s^*$ channel decaying to the $D_s\bar{D}_s$ final state. In any case, if the newly observed X(3960) can be regarded as an isoscalar charmoniumlike resonance, the contribution from the $D_s^*\bar{D}_s^*$ channel cannot be ignored.
FIG. 3: The resonant mass dependence on the cutoff Λ for the $D_s\bar{D}_s/D^*\bar{D}^*$ coupled systems with $J^P = 0^{++}$ (red dotted line) and the $D_s\bar{D}_s/D_s^*\bar{D}_s^*$ coupled systems with $J^P = 0^{++}$ (blue slash line). Here, the shaded area corresponds to the reported experimental mass of the newly observed X(3960), including the experimental uncertainty. The short slash lines label the upper and lower limits for the width of the reported X(3960).
From the current numerical results, both the $D^*\bar{D}^*$ and $D_s^*\bar{D}_s^*$ systems are very important in the formation of the X(3960) as a charmoniumlike resonance.

IV. PREDICTIONS OF THE $D^*\bar{D}^*/D_s\bar{D}_s^*/D_s^*\bar{D}_s^*$ COUPLED RESONANCES WITH $J^{PC} = 1^{+-}$
FIG. 4: The cutoff Λ dependence of the resonant parameters (mass M and decay width Γ) for the isoscalar $D^*\bar{D}^*/D_s\bar{D}_s^*/D_s^*\bar{D}_s^*$ interactions with $J^{PC} = 1^{+-}$.
In Figure 5, we present the scattering energy $\sqrt{s}$ dependence of the phase shifts for all the investigated channels of the $D^*\bar{D}^*/D_s\bar{D}_s^*/D_s^*\bar{D}_s^*$ coupled resonances with $J^{PC} = 1^{+-}$, together with the scattering cross section for the $D^*\bar{D}^*$ channel, where we can identify two resonant structures.
FIG. 5: The scattering energy $\sqrt{s}$ dependence of the phase shifts for all the investigated channels of the $D^*\bar{D}^*/D_s\bar{D}_s^*/D_s^*\bar{D}_s^*$ coupled resonances with $J^{PC} = 1^{+-}$, and the scattering cross section for the $D^*\bar{D}^*$ channel. Here, the cutoff is taken as Λ = 1.65 GeV.
We analyze the phase shifts of the $D_s\bar{D}_s/D^*\bar{D}^*/D_s^*\bar{D}_s^*$ coupled systems with $J^{PC} = 0^{++}$ after adopting the OBE effective potentials and considering the S − D wave mixing effects. Our results show that there can exist a possible charmoniumlike resonance for reasonable cutoff inputs. The obtained resonant mass and width are consistent with the experimental data for the newly observed X(3960). Here, we also find that the $D^*\bar{D}^*$ and $D_s^*\bar{D}_s^*$ channels play important roles in the binding and in the decay width of the X(3960) as a charmoniumlike resonance, respectively. In addition, we adopt the same OBE model to study the isoscalar $D^*\bar{D}^*/D_s\bar{D}_s^*/D_s^*\bar{D}_s^*$ interactions with $J^{PC} = 1^{+-}$. Finally, we obtain two possible charmoniumlike structures, which can correspond to the $D_s\bar{D}_s^*$ molecular state with $J^{PC} = 1^{+-}$ and the $D_s^*\bar{D}_s^*$ molecular state with $J^{PC} = 1^{+-}$ [27], whose widths are of the order of several MeV and several to several tens of MeV, respectively. The $\eta_c\phi$ and $J/\psi\eta^{(\prime)}$ channels can be important two-body hidden-charm decay channels for these two bound states. The $D_s\bar{D}_s^*$ channel is also the only open-charm decay mode for the $D_s^*\bar{D}_s^*$ molecular state with $J^{PC} = 1^{+-}$.
Equilibrium Model of Movable Elements of Micromechanical Devices with Internal Suspensions
In this work, an equilibrium model of the mirror elements of micromechanical components is developed, the behavior of the mirror element of micromechanical mirrors under changing control voltages of the electrostatic actuators is analyzed, and an expression is obtained for the maximum deflection voltage at which the snap-down effect occurs, taking into account the electrostatic stiffness coefficient of the electrostatic actuators. The developed equilibrium model of mirror elements and the obtained modeling results can be used in the design of micromechanical mirrors with internal suspensions. Keywords: micromechanical mirrors, equilibrium model, electrostatic actuators, criteria, coefficient.
Introduction
Optical systems have evolved dramatically over the past 50 years and now have a wide range of uses in telecommunications, information display and metrology. The invention of the laser in 1960 allowed, for the first time, practical commercial and industrial applications of coherent optical systems. Everyday applications of optical systems are abundant, including systems that use light to translate electrical signals into visible images (video displays, laser printers) and systems that translate visible images into electrical signals (digital cameras, barcode scanners). Although optical systems are quite useful for many applications, devices based on them have performance limits because of the overall dimensions of their components. For example, conventional mechanical scanners have significant performance limitations due to the size of their scanning mirrors. Miniaturization of optical components has enabled many new applications [1,3].
Microelectromechanical systems (MEMS) technology, a set of manufacturing techniques broadly based on semiconductor manufacturing processes, is widespread throughout the world. The range of areas in which MEMS devices are in demand is expanding rapidly thanks to their small overall dimensions, high-speed performance and rather low price. MEMS promises to bring the benefits of miniaturization to mechanical optical elements in the form of low-cost, reliable opto-mechanical components. MEMS technologies have produced micron- to millimeter-sized mechanical systems. MEMS sensors of various kinds are widely available, and MEMS actuators are widely used in inkjet printers. The application of these MEMS manufacturing techniques is creating a revolution in opto-mechanical systems [1][2][3].
Microopticoelectromechanical systems (MOEMS) are one of the popular and promising directions in the development of optical systems. The basic concept of MOEMS is the miniaturization of combined optical, mechanical, and electronic functions into an integrated assembly, or a monolithically integrated substrate, through the use of MEMS techniques. MOEMS is a rapidly growing area of research and commercial development with great potential to impact daily life. MOEMS devices can be applied to optical scanning, both resonant beam scanning and steady-state beam steering, and ultimately deliver performance gains such as reduced size and cost and increased speed, reliability, and accuracy. Among the most important MOEMS components in scanner fabrication are scanning mirrors [1][2][3][5].
The electromechanical development and study of micromechanical mirrors is one of the directions of microopticoelectromechanical systems development. Work on micromechanical mirrors gives an overview of the performance enhancements that can be realized by miniaturizing scanning mirrors like those used for laser printers and barcode scanners, and of the newly enabled applications, including raster-scanning projection video displays and compact, high-speed fiber-optic components. A wide variety of methods are used to fabricate micromechanical mirrors, each with its advantages and disadvantages. There are, however, performance criteria common to mirrors made by any of these fabrication processes. For example, optical resolution is related to the mirror aperture, the mirror flatness, and the scan angle. The study of micromechanical mirrors provides a framework for the design of micromirrors and yields equations showing the fundamental limits of micromirror performance. These limits give the micromirror designer tools with which to determine acceptable mirror geometries and to quickly and easily determine the range of achievable mirror optical resolution [1][2][3][4].
Micromechanical mirrors are commonly used both in microsystems for managing optical beams and in laser and optical rangefinders. Such rangefinders are used in orientation and navigation systems that guide mobile objects over a terrain relief [1,2].
Problem statement
Electrostatic actuators are used to deflect the mirror element in the proposed micromechanical components. All electrostatic actuators are subject to the snap-down (pull-in) effect [9][10][11][12][13][14][15]. The criteria defining the conditions under which this effect occurs can be obtained from the equilibrium model of the mirror element.
The developed equilibrium model of the mirror elements of the micromechanical components can be written in normalized form in terms of the dimensionless variables W and n, which are defined by: the relative dielectric permittivity of the air gap; the electric constant; the distances from the rotation axis to the edges of the fixed electrodes of the electrostatic actuators; w, the width of the fixed electrodes; the distance between the fixed electrodes of the electrostatic actuators and the mirror element; β and βmax, the deflection angle and the maximum deflection angle of the mirror element; kβ, the stiffness coefficient of the elastic suspension of the mirror element; the deflection voltage; and L, the length of the mirror element. In Figure 1, the curves show the behavior of the mirror element of micromechanical mirrors as the control voltages of the electrostatic actuators change. The turning points of the curves separate two system states: the lower branch corresponds to the stable state of the system, and the upper branch corresponds to the unstable state. In the unstable state, a slight change of the control voltages leads to the snap-down effect and to the failure of the device. Thus, the electrostatic actuators of micromechanical mirrors should operate on the lower part of the curves. The location of the turning point is also affected by the configuration of the electrostatic actuators, in particular by the size of their stationary (fixed) electrodes.
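As an illustration of how the stable branch, the unstable branch, and the snap-down point arise from a balance between elastic and electrostatic forces, the following sketch treats a generic one-degree-of-freedom parallel-plate actuator. It is not the authors' normalized torsional-mirror model (whose equations are not reproduced above), and all parameter values are assumed for illustration only.

```python
import numpy as np

# Generic 1-DOF parallel-plate electrostatic actuator (illustration only; the
# paper's normalized torsional-mirror model is not reproduced here).
eps0 = 8.854e-12          # F/m, electric constant
area = 1.0e-6             # m^2, electrode area (assumed)
gap  = 2.0e-6             # m, initial gap (assumed)
k    = 1.0                # N/m, suspension stiffness (assumed)

def stable_displacement(voltage):
    """Displacement on the stable (lower) branch, or None if the electrostatic
    force exceeds the spring force everywhere (snap-down)."""
    x = np.linspace(0.0, 0.999 * gap, 20000)
    net = k * x - eps0 * area * voltage**2 / (2.0 * (gap - x) ** 2)
    crossings = np.where(np.diff(np.sign(net)) != 0)[0]
    return x[crossings[0]] if crossings.size else None

v_pull_in = np.sqrt(8.0 * k * gap**3 / (27.0 * eps0 * area))   # classic pull-in voltage
for factor in (0.25, 0.5, 0.75, 0.95, 1.05, 1.25):
    v = factor * v_pull_in
    x_eq = stable_displacement(v)
    state = (f"x/gap = {x_eq / gap:.3f} (stable branch)"
             if x_eq is not None else "no equilibrium -> snap-down")
    print(f"V = {v:5.2f} V : {state}")
print(f"analytic pull-in estimate: {v_pull_in:.2f} V, reached at x = gap/3")
```

The sweep makes the two branches visible: below the pull-in voltage a stable equilibrium exists on the lower branch, while above it the restoring force can no longer balance the electrostatic force and the device snaps down.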
Results and discussion
However, Equation (5) defines only the maximum value of a constant deflection voltage U1. When the deflection voltage varies according to a given harmonic law, the maximum value leading to the snap-down effect is larger than U1. This is due to the influence of the electrostatic stiffness coefficient created by the electrostatic actuators. In this case, the expression for the maximum deflection voltage at which the snap-down effect occurs takes the following form:
Conclusions
An equilibrium model of the mirror elements of micromechanical components has been developed; the behavior of the mirror element of micromechanical mirrors under changing control voltages of the electrostatic actuators has been analyzed; the dependence of the relative shift of the mirror element on the applied voltage for different values of the relative size n of the fixed electrodes of the electrostatic actuators has been shown; and an expression has been obtained for the maximum deflection voltage at which the snap-down effect occurs, taking into account the electrostatic stiffness coefficient of the electrostatic actuators. The developed equilibrium model of the mirror elements and the obtained modeling results can be used in the design of micromechanical mirrors with internal suspensions.
Figure 1 shows the dependence of the relative shift of the mirror element W on the applied voltage U* when the relative size n of the fixed electrodes of the electrostatic actuators takes different values.

Figure 1. Dependence of the relative shift of the mirror element W on the applied voltage.

Figures 2 and 3 show the dependences of the critical values of the relative shift of the mirror element W and of the applied voltage defining the occurrence of the snap-down effect on the relative size of the fixed electrodes n.
On the duality of three-dimensional superfield theories
Within the superfield approach, we consider the duality between the supersymmetric Maxwell-Chern-Simons and self-dual theories in three spacetime dimensions. Using a gauge embedding method, we construct the dual theory to the self-dual model interacting with a matter superfield, which turns out to be not the Maxwell-Chern-Simons theory coupled to matter, but a more complicated model, with a ``restricted'' gauge invariance. We stress the difficulties in dualizing the self-dual field coupled to matter into a theory with complete gauge invariance. After that, we show that the duality, achieved between these two models at the tree level, also holds up to the lowest order quantum corrections.
I. INTRODUCTION
For a long time, it has been recognized that it is important to establish connections between apparently unrelated situations so that unifying pictures may emerge. In this context, the duality between the Abelian Maxwell-Chern-Simons (MCS) and self-dual (SD) theories in three dimensional spacetime found in [1] is a paradigmatic example. Extensions of this relation involving non-Abelian gauge fields were considered by various authors [2].
However, while the duality is well established for the free case, the situation becomes more subtle when interactions with other dynamical fields are taken into account. For instance, in [3] a master field action was used to show that the duality between the MCS and SD models coupled to fermions requires the addition of a Thirring current-current interaction in the MCS Lagrangian. However, problems were met in using the same method to study the duality when interactions with a bosonic field were present. In [4,5], a so-called gauge embedding procedure was developed to overcome these problems. In this way, it became possible to build a theory dual to the SD model coupled to bosons or fermions. In the latter case, this dual model turned out to be the MCS theory with a Thirring interaction, as found in [3], while in the former case a more complicated situation arose. Indeed, the theory dual to the SD model coupled to bosons was found to be a modified MCS theory with an unusual field-dependent coefficient for the Maxwell term.
Another interesting question concerns the realization of the duality for supersymmetric models. A first step in this direction was the use of the master field action approach in [6] to study the equivalence between the supersymmetric MCS and SD theories in the superfield formulation. However, using this method, it was not possible to go beyond the simplest case of Abelian theories without any coupling to matter. Our aim here is to propose a generalization of the gauge embedding method to construct a theory dual to the supersymmetric self-dual model interacting with a scalar superfield. We will show that this dual theory involves both a Thirring interaction, as well as the modified MCS part. After building the duality at the classical level, we will show that it survives when the first order quantum corrections are taken into account.
It is important to remind that this modified MCS Lagrangian does not define a genuine gauge theory in the usual sense, since its action is invariant under a "restricted" gauge invariance, that is to say, when only the basic spinor superpotential undergoes a gauge transformation. As we shall see, it is very difficult to generalize the gauge embedding method in order to turn the dual of the SD model into a genuine gauge theory. This paper is organized as follows. In Section II, we review the duality between supersymmetric SD and MCS models, without coupling to dynamical sources, using the superfield formalism. Afterwards, in Section III, we include the interaction of the SD with a matter superfield, and use the gauge embedding method to build the dual of this theory. The dual equivalence so obtained is shown to be maintained by the lowest-order quantum corrections in Section IV. In Section V, we describe the difficulties that arise when one tries to find a dual for the SD model which is a "genuine" gauge theory. Our conclusions, together with some comments on the applicability of these methods to the noncommutative extensions of these models, are found in Section VI.
II. DUALITY FOR THE FREE THEORIES
Our starting point is the superfield Maxwell-Chern-Simons theory, described by the action (2.1), where $W_\alpha = \frac{1}{2} D^\beta D_\alpha A_\beta$ is the usual superfield strength and $A_\alpha$ is the spinor superpotential. Hereafter, we follow the conventions of [7]. The action (2.1) is invariant under the gauge transformation $\delta A_\alpha = D_\alpha \epsilon$.
After the addition of the gauge-fixing term, the propagator of the $A_\alpha$ field takes the form of Eq. (2.4), where $\delta_{12} \equiv \delta^2(\theta_1 - \theta_2)$. From Eq. (2.4), we obtain the propagator of the $W_\alpha$ superfield, Eq. (2.5). Let us now consider the self-dual theory, whose action (2.6) involves a spinor superfield $B_\alpha$, with $\Omega_\alpha = \frac{1}{2} D^\beta D_\alpha B_\beta$ an analog of the superfield strength defined in the MCS theory. The propagator of the $B_\alpha$ superfield follows from (2.6). Notice that the propagators for $W_\alpha$ and $B_\alpha$ both have a pole at $p^2 = -4m^2$. Near this pole the $B_\alpha$ propagator reduces to its pole part, where the dots stand for terms which stay finite as $p^2 \to -4m^2$. Thus the superfield $B_\alpha$ of the supersymmetric self-dual model seems to play the same role as the superfield strength $\frac{1}{2m} W_\alpha$ of the MCS theory. This conclusion is further substantiated by the equations of motion derived from the actions (2.1) and (2.6). Denoting the vector components of the superfields $W_\alpha$ and $B_\alpha$ in the usual way, $f_m$ for $W_\alpha$ and correspondingly for $B_\alpha$, we find that the propagators for these fields in the neighborhood of the above-mentioned pole coincide. We remind the reader that $f_m$ is the dual of the $F_{mn}$ tensor, the field strength of the "electromagnetic" component field in $A_\alpha$.
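Since the displayed equations of this section are not reproduced above, it may help to recall the familiar non-supersymmetric component form of the free duality that the last sentence refers to; the Lagrangians and signs below follow common conventions and are not taken from the paper.

```latex
% Non-SUSY reminder of the free duality (conventions assumed, not copied from the paper):
\mathcal{L}_{\rm SD}  = \tfrac{m^{2}}{2}\, f_{\mu} f^{\mu}
                      - \tfrac{m}{2}\,\epsilon^{\mu\nu\rho} f_{\mu}\partial_{\nu} f_{\rho} ,
\qquad
\mathcal{L}_{\rm MCS} = -\tfrac{1}{4}\, F_{\mu\nu}F^{\mu\nu}
                      + \tfrac{m}{2}\,\epsilon^{\mu\nu\rho} A_{\mu}\partial_{\nu} A_{\rho} .
% With the dual field strength F^{\mu}\equiv\epsilon^{\mu\nu\rho}\partial_{\nu}A_{\rho},
% both equations of motion take (up to convention-dependent signs) the same first-order form,
m\, f^{\mu} = \epsilon^{\mu\nu\rho}\partial_{\nu} f_{\rho} ,
\qquad
m\, F^{\mu} = \epsilon^{\mu\nu\rho}\partial_{\nu} F_{\rho} ,
% so the free models map into each other through f_{\mu} \leftrightarrow F_{\mu}/m,
% the component analogue of B_{\alpha} \leftrightarrow W_{\alpha}/2m used in the text.
```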
III. THE GAUGE EMBEDDING METHOD
Here we investigate the persistence of the duality pointed out in the previous section when interaction with matter is included. In this situation the approach developed in [3], based on the use of the Green functions for the field equations, cannot be directly applied. In fact, the coupling of the gauge superfield to the (scalar) matter superfield cannot be represented in the form A α J α because of the presence of an extra "diamagnetic" term characteristic of the minimal coupling. To circumvent this problem we use a gauge embedding approach, similar to the one developed in [5].
Let us introduce the action of the self-dual model coupled to matter, The pure matter sector of this theory is invariant under the transformations φ → φe iǫ , Our aim consists in transforming the whole self-dual theory (3.1) into a gauge theory, in a sense to be clarified later. We start by recasting the pure matter sector of the action (3.1) in the form, where we used the notation The "gauge" current of the model differs from the above expression by a B field dependent term, and the equations of motion for the gauge and scalar superfields are Here µ 2 = m 2 − g 2 8 φφ is the field-dependent "mass" of the B α superfield. We will refer to δS δB α ≡ K α , given by the left-hand side of (3.5a), as the Euler vector and we note that K α can be rewritten in terms of the "gauge" current J α as The gauge embedding procedure is an iterative method that starts by the introduction of an auxiliary field Λ α , which is a Lagrange multiplier for the Euler vector corresponding to the spinor superfield (the introduction of the iterative method with respect to both the spinor and scalar superfields, which in principle would provide complete gauge invariance in the resulting model, becomes much more complicated, as we show in Section V). We therefore define the first-order iterated Lagrangian while the change in the Euler vector K α is and, therefore, if we define δΛ α = D α ǫ, the variation δL (1) turns out to be so that it can be canceled by the variation of the term 4µ 2 Λ α Λ α . Thus, the second-order iterated Lagrangian is invariant under the gauge transformation δB α = D α ǫ. The Lagrange multiplier Λ α can be eliminated using its equation of motion and we finally arrive at the gauge invariant Lagrangian (3.14) whose explicit form is After renaming the spinor superfield in the previous equation as A α and some rearrangements, we can cast the action we have found for the theory dual to the SD Lagrangian in (3.1) as where the superfield strength W α has been defined in Eq. (2.2) and the J α is given in (3.3).
In the pure spinor sector, the Lagrangian (3.16) is similar to the superfield Maxwell-Chern-Simons action. However, the W α W α term has an unconventional field dependent coefficient, as it happens in some generalizations of the Abelian Higgs model (see for instance [9]). We stress again that the action obtained from Eq. Thirring interaction. We also remark that the gauge field interaction with the matter given by the term W α J α is the superfield analog of the "magnetic" coupling ǫ abc ∂ a A b J c [3].
Let us now compare the equations of motion for the spinor superfield in the self-dual model (3.1) and for the superfield strength in the DMCS model (3.16). After introducing the operator they are given respectively by Given the inverse of the (∆ −1 ) β α operator as ∆ α ρ (∆ −1 ) β α = δ β ρ , the solution of Eq. (3.18a) can be readily obtained, while, for solving Eq. (3.18b), one starts by applying ∆ α ρ to Eq. (3.17) to obtain which can be used to write the solution of Eq. (3.18b) as and therefore as g → 0 one recovers the relation B α = W α 2m . It still remains to verify the equivalence for the matter sectors of these models. To this end we consider the equation of motion for the scalar superfield φ corresponding to the DMCS model, By using the expression (3.22) we arrive at which coincides with the equation of motion for the matter superfield in the SD model. This confirms the complete duality equivalence of these two models.
The Lagrangian in Eq. (3.16) contains nonrenormalizable interactions but, concerning renormalizability, we do not expect these to generate difficulties at the quantum level. As happens in the nonsupersymmetric model [10], the Thirring interaction is renormalizable in the framework of the $1/N$ expansion for an N-component scalar superfield. Indeed, in that case we can eliminate the four-scalar vertex $J^\alpha J_\alpha$ in favor of $S^\alpha \bar\phi_i \overset{\leftrightarrow}{D}_\alpha \phi_i - \frac{1}{2} S^\alpha S_\alpha$, where $S_\alpha$ is an auxiliary superfield whose propagator is, up to a constant, equal to the one for the gauge spinor superfield in the $CP^{N-1}$ model [8]. It behaves as $1/k$ for large momentum $k$, drastically improving the power counting. One may even entertain the hope that renormalizability also holds for finite N, although a direct proof is not feasible at the moment.
IV. INCLUSION OF RADIATIVE CORRECTIONS
After establishing the duality between the SD model defined by Eq. (3.1) and the DMCS model in Eq. (3.16) at the classical level, we shall now present some calculations to verify whether this equivalence persists at the quantum level. In more concrete terms, we will examine the radiative corrections to the two point vertex functions of the B α , W α and φ superfields for both theories, up to the second order in the coupling constant g, and we will verify that they are compatible with Eq. (3.22).
The relevant interaction terms of the SD Lagrangian are those coupling the spinor superfield to matter while, for the DMCS model, up to second order in g, we have two similar interaction terms together with the Thirring interaction. We start by considering the first quantum corrections to the two-point function of the spinor superfields $B_\alpha$ and $W_\alpha$. In both cases, the relevant superdiagrams are those depicted in Fig. 1, where each internal line stands for the $<\bar\phi\phi>$ propagator, and the external wavy lines represent either the external $B_\alpha$ or $W_\alpha$ superfields. The evaluation of these diagrams yields a finite result, given by Eq. (4.5) for the SD theory and by Eq. (4.6) for the DMCS model, where we employed the notation $I(k, p) = \{(k^2 + M^2)[(k + p)^2 + M^2]\}^{-1}$. Notice that the contribution (4.5) goes into (4.6), and vice versa, under the exchange of $B_\alpha$ by $\frac{W_\alpha}{2m}$. At the approximation we are working with, this is consistent with Eq. (3.22), since the terms involving φ on the right-hand side of (3.22) contain additional powers of the coupling constant g. Hence, the duality between the SD and the DMCS models is maintained after the inclusion of the first quantum corrections induced by the diagrams in Fig. 1.
To further examine the persistence of the duality at the quantum level, we focus now on the corrections to the two-point function of the scalar superfield, which arise from the supergraphs shown in Fig. 2. The evaluation of the supergraph in Fig. 2a is the simplest one. The result, $S_{2a}$, is the same both for the SD and for the DMCS theories (we note that the "longitudinal" term of the $B_\alpha$ propagator, proportional to $\frac{1}{m^2 p^2}$, does not contribute since it is proportional to $D^2 D^\alpha D_\alpha \delta_{12}|_{\theta_1=\theta_2} = 0$).
To calculate the contribution from the graph in Fig. 2b we regroup some terms in the propagator of the B α superfield, which can be cast as The second term of this expression is constant, whereas the first term is equal to the propagator of W α up to a factor 1/4m 2 . This difference is, however, compensated by the factor 1/4m 2 in the quartic vertex of the DMCS model, Eq. (4.2), so that the contributions of the diagram 2b in the DMCS model and the one corresponding to the first piece of the B α propagator in the SD model are identical, and read (4.9) We stress that this expression is exact, including superficially divergent as well as finite parts.
The contribution from the second (constant) term of the B α propagator in Eq. (4.8) can be found to be At the end of the day, we conclude that the first quantum corrections to the two-point vertex function of the scalar field for the self-dual and the dualized Maxwell-Chern-Simons theories are identical, given by the sum of S 2a , S 2b and S 2b ′ . This result confirms the duality between these two models when these quantum corrections are taken into account.
V. DIFFICULTIES IN A COMPLETE GAUGE EMBEDDING PROCEDURE
The gauge embedding procedure developed in Section III allowed us to obtain the theory defined by the action (3.16), dual to the SD model coupled to a scalar superfield, characterized by the "restricted" gauge symmetry δA α = D α ǫ, where ǫ is an infinitesimal parameter and the matter superfield is kept untouched by the gauge transformation. A natural question is whether one can adapt this method to obtain a theory in which gauge transformations affect also the matter superfield, as it takes place in the usual supersymmetric electrodynamics [7]. In this section, we develop the "complete" gauge embedding procedure, introducing Lagrange multipliers for both the spinor and scalar superfields.
From a formal viewpoint, the gauge embedding prescription is the following: starting with the Lagrangian L(Φ i ), where Φ i is the set of the dynamical variables in the theory, are the corresponding Euler vectors. Next, if we denote the gauge transformation of each field as ∆Φ i , the total variation of the Lagrangian L(Φ i ) is 2) The first-order iterated Lagrangian is defined by where Λ i are the Lagrange multipliers, and the corresponding variation under a gauge transformation is To simplify this expression, we choose the Lagrange multiplier Λ i to change, under a gauge transformation, as δΛ i = ∆Φ i , and therefore To cancel this variation we should augment L (1) by some function of the Lagrange multiplier, f (Λ), judiciously chosen so that the second-order iterated Lagrangian, is gauge invariant. The equation to be satisfied by f (Λ) for this purpose is where f ,i (Λ) = ∂f ∂Λ i . In summary, when Eq. (5.7) has a nontrivial solution f (Λ), the gauge embedding method will provide us, in principle, with an invariant action given by (5.6). In Section III, we considered the situation in which the spinor superfield is transformed but not the scalar one, and in this case we were able to go through all steps of this procedure, obtaining the action (3.16). Now we turn to the case where the scalar superfield is also transformed and we will show that, even if we can find a nontrivial solution for (5.7), the application of the gauge embedding method turns out to be extremely cumbersome.
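Before turning to that case, the generic prescription just described can be summarized schematically. Because the displayed equations are not reproduced in the text, the signs and the precise form of the counterterm condition below are assumptions, written only to exhibit the structure of the construction.

```latex
% Schematic gauge-embedding iteration (signs and normalizations assumed):
K^{i} \equiv \frac{\delta L}{\delta \Phi_{i}} , \qquad
L^{(1)} = L - \Lambda_{i} K^{i} , \qquad
\delta\Lambda_{i} = \Delta\Phi_{i} ,
% so that, using \delta L = K^{i}\,\Delta\Phi_{i},
\delta L^{(1)} = -\,\Lambda_{i}\,\delta K^{i} ;
% one then adds a counterterm f(\Lambda),
L^{(2)} = L^{(1)} + f(\Lambda) , \qquad
f_{,i}(\Lambda)\,\Delta\Phi_{i} = \Lambda_{i}\,\delta K^{i} ,
% and finally eliminates \Lambda_{i} through its own equation of motion,
% which yields the gauge-invariant dual Lagrangian.
```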
In the present case, besides the Euler vector for the spinor superfield B α , we introduce an Euler vector for the scalar superfieldφ, (5.8) and the conjugated vectorK for φ. Correspondingly, we will introduce the Lagrange multipliers Λ andΛ. The first-order iterated Lagrangian is given by and its variation under the infinitesimal gauge transformations is given by after choosing the variations of the Lagrange multipliers as follows, We also write the equation (5.7) in the case under consideration, where f Λ = ∂f ∂Λ , fΛ = ∂f ∂Λ , f ,α (Λ) = ∂f ∂Λ α . Next, we evaluate the variations for the Euler vectors K α , K,K, starting with the spinor one, (5.14) It is interesting to compare this with Eq. (3.10), to see the effect of the variation of the scalar superfield, which was absent in that case. As for the variation of the remaining Euler vectors, one finds 15) and the complex conjugate of the above expression for δK.
Finally, inserting (5.15) and (5.14) into (5.13) and collecting the factors multiplying D 2 ǫ, D α ǫ, and ǫ, respectively, we found the condition (5.13) to be equivalent to the following set of equations, ) is a constraint on the Lagrange multipliers Λ,Λ, which can be inserted into Eqs. (5.16b) and (5.16c), to obtain Equations (5.17) can actually be solved, and the solution reads which, going back to (5.6), gives the second-order Lagrangian The corresponding equations of motion for Λ α , Λ,Λ are Their solutions have the highly cumbersome form In principle, we could eliminate Λ α , Λ andΛ from the second-order Lagrangian in Eq. (5.19) using their equations of motion but, as can be seen from the explicit solutions we have just quoted, in practice this would be extremely complicated. However, even without writing explicitly the effective Lagrangian obtained with our complete gauge embedding procedure, we note that, because of (5.14), the Maxwell term in this effective Lagrangian would not appear with the field-dependent coefficient 1/µ 2 . There is no simple way to relate the resulting effective theory to the dualized Maxwell-Chern-Simons we have obtained in Eq. (3.16). We see that, even if it does not provide us with a "genuine" gauge theory, the gauge embedding procedure adopted in Section III seems to be more adequate since it leads to a more tractable theory.
VI. CONCLUSIONS
We considered the dual equivalence between the supersymmetric self-dual and the supersymmetric Maxwell-Chern-Simons models in three spacetime dimensions. This duality was shown to take place for the free theories, leading to the known duality between the vector components of these superfields. To contemplate the situation where the interaction with matter is present, we used a gauge embedding method to build the dual to the supersymmetric SD model coupled to a scalar superfield, and found it to be a modified MCS theory, with an unusual field-dependent coupling for the Maxwell term, together with a Thirring interaction and a nonpolynomial "magnetic" coupling of the matter to the gauge superfield.
Then, we showed that the dual equivalence of these two models is maintained by the quantum corrections, at least in the one-loop approximation.
Also, we developed a prescription for a generalized gauge embedding procedure which allows one to obtain a dual theory invariant under gauge transformations of both the gauge and the scalar superfields. However, this result is mostly of academic interest, since it becomes very difficult to write the resulting effective Lagrangian explicitly in this case.
We close this paper by recalling recent discussions in the literature about whether the duality between self-dual and topological gauge theories is realized in noncommutative space-times [11,12,13], usually making use of the Seiberg-Witten (SW) map [14]. In [15], the question was analyzed without the recourse to the SW map, and it was argued that the noncommutative SD model is not dual to the noncommutative generalization of the MCS theory, but instead a modified noncommutative MCS dual model was unveiled (this conclusion is consistent with the analysis using the SW map in [13]). One might hope that the gauge embedding method developed in this work could further elucidate these issues.
However, after carefully applying the steps described in Section III in the noncommutative situation, one stops at the noncommutative version of Eq. (3.13), which assumes the form of Eq. (6.1), where $\mu^2 = m^2 - \frac{g^2}{8}\,\bar\phi * \phi$, and the asterisk denotes the Groenewold-Moyal product. Therefore, in the noncommutative case, one cannot eliminate the Lagrange multiplier from the second-order iterated Lagrangian using its equation of motion (6.1). For the moment, this is a major stumbling block in applying the methods developed in this paper to the noncommutative version of the models studied here.
Acknowledgements. A. Yu. P. is grateful to P. Minces for useful discussions. This
Formulating biomass allometric model for Paraserianthes falcataria (L) Nielsen (Sengon) in smallholder plantations, Central Kalimantan, Indonesia
Abstract The forests in Central Kalimantan, Indonesia, have been heavily impacted by logging, mining, fires, and other degradation activities for over 30 years. To address this, the Indonesian government has promoted community-based forest management schemes. One such scheme, called Hutan Kemasyarakatan (HKm), has introduced Sengon (Paraserianthes falcataria) in smallholder plantations in Rungan Barat, Gunung Mas, Central Kalimantan. Accurate estimation of biomass is crucial for carbon sequestration credits, but there are no specific allometric models for estimating Sengon above-ground biomass (AGB) in this area. To create a site-specific AGB allometric model for Sengon, 23 trees were felled to collect fresh biomass data. Various tree variables, such as diameter at breast height (DBH, 1.3 m), total height, merchantable height, and stem bole volume, were measured for each sample tree. The average wood basic density of Sengon at the study site was also calculated. A total of nine alternative candidate regression equations were fitted and tested to select the best-fit AGB allometric model. Also, to assess the suitability of the identified AGB allometric model, comparisons with models from the literature and between two interchangeable methodologies (i.e. the direct biomass allometric model and biomass expansion factor (BEF)-based biomass estimation) were undertaken. This study developed a regression function, denoted MD2, to estimate the AGB of Sengon trees in smallholder plantations in Central Kalimantan, Indonesia. The formulated regression function demonstrated better estimation performance compared to common pantropical and regional AGB allometric models. In terms of the BEF-biomass approach, the AGB estimation derived from Smalian's volume was relatively accurate, close to the mean AGB obtained by the formulated model in this study. In summary, this study proposes using the developed model, based solely on DBH, to accurately estimate AGB and carbon sequestration potential in Sengon trees. The accurate estimation of AGB using this model has additional advantages, including facilitating carbon credit acquisition and informing long-term management decisions.
Introduction
Climate change and its repercussions have sparked concerns in national development proposals on a global scale, with forests being regarded as a critical nature-based solution for combating climate change (Osaka et al. 2021;Stefanakis et al. 2021).In particular, developing countries predominantly experience the burden of climate change's harmful consequences due to fragility, lack of adequate endurance, and poor adaptability (Nath and Behera 2011;Makundi 2014).The transition of tropical forests into agricultural land and overexploitation of forested land substantially disrupts global carbon cycles in a negative way and exacerbates the forested land cover change that accounts for between 10% and 20% of total carbon emissions (Pachauri and Reisinger 2007;van der Werf et al. 2009).
Kalimantan's existing forests are categorized primarily into two types: (1) Intact forests, which are mostly located at higher altitudes, out of the range of logging corporations, and (2) Low-land fragmented forests, which extend to swamps and include agroforestry, plantations, scrublands, and farmlands (Ferraz et al. 2018).Central Kalimantan was reported to experience devastating deforestation in Indonesia (Broich et al. 2011;Suwarno et al. 2015), where more than 30 years of extensive logging, mining, and other forms of degradation have seriously affected all forest ecosystems (Kronseder et al. 2012;Moeliono and Limberg 2012).The Central Kalimantan province lost almost 0.9 million ha of forest between 2000 and 2008 (Broich et al. 2011;Suwarno et al. 2015).However, in 2018, Central Kalimantan exhibited one of the lowest rates of deforestation among the Indonesian provinces, with a rate of 0.38% that was 81% lesser than the 1990-2012 deforestation baseline.Also, as of 2018, Central Kalimantan had accumulated 7.8% of Indonesia's total tropical forest biomass carbon (Earth Innovation Institute 2020).The Indonesian government's commitment to sustainably managing and utilizing forest resources is accountable for such a decline in deforestation, which is demonstrated by the certification of sustainable forest management to prevent illegal logging, the reinforcement of a customized law enforcement unit, the resolution of land disputes, and the upholding of community land rights and forest tenure (Ministry of Environment and Forestry 2020).Forests in Indonesia are administered by the Ministry of Environment and Forestry across several Forest Management Units (Kesatuan Pengelolaan Hutan: KPHs).The Protection Forest Management Unit: Kesatuan Pengelolaan Hutan Lindung (KPHL) is one of the important categories of KPH.The Indonesian government launched Hutan Kemasyarakatan (HKm1 ) scheme in 2001 as a follow-up to the KPHL, which means "Community Forests/Social Forests", to curb increased forest degradation, enhance the conservation of remnant forests, and promote local livelihoods (Pender et al. 2008;Fisher et al. 2018).The HKm scheme allows local people to cultivate on state-owned deforested land designated as Protection Forest or Production Forest (Pender et al. 2008).
The German organizations, named Fairventures Worldwide (FW) and Fairventures Social Forestry (FSF), operate in Central Kalimantan to restore degraded landscapes, sustain timber production, enhance livelihoods, and produce carbon credits pursuant to the HKm scheme of the Indonesian government and small-scale farmers' perceptions.FW and FSF are committed to supporting local forest concession holders in terms of logistics (e.g.providing quality seedings) and technical aspects of forest management operation, as well as the provision of carbon sequestration credits from international markets since it has a significant impact on climate change mitigation.As a part of this sustainable commitment to restore degraded forested areas in Rungan Barat, Gunung Mas, Central Kalimantan, Sengon has been largely introduced by FW and FSF with the engagement of local farmers to meet the objectives of livelihood security of local communities and national ecosystem conservation (Fairventures Worldwide 2021; Fairventures Social Forestry 2022).Sengon has acquired widespread concern as a fast-growing multipurpose plantation species in Indonesia on both private smallholder plantations and public lands for industrial and rehabilitation purposes (Nawir et al. 2007).Nevertheless, accessing carbon credits, which is considered to be a viable green income stream for those Sengon plantations, completely depends on the accurate estimation of biomass and corresponding carbon.
The framework for climate change mitigation prioritizes enhancing access to financing through global carbon markets as a means of reducing deforestation and forest degradation and increasing forest carbon stock (Aukland et al. 2003;Ebeling and Yasu� e 2008).For countries intending to respond to climate change mitigation through various forest projects, the estimation of forest carbon stock is essential (B€ ottcher et al. 2009;Birdsey et al. 2013).Also, the Paris Agreement promotes developing nations to respond to climate change mitigation by lowering emissions from deforestation and forest degradation, safeguarding existing carbon storage, and further enhancing forest carbon stock (Grassi et al. 2017).These mechanisms have paved the ground for the emergence of credible and practical approaches for estimating biomass carbon in a variety of land use systems, including plantation forests.Also, accurate estimation of tree biomass is required to comprehend forest structure, amount of carbon sequestered, forest productivity, and forest's contribution to alleviating contemporary climate change issues, (Westman and Rogers 1977;Chambers et al. 2001;Saint-Andr� e et al. 2005;Zianis et al. 2005;Henry et al. 2013;Pachauri et al. 2014), as well as to make sustainable forest management decisions (Peng 2000).The most basic practice for assessing biomass carbon stock in a forest is to use valid allometric equations for individual tree biomass estimation (Gibbs et al. 2007;van Breugel et al. 2011).Accordingly, allometric equations are being widely employed to estimate individual tree biomass, and eventually the biomass carbon in a forest area (Sileshi 2014;Traor� e et al. 2018;Kebede and Soromessa 2018;Mukuralinda et al. 2021).In addition, enhancing forest carbon sequestration, maintaining biodiversity, and supporting the livelihoods of forest-dependent communities lead to the implementation of innovative carbon credit market mechanisms, such as Reducing Emissions from Deforestation and Forest Degradation (REDDþ) (Mugasha et al. 2013).REDDþ, a result-based approach, focuses on carbon stock accounting as the most critical outcome metric for the provision of remuneration (K€ ohl et al. 2020).Due to Indonesia's participation in the worldwide REDD þ program, the assessment of forest carbon stock and stock fluctuations has emerged as a popular research area in Indonesia (Anitha et al. 2015).For accurate quantification, monitoring, and reporting of the consequences or advantages of REDD þ operations, precise biomass allometric equations are imperative (Gibbs et al. 2007;Somogyi et al. 2007).Until now, considering the composites of tropical forest species, several biomass pantropical allometric models have been developed (Haase and Haase 1995;Brown 1997;Nelson et al. 1999;Chambers et al. 2001;Ketterings et al. 2001;Chave et al. 2005;Pearson et al. 2005;Chave et al. 2014).Although the use of common pantropical allometric models straightforwardly estimates tree biomass and carbon stock for a wide range of species, the application of previously reported pantropical allometric models results in considerable bias in biomass estimation (Clark et al. 2001;Pilli et al. 2006;Basuki et al. 2009;van Breugel et al. 2011;Alvarez et al. 2012;Hossain et al. 2021).As reported, even though Chave et al.'s (2005Chave et al.'s ( , 2014) ) equation functions admirably in a few locations in South-East Asia (Rutishauser et al. 2013) and Africa (Vieilledent et al. 2012;Fayolle et al. 
2013), the preponderance of other studies reveal greater uncertainty in biomass prediction than those formulated locally (Basuki et al. 2009;Kenzo et al. 2009;van Breugel et al. 2011;Alvarez et al. 2012;Goodman et al. 2014).Furthermore, biomass differs depending on site characteristics (Alvarez et al. 2012), forest type (Rutishauser et al. 2013), wood density (Enquist et al. 1999), life history (Henry et al. 2010), crown size (Goodman et al. 2014), and climatic zones (Brown et al. 1989).Irrespective of biomass allometric equations, biomass expansion factor (BEF) and wood basic density (WBD) can also be used to convert the stem volume into total above-ground biomass (AGB) of a tree when AGB allometric equations of a species are unavailable (Somogyi et al. 2007;Lisboa et al. 2018).
In Indonesia, out of the available biomass allometric models, 47% were developed for varieties of natural forest ecosystems (i.e.dryland forest, peat swamp forest, and mangrove forest); while 52% were developed for plantation forest ecosystems (including community forests).Also, the vast majority of studies on biomass estimation took place in plantation forests in Java (Krisnawati et al. 2012).Central Kalimantan, on the other hand, has a paucity of information regarding forest inventories (Kronseder et al. 2012).In terms of available species-specific biomass allometric models in the Indonesian plantation forest ecosystem, Mangium (Acacia mangium) contributes 14%, followed by Puspa (Schimi awllichii) at 6%, Tusam (Pinus merkusii) at 5%, and Sengon (Paraserianthes falcataria) at 5% (Krisnawati et al. 2012).There are two commonly reported allometric models for estimating AGB of Sengon trees across the Indonesian plantations.These models were developed in Java, specifically in Jateng (Rusolono 2006) and Jatim (Siregar 2007).Despite the rapid introduction and establishment of Sengon plantations in Central Kalimantan, which has substantial site and environmental variations from Java, there are still no allometric regression functions that can be utilized to provide accurate biomass estimation for Sengon.It is, thus, indispensable to develop such a biomass allometric model in Central Kalimantan for Sengon.The objectives of this study are: (1) to find a best-fit allometric equation to estimate the AGB of Sengon, (2) to compare the formulated best-fit AGB allometric equation with common pantropical AGB allometric equations and Sengon's existing AGB allometric equations in the Indonesian plantation ecosystem, and (3) to evaluate the efficiency of AGB estimation by using an alternative approach: employing the BEF-biomass method that incorporates stem bole volume and WBD.
Study area
This study was conducted in Rungan Barat, Gunung Mas, Central Kalimantan, Indonesia, in smallholder plantations sanctioned by the HKm scheme of the Indonesian government with which FSF has contracted (1°10'04.9"S and 113°28'18.2"E). The entire study area consists of dispersed plantation stands from four planting seasons (i.e. 2018-2019, 2019-2020, 2020-2021, and 2021-2022), spanning approximately 400 ha and managed by FSF with the active engagement of local concession holder farmers (Figure 1). The vegetation of the studied smallholder plantation forest stands is dominated by three tree species: Sengon (Paraserianthes falcataria), Jabon (Anthocephalus cadamba), and Acacia (Acacia mangium). However, the composition of the plantation stands is predominantly a monoculture of Sengon trees, and more of the plantable patches are expected to be established mostly with this economically important species. Local farmers have long prioritized the monoculture of Sengon because of its high quality and demand for industrial wood, and various agroforestry techniques are being tested on an increasing scale with the species. Nonetheless, the overall HKm terrain consists of fragmented secondary forests, contiguous secondary forests, plantations, farmlands, shrublands, barren lands, and old-growth intact forests at high altitudinal elevations. The study site is located near Palangka Raya, the capital city of Central Kalimantan. This region experiences a tropical rainforest climate with an average annual temperature of 26.3 °C, which is around 1.57 °C higher than the average for Indonesia. The annual precipitation in Palangka Raya is around 2666 mm (Climate-data.org 2022).
AGB estimation methods
In this study, two methods were adopted to estimate AGB of Sengon.These included direct estimation from formulated allometric model and indirect estimation from stem bole/merchantable volume using BEF.
Destructive sampling and direct estimation of AGB
A destructive technique was employed, as illustrated in Hossain et al. (2016)'s manual on biomass allometric equation development, to assess the AGB of all individual sample trees.A reconnaissance survey, followed by a standard circular forest inventory plot (15 m radius) in each age-graded plantation stand at the study site (Figure 1), was performed prior to the destructive biomass assessment to obtain baseline information on probable size distributions (i.e.diameter classes) of Sengon.Once the population of Sengon had been categorized into six DBH (Diameter at Breast Height: 1.3 m) classes covering all the age gradations: 2-6 cm, 6-10 cm, 10-14 cm, 14-18 cm, 18-21 cm, and � 21 cm, a preferential sampling technique was applied to sample four trees from the first five DBH classes and three trees from the last DBH class (hence, a total of 23 trees) for destructive biomass assessment, with the intention of representing trees from all DBH classes while avoiding defective trees and edge effects.The sample trees were then felled as close to the ground level as practicable and separated into three components: stem bole (up to merchantable height), branch (diameter � 2 cm), and foliage (twigs: < 2 cm in diameter, flowers, fruits, and seeds).Before felling a tree, the DBH and total height (TH) were measured.The Merchantable Height (MH) of the tree was measured once it had been felled.
The fresh weight of each tree component for all sample trees was measured in the field using a digital hanging weighing balance (max. 300 kg, precision 0.1 kg). For each tree component, a total of six to nine sub-samples ranging from 0.064 to 1.545 kg were weighed in the field using a digital weighing balance (max. 7 kg, precision 0.0001 kg) and taken to the laboratory for oven drying. The dry weight of the stem bole, branch, and foliage sub-samples was determined in the laboratory by drying them for 10 days at 105 °C in an oven until a constant weight was achieved. A digital laboratory weighing balance (max. 3 kg, precision 0.00001 kg) was used to record the dry weight of each sub-sample. The following formula was used to determine the fresh-to-dry weight conversion ratio of each tree component:

Conversion ratio = Dry weight of the sub-sample (kg) / Fresh weight of the sub-sample (kg)

The fresh weight of each tree component of a sample tree was multiplied by the average conversion ratio obtained from the respective component sub-samples to assess the dry weight (biomass) as follows:

Dry weight/Biomass (kg) = Fresh weight (kg) × Conversion ratio

Subsequently, the total AGB of a sample tree was assessed by adding the biomass of each tree component:

AGB (kg) = Stem bole biomass + Branch biomass + Foliage biomass
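As a minimal illustration of the conversion-ratio bookkeeping described above, the following sketch computes component biomass and total AGB for one hypothetical sample tree; all weights are invented placeholders, not measurements from this study.

```python
# Minimal sketch of the fresh-to-dry conversion and AGB summation described
# above. All numbers are invented placeholders, not data from this study.
field_fresh_weight_kg = {"stem_bole": 85.0, "branch": 14.0, "foliage": 6.5}

# Sub-sample (fresh, dry) weights per component in kg, as weighed in field and lab
subsamples = {
    "stem_bole": [(1.250, 0.410), (0.980, 0.330), (1.100, 0.365)],
    "branch":    [(0.640, 0.250), (0.520, 0.205)],
    "foliage":   [(0.300, 0.095), (0.270, 0.088)],
}

def mean_conversion_ratio(pairs):
    """Average dry/fresh ratio over a component's sub-samples."""
    return sum(dry / fresh for fresh, dry in pairs) / len(pairs)

biomass_kg = {}
for comp, fresh in field_fresh_weight_kg.items():
    ratio = mean_conversion_ratio(subsamples[comp])
    biomass_kg[comp] = fresh * ratio
    print(f"{comp:10s}: conversion ratio = {ratio:.3f}, dry biomass = {biomass_kg[comp]:.1f} kg")

agb_kg = sum(biomass_kg.values())   # total above-ground biomass of the sample tree
print(f"total AGB = {agb_kg:.1f} kg")
```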
Indirect estimation of AGB using conversion factor
The indirect method of estimating AGB involves the conversion of stem bole/merchantable volume to total AGB using BEF. This process requires the determination of the WBD, which reflects the amount of carbon stored in a given volume of stem bole and accounts for the variation in tree species density. The method for calculating WBD was based on the approach described by Malimbwi et al. (1994) and Lisboa et al. (2018). To determine the WBD, two cubic specimens measuring 6 × 6 × 6 cm³ were randomly selected from the parts of the stem bole (bottom, middle, and top) and the branch among the sample trees. Overall, we sampled a total of eight cubic specimens from the stem bole and branch components. Those specimens were then taken to the laboratory, where they were oven-dried at 105 °C and weighed. Thereafter, each of the specimens was immersed in water for one week to restore its wet/fresh volume. Using the water displacement method, each specimen was submerged in a 2-L container that was successively leveled with an accuracy of 0.1 cm³ to obtain the fresh volume by recording the amount of water displaced. Finally, the WBD was calculated as the average ratio of the dry weight to the fresh volume of the specimens, as described in the following equation:

WBD = sdw / sfv

where WBD = wood basic density (kg m⁻³); sdw = specimen dry weight (kg); sfv = specimen fresh volume (m³).
Assessment of BEF.
The value of BEF was determined as the average ratio of the total dry weight (total AGB) to the stem bole dry weight (stem bole biomass) of all destructively sampled trees, as in the following equation:

BEF = (1/n) Σ (TDW / SDW)

where TDW (kg) = total (stem bole, branch, and foliage) dry weight; SDW (kg) = stem bole dry weight; n = number of sample trees. Source: Lisboa et al. (2018).
Stem bole volume estimation.
To determine the merchantable volume (i.e. stem bole volume) of each sample tree, the stem was cut into 1 m billets for larger trees and 0.5 m billets for smaller trees until the merchantable height, which marks the beginning of permanent leaf-bearing lateral branches, was reached. Smalian's formula, which gives the volume of a billet as the mean of its two end cross-sectional areas multiplied by its length, was used to determine the volume of each stem billet, as in similar investigations (e.g. Henry et al. 2010; Lisboa et al. 2018; Hossain et al. 2021; Oluwajuwon 2022) (Equation (3)). The volumes of all the billets were summed to determine the merchantable volume of each sample tree. Additionally, the merchantable volume of each individual sample tree was determined using the models developed by Siswanto (2008) for Sengon in the Indonesian plantation ecosystem (Equations (4) and (5)). This was done to assess the comparative performance of Smalian's method and the existing volume models in providing volume estimates for a more accurate total AGB evaluation through the BEF-biomass method.
Indirect BEF-biomass estimation.
Using the indirect method, the total AGB of all sample trees was estimated by converting the stem bole volume using the biomass conversion and expansion factor (BCEF), which is obtained from the product of WBD and BEF (Equations ( 6) and ( 7)).BCEF accounts for the variation in biomass distribution among different tree components and helps to appropriately convert the stem bole volume and biomass to the total AGB.
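The indirect route can be illustrated end to end with the following sketch, which combines Smalian billet volumes with the BCEF conversion described above. The billet dimensions are invented placeholders, while the WBD and BEF values are the study means quoted in the Results.

```python
import math

# Sketch of the indirect BEF-biomass route described above: Smalian billet
# volumes -> stem bole volume -> AGB via BCEF = WBD * BEF.
# Billet dimensions are illustrative placeholders, not data from this study.

def smalian_billet_volume(d_bottom_cm, d_top_cm, length_m):
    """Smalian's formula: mean of the two end cross-sectional areas times length."""
    a_bottom = math.pi * (d_bottom_cm / 200.0) ** 2   # cm diameter -> m radius -> m^2
    a_top    = math.pi * (d_top_cm / 200.0) ** 2
    return 0.5 * (a_bottom + a_top) * length_m        # m^3

# (bottom diameter cm, top diameter cm, billet length m) up to merchantable height
billets = [(21.0, 19.5, 1.0), (19.5, 17.8, 1.0), (17.8, 15.6, 1.0), (15.6, 12.9, 1.0)]
stem_volume_m3 = sum(smalian_billet_volume(*b) for b in billets)

WBD  = 282.52      # kg m^-3, mean wood basic density reported in this study
BEF  = 1.69        # mean biomass expansion factor reported in this study
BCEF = WBD * BEF   # biomass conversion and expansion factor

agb_indirect_kg = stem_volume_m3 * BCEF
print(f"stem bole volume = {stem_volume_m3:.4f} m^3")
print(f"BCEF = {BCEF:.1f} kg m^-3, indirect AGB = {agb_indirect_kg:.1f} kg")
```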
Model selection
A total of nine commonly employed regression forms in biomass estimation modeling for forest trees were evaluated as potential models for estimating the total AGB of Sengon trees (Brown 1997;Nelson et al. 1999 1).These regression models were fitted using nonlinear methods without any log-linear transformation of the equations, with the aim to identify the best-fit allometric equation for accurately estimating the AGB of Sengon.Some of the models only incorporate DBH, while others include both DBH and TH.The best-fit equation was the one with the lowest Akaike information criterion (AIC), Bayesian information criterion (BIC), root mean square error (RMSE), and mean percentage error (MPE) (Equations ( 8)-( 11)) (Chave et al. 2005;Sileshi 2014;Mugasha et al. 2016;Lisboa et al. 2018;Hossain et al. 2021).The Shapiro-Wilk and Runs tests were conducted to assess the residuals' normality and autocorrelation of the best-fit equation, respectively.However, where required, less regard was given to the normality assumption of the residuals where no indication of autocorrelation was detected, following Lisboa et al. (2018).
The "nlstools" package of the statistical software R (4.2.0) was employed to compute all of the regression parameters and coefficients (R Core Team 2022).Since the nonlinear function was considered without any linear transformation of complex equations, R 2 and Adjusted R 2 were not considered as model selection criteria.
Here, n = number of trees; Yp = predicted value from the model; Yo = observed value from field measurement; Ȳ = mean of the observed values; K = number of parameters estimated.
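As a sketch of the model-selection step, the following code fits the power-law candidate (the functional form of MD2) by nonlinear least squares and computes selection statistics. Because Equations (8)-(11) are not reproduced above, the AIC, BIC, RMSE, and MPE formulas used here are standard textbook forms that may differ in detail from the authors', and the data are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit the power-law candidate AGB = a * DBH^b and compute standard
# selection statistics. Placeholder data; standard metric definitions.
dbh = np.array([4.2, 7.5, 9.1, 12.3, 15.8, 19.4, 22.6])        # cm
agb = np.array([3.1, 11.8, 19.5, 42.0, 78.3, 131.0, 196.0])    # kg

def power_model(d, a, b):
    return a * d ** b

params, _ = curve_fit(power_model, dbh, agb, p0=(0.1, 2.5))
pred = power_model(dbh, *params)

n, k = len(agb), len(params)
rss  = np.sum((agb - pred) ** 2)
rmse = np.sqrt(rss / n)
mpe  = 100.0 * np.mean((agb - pred) / agb)        # mean percentage error
aic  = n * np.log(rss / n) + 2 * k                # least-squares form of AIC
bic  = n * np.log(rss / n) + k * np.log(n)

print(f"a = {params[0]:.4f}, b = {params[1]:.3f}")
print(f"RMSE = {rmse:.2f} kg, MPE = {mpe:.2f} %, AIC = {aic:.2f}, BIC = {bic:.2f}")
```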
Model evaluation and comparison
The efficiency of the pantropical AGB allometric models, the existing AGB allometric models for Sengon in the Indonesian plantation environment, and the model developed in this study was evaluated using the statistics given in Equations (10)-(12) (Mayer and Butler 1993; Kachamba et al. 2016; Lisboa et al. 2018; Hossain et al. 2021). Two-tailed paired t-tests were performed to assess the significance of the differences in biomass estimation between the AGB allometric model developed in this study and the AGB allometric models considered from the literature. Two-tailed Wilcoxon signed-rank (paired samples) tests were further performed to assess the variation in biomass prediction between the two interchangeable methods: (1) using the direct biomass allometric equation, and (2) using the BEF-biomass method.
Here, n = number of trees; Yp = predicted value from the model; Yo = observed value from field measurement; Ȳ = mean of the observed values.
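The paired comparisons can be sketched as follows; the prediction vectors are placeholders, and the tests shown (SciPy's paired t-test and Wilcoxon signed-rank test) simply mirror the procedure described above.

```python
import numpy as np
from scipy import stats

# Sketch of the paired comparisons described above (placeholder numbers only):
# predictions from two models for the same sample trees.
agb_model_site = np.array([3.4, 11.2, 20.1, 40.5, 80.0, 128.5, 198.2])   # e.g. the site-specific model
agb_model_lit  = np.array([4.8, 15.0, 26.3, 55.1, 99.0, 160.7, 241.9])   # e.g. a literature model

# Two-tailed paired t-test between the two sets of predictions
t_stat, p_t = stats.ttest_rel(agb_model_site, agb_model_lit)
# Two-tailed Wilcoxon signed-rank test (paired samples)
w_stat, p_w = stats.wilcoxon(agb_model_site, agb_model_lit)

print(f"paired t-test:  t = {t_stat:.2f}, p = {p_t:.4f}")
print(f"Wilcoxon test:  W = {w_stat:.2f}, p = {p_w:.4f}")
```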
Biomass allometric model
The computed coefficients and model selection parameters for the candidate AGB models are listed in Table 3. The correlation between observed and predicted biomass values is shown in Figure 2, while the distribution of residuals is presented in Figure 3. Among the nine regression equations considered in this study, the power models MD2 and MDH2 performed best (Table 3) and were therefore primarily shortlisted as the potential best-fit models. Furthermore, in order to determine the overall best model between the two power models, we examined the significance of including the TH parameter in MDH2, which was the only difference between the two models. It was observed that although MDH2 had a lower RMSE value of 4.17, its regression intercept and slope for TH were not statistically significant at the 95% confidence level. Overall, the MD2 model was chosen as the best-fit AGB allometric model (Figure 4) for Sengon in smallholder plantations in Central Kalimantan, Indonesia. This decision was based on its lower AIC and BIC values compared to MDH2, the nonsignificant slope reported for TH in MDH2, the nonsignificant regression intercept of MDH2, and the challenges of measuring TH in a forested environment. The validation of the linearity assumption between observed and predicted biomass values for MD2 (Figure 2) and the absence of evidence for autocorrelation in its residuals (Runs test, p > 0.05) provide further confirmation of the statistical credibility of the model. The normality of the residuals was not a principal assumption considered in this study, as it is not strictly applicable under biological growth conditions, in nonlinear regression fitting, or when the absence of autocorrelation in the residuals has already been confirmed.
Comparison of biomass allometric models
The predictive accuracy metrics of the AGB allometric models considered from the literature and of the model identified in this study (MD2) are presented in Table 4 and Figure 5. In this study context, the models from the literature were tested and their predicted AGB had an RMSE ranging from 18.03 to 129. The equation from Chave et al. (2005, 2014) provided a reasonable estimate of AGB, comparable to the other commonly used pantropical models we considered (RMSE = 10.07, ME = 0.97, and MPE = 7.29%) (Table 4 and Figure 5). Although its performance was not significantly different from MD2 (t-test, p > 0.05), it slightly overestimated the AGB (Figure 6). This relatively good fit may be attributed to the higher degree of generalizability of the model across various pantropical forest types and its comprehensive representation of diverse growth patterns and datasets, despite its original development in tropical natural forest contexts. On the other hand, the pantropical equation by Pearson et al. (2005) exhibited a substantial overestimation trend with the highest level of bias (RMSE = 129, ME = −4.57, and MPE = −117.21%), followed by Brown (1997) with RMSE = 111.77, ME = −3.18, and MPE = −107.70% (Table 4 and Figure 5). When compared with MD2, both models (Brown 1997; Pearson et al. 2005) showed statistically significant differences (t-test, p < 0.05) (Figure 6). Considering the existing AGB allometric models specific to Sengon in Indonesian (Java) plantations, Siregar (2007) and Rusolono (2006) tended to overestimate AGB compared to MD2 (Figure 5), although the differences were not statistically significant (t-test, p > 0.05) (Figure 6). In comparison with Rusolono (2006), Siregar (2007) recorded a lower RMSE (18.03) and a higher ME (0.89), approaching the predictive accuracy of MD2 and Chave et al. (2005, 2014). However, the absolute MPE (9.66%) of Rusolono (2006) was lower than the absolute MPE (43.12%) of Siregar (2007) (Table 4).
Indirect biomass estimation
The AGB of each sample tree was estimated indirectly using BEF, WBD, and stem bole volume, which was computed as a part of this study (Equations (5) and (6)). The stem bole volume was estimated employing Smalian's formula (Equation (2)) and two existing merchantable volume models developed by Siswanto (2008) in the Indonesian plantation context (Table 2). This indirect approach of AGB estimation was performed to assess whether there was any significant discrepancy between the identified AGB allometric model (MD2) and the indirect approach. The average BEF and WBD values (mean ± SE) obtained in this study were 1.69 ± 0.05 and 282.52 ± 12.39 kg m⁻³, respectively. The maximum AGB (mean ± SE) was estimated from the stem bole volume computed by Siswanto (B), with an average of 82.03 ± 17.73 kg. This was followed by Siswanto (A) (76.22 ± 16.46 kg) and Smalian (51.22 ± 10.31 kg). The mean AGB estimated using Smalian's volume was lower than the mean value obtained by MD2 (55.11 ± 11.68 kg); yet, it was the closest compared to the other two Siswanto estimates (Table 5). Generally, there was no significant difference in predicting AGB between the considered approaches of determining stem bole volume and MD2 (Wilcoxon, p > 0.05) (Figure 7). This elucidates the rationale for choosing the WBD and BEF values computed in this study over literature values, as well as validates the accuracy of the proposed AGB allometric model (MD2) for Sengon in the study context.
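As an illustration of this indirect chain of calculation, the sketch below computes stem bole volume by section-wise application of Smalian's formula and converts it to AGB with the site-level WBD and BEF values reported above. The section diameters and lengths are placeholders, and the exact forms of Equations (2), (5) and (6) in the study are not reproduced here.

```python
# Illustrative indirect AGB estimate (assumed workflow, not the study's code):
# Smalian's formula for stem bole volume, then AGB = volume * WBD * BEF.
import math

def smalian_volume(diameters_cm, section_length_m):
    """Sum of section volumes V = (A_bottom + A_top) / 2 * L, in cubic metres."""
    areas = [math.pi * (d / 100.0) ** 2 / 4.0 for d in diameters_cm]
    return sum((a1 + a2) / 2.0 * section_length_m
               for a1, a2 in zip(areas[:-1], areas[1:]))

# Hypothetical over-bark diameters measured every 2 m along the bole (cm).
diameters = [24.0, 21.5, 19.0, 16.0, 12.5, 8.0]
volume = smalian_volume(diameters, section_length_m=2.0)

wbd = 282.52   # site-level wood basic density from this study, kg per m^3
bef = 1.69     # site-level biomass expansion factor from this study

agb = volume * wbd * bef
print(f"Stem bole volume = {volume:.3f} m^3, indirect AGB = {agb:.1f} kg")
```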
Biomass allometric model
In this study, a biomass allometric model for Sengon in smallholder plantations in Central Kalimantan, Indonesia, was developed to estimate total AGB (hence, AGB carbon stock). Accurate AGB estimation through allometric models is a widely recognized approach for understanding the benefits of forest restoration or management actions in mitigating climate change. It also plays a crucial role in securing financial support from international carbon markets as part of national- and community-level livelihood strategies (Aukland et al. 2003; Chave et al. 2005; Gibbs et al. 2014). The form of the fitted equation also affects the quality of the estimates. For instance, the ubiquitous log-linear transformation of complex regression equations is assumed to influence predictive accuracy while fitting the dataset (Krisnawati et al. 2012; Sileshi 2014), in spite of its ability to enhance the homoscedasticity of residuals (Sileshi 2014; Saha et al. 2021). In contrast, the findings of this study support the use and suitability of nonlinear regression functions to link dependent and independent variables without considerable alterations, such as natural log-linear transformation. Their predictive accuracy could vary depending on the specific type of regression function used (Litton and Kauffman 2008; Sileshi 2014; Lisboa et al. 2018).
The selected AGB allometric model in this study (MD2) aligns well with the findings of several other studies which concluded that a nonlinear power regression equation with DBH as the sole explanatory variable provides more accurate AGB estimates (Brown 1997; Mugasha et al. 2013; Mate et al. 2014; Mwakalukwa et al. 2014; Kachamba et al. 2016; Mugasha et al. 2016; Lisboa et al. 2018). Other researchers have also highlighted the superior efficiency of power equations in estimating AGB regardless of the number of predictors involved (Niklas 2006; Hauk et al. 2015). These reports support the notion that plant growth is a multiplicative and nonlinear process in nature (West et al. 1999; Marquet et al. 2005; Packard 2014). Dendrometric variables, such as tree diameter and height, are commonly regarded as independent variables in biomass allometric models (Brown 1997; Chave et al. 2005; Pearson et al. 2005; Ravindranath and Ostwald 2007; Pearson et al. 2013; Chave et al. 2014). In the fitted model (MD2) recommended in this study, tree height was not included as an explanatory variable. While some researchers argue that using DBH as the only independent variable results in a reliable biomass allometric model (Williams et al. 2005; Kebede and Soromessa 2018), other studies have found that including tree height improves estimate accuracy and model fit (Chave et al. 2005; Tumwebaze et al. 2013). However, the findings of this study indicate that including both DBH and TH as explanatory variables (MDH1 and MDH2) did not significantly improve the predictive accuracy compared to MD2 (Table 3). This is parallel to the findings of Jenkins et al. (2003), Johansson (1999), Lisboa et al. (2018), and Porte et al. (2002). According to Ebuy et al. (2011), a model that solely employs DBH is more suitable for handling data from forest inventories. Unlike DBH, height is not directly measured in forest inventories, which makes it more susceptible to measurement errors and unstandardized height measurement (Sileshi 2014; Lisboa et al. 2018; Magalhães et al. 2021). Information on tree height is particularly important in diameter-height and BEF models, which serve purposes other than direct computation of AGB (Lisboa et al. 2018; Mahmood et al. 2020). Despite having a higher probability of prediction, models that incorporate tree height might yield biased results due to measurement errors in tree height (Magalhães et al. 2021). Although Chave et al. (2005) employed tree height to estimate AGB in pantropical moist forests, Feldpausch et al. (2012) emphasized that doing so led to a 13% underestimation of carbon storage.
In cases where tree height is preferred to be integrated in biomass modeling, it is commonly taken into account either as a separate variable in addition to DBH (as in MDH2) or as a combined variable (DBH² × TH) (as in MDH1) (Sileshi 2014; Magalhães et al. 2021). In our study, including tree height separately as a second variable (MDH2) performed better than the combined form (MDH1) (Table 3). While this finding is consistent with Magalhães et al. (2021), Monika et al. (2015), and Vahedi et al. (2014), it contradicts Bi et al. (2004) and Carvalho and Parresol (2003), who reported that the model with a combined predictor (DBH² × TH) produced better estimations. Beyond the variables considered in this study, other predictors that could be used to develop and improve the accuracy of biomass allometric models include wood density, crown ratio, and taper attributes (Temesgen et al. 2015), depending on the desired precision and the accessibility of these variables in the inventory contexts.
The precision of biomass estimation should be improved by formulating equations that compensate for the causes of variance in the allometric coefficients (Brown et al. 1989; Chave et al. 2014; Mukuralinda et al. 2021). In this study, since the sample trees were taken from the same ecoregion, wood density was not taken into account as an additional independent variable while developing the model, which is aligned with Magalhães et al. (2021).
Comparison of biomass allometric models
It is imperative to contrast the application of a site- and species-specific model for estimating AGB with that of the common pantropical and regional allometric AGB models in order to validate the accurate estimation of biomass. Since tropical forests have a higher species diversity, generalized allometric models for tropical species have drawn a great deal of attention. There have been a number of models for predicting AGB that involve mixtures of tropical species (Haase and Haase 1995; Brown 1997; Nelson et al. 1999; Chambers et al. 2001; Ketterings et al. 2001; Chave et al. 2005, 2014; Pearson et al. 2005). Among these models, the most well-known are those developed by Brown (1997), Chave et al. (2005, 2014), and Pearson et al. (2005), and they were considered in this study. We compared the pantropical allometric AGB models with the developed site- and species-specific AGB model (MD2) following ME, RMSE, and MPE. The most widely used pantropical model by Chave et al. (2005, 2014) was observed to be more accurate than the other considered pantropical models, even though it slightly overestimated biomass in this study (Table 4). This result is in agreement with the hypothesis that the Chave et al. (2005, 2014) model performs well for different forest types in several regions in South-East Asia (Rutishauser et al. 2013) and Africa (Vieilledent et al. 2012; Fayolle et al. 2013). The pantropical AGB allometric models from Pearson et al. (2005) and Brown (1997) did not match the context of the study; however, according to Lisboa et al. (2018), Brown's (1997) model was found to fit well in a mountain moist forest in Mozambique.
Biomass accounting using allometric models is complicated due to the major influences of various factors, such as tree species, topography, temperature, and rainfall (Chave et al. 2014). Moreover, the applicability of allometric models can be explained by factors like habitat type (i.e. forests, plantations, or agroforestry), tree growth forms, and site characteristics (Brown et al. 1989; Brown 1997; Chave et al. 2005; Henry et al. 2010; Alvarez et al. 2012; Rutishauser et al. 2013). However, Gibbs et al. (2007) asserted that developing a biomass allometric equation particular to a species or site would not typically increase the accuracy of AGB estimation. Contrary to the findings of Gibbs et al. (2007), this study's results regarding the performance of the common pantropical models are consistent with the findings of several other studies (Clark et al. 2001; Pilli et al. 2006; Basuki et al. 2009; van Breugel et al. 2011; Alvarez et al. 2012; Ngomanda et al. 2014; Hossain et al. 2021), which found that the pantropical models did not accurately capture the variability of biomass estimation on the global scale. Two of the pantropical models that were put to the test (i.e. Brown 1997; Pearson et al. 2005) demonstrated this, while the moist pantropical model of Chave et al. (2005, 2014) did better than them and could be used in this study region provided a site-adapted wood density value is considered (Table 4 and Figure 5). Regardless, our proposed model (MD2) provides comparatively more accurate estimation, is more convenient, and is best suited for the Sengon forests in the Kalimantan region. When the existing AGB models for Sengon developed by Rusolono (2006) and Siregar (2007) in Java were considered, a trend of overestimation was observed (Figure 5). This indicates that these models were not well suited for estimating AGB in Sengon smallholder plantations in Central Kalimantan, where pronounced climatic and site differences exist compared to the Java province where the models were developed, even though the differences were not statistically significant.
Indirect biomass estimation
Estimating the biomass of a comparatively larger area, such as the entire country or a specific region, might be impractical using site- and species-specific allometric models for each species (Brown 1997; Komiyama et al. 2005; Mahmood et al. 2016). In these circumstances, the wood density, BEF, stem volume, and form factor are used to estimate the AGB (Nogueira et al. 2008; Lisboa et al. 2018; Mahmood et al. 2020), as demonstrated in this study. The previously recorded wood densities for Sengon at Indonesian plantation sites, 271 kg/m³ (Budiman et al. 2020) and 230-500 kg/m³ (Varis 2011), can be compared with the average WBD obtained in this study (0.28 g/cm³ or 282.52 kg/m³) (Table 5). Muller-Landau (2004) recommended that the site average WBD value should be weighted by wood volume for biomass computations, thus validating the use of 282.52 kg/m³ as WBD for estimating Sengon AGB in this study.
In contrast to the previously recorded BEF value for Sengon in Indonesia, 1.34 (Rusolono 2006), and the BEF value suggested by the Intergovernmental Panel on Climate Change (IPCC) for tropical broadleaf species, 3.4 (2.0-9.0) (Penman et al. 2003), this study computed an average BEF of 1.69 (Table 5). Although the BEF estimate at the study location was lower than the IPCC's value for tropical broadleaf forest stands, it was comparable to the value reported by Rusolono (2006). Tropical forests, as opposed to temperate forests, have larger tree crowns, which, according to Brown (2002), leads to higher BEF for a given volume and tree size. The average BEF values for primary, secondary, and nonproductive rainforests in Sri Lanka, as per Brown et al. (1989), were 2.02, 2.26, and 4.48, respectively. These figures indicate that the mean BEF found in this study is even lower than the values for Sri Lanka's primary and secondary forests, possibly as a result of frequent management interventions at the FSF plantation site, where trees prefer to grow taller with clear boles rather than large crown sizes. However, the mean BEF value computed in this study is comparable to the mean BEF value found by Segura and Kanninen (2005) in Costa Rica's tropical humid forest (1.60).
In order to assess the applicability of existing merchantable volume models (Siswanto 2008) for the studied species in the Indonesian plantation context, as well as Smalian's volume, in estimating AGB, the BEF and WBD for Sengon were initially estimated at the site level, similar to Lisboa et al. (2018). Although the differences were not statistically significant, the two existing merchantable volume models tended to overestimate AGB with a relatively greater deviation from the AGB allometric model (MD2) developed in this study. This is consistent with the findings of Lisboa et al. (2018) on the relatively higher biomass accounting tendency of indirect BEF-biomass approaches. However, Smalian's volume somewhat underestimated AGB, close to the accuracy of the model (MD2), which is in opposition to the results of Lisboa et al. (2018). The accuracy of the suggested model (MD2) is further confirmed by this closer estimation of AGB utilizing Smalian's volume coupled with site-adapted mean BEF and WBD values. Thus, in addition to the developed biomass model (MD2), this study recommends the use of merchantable volume derived from Smalian's formula with computed WBD and BEF values at the site and species level in AGB estimation.
Conclusion and recommendations
Developing accurate allometric equations is crucial for estimating tree biomass in forest environments, as it indirectly aids in monitoring and assessing the global carbon cycle. However, employing the general pantropical allometric equations poses a drawback in accurately estimating biomass for specific species. This issue becomes particularly problematic for the estimation of carbon stock in tropical forests in Asia or South-East Asia due to their biological heterogeneity and variety of species. The primary objective of this study was to develop a biomass allometric model for the total AGB estimation of Sengon in smallholder plantations in Central Kalimantan, Indonesia. This served as an initial step toward accurately computing AGB carbon stock, and ultimately providing carbon compensation from global carbon markets. The best-fit AGB allometric model was developed using the power regression equation (MD2), which only used DBH as an independent variable, Y = 0.08062 × DBH^2.36816, and displayed the highest predictive accuracy metrics. The inclusion of TH as an additional independent variable with DBH did not significantly improve the model's fit. As a result, this study advises using the model with DBH alone as the predictor rather than incorporating both DBH and TH. Comparing global and regional AGB allometric models revealed that the selected model in this study (MD2) was more accurate for predicting the AGB of trees based on field observations. This indicates that the proposed model is valid and can be used to estimate AGB in the study area and plantations within the Kalimantan region for the given species with a higher level of accuracy compared to the global and regional models. Furthermore, the average BEF value from this study (1.69) can be utilized to estimate AGB by converting the merchantable volume from forest inventory data and Smalian's formula, as illustrated by the nonsignificant difference to MD2. However, BEF overestimated the AGB when existing merchantable volume models from Java were taken into account, even though the differences were not statistically significant. To recapitulate, the findings of this study will contribute to the long-term management of smallholder Sengon plantations as a strategy to combat climate change and will provide a solid foundation for estimating sequestered AGB carbon. Moreover, this study's model could be improved with additional data, especially with larger trees, which were not available at the study site. Future research could consider a comprehensive review of site characteristics and their inclusion in allometric model fitting for even more accurate biomass estimation and to account for growth variance. It is also recommended, if feasible, to validate the suggested model using a separate destructive dataset.
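For a reader who wants to apply the selected equation directly, a minimal usage sketch follows. The diameters are placeholders, and the units (DBH in cm, AGB in kg) are assumed from the way results are reported in this study.

```python
# Applying the reported MD2 equation, AGB = 0.08062 * DBH**2.36816
# (DBH assumed in cm and AGB in kg); the example diameters are hypothetical.
def agb_md2(dbh_cm: float) -> float:
    return 0.08062 * dbh_cm ** 2.36816

for dbh in (10.0, 20.0, 30.0):
    print(f"DBH = {dbh:4.1f} cm  ->  AGB = {agb_md2(dbh):7.1f} kg")
```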
Figure 2. Observed vs predicted plots of the AGB candidate models.
Figure 3. Residuals' distribution of the AGB candidate models.
Figure 5. Graphical representation of the predictive accuracy of the allometric model identified in this study (MD2) against those considered from the literature.
Figure 6. Illustration of the statistical significances among the AGB allometric models under consideration (* significant at α = 0.05; ns: not statistically significant at α = 0.05).
Figure 7. Illustration of the statistical significances among the indirect methods of AGB estimation in comparison to this study model (MD2).
Table 1. Candidate models for biomass allometric equation development.
Table 2. Considered models for comparison.
Table 3. Computed coefficients and comparative statistics of the AGB candidate models. a Indicates the best-fit model identified in this study. *** Significant at α = 0.001, ** significant at α = 0.01, * significant at α = 0.05; ns: not statistically significant at α = 0.05.
Table 4. Predictive accuracy metrics of the model identified in this study (MD2) and the models considered from the literature.
Table 5. Descriptive statistics of BEF, WBD and stem bole volume, and the AGB estimation using the BEF-biomass method and this study model (MD2).
Physical Characterization of Gemini Surfactant-Based Synthetic Vectors for the Delivery of Linear Covalently Closed (LCC) DNA Ministrings
In combination with novel linear covalently closed (LCC) DNA minivectors, referred to as DNA ministrings, a gemini surfactant-based synthetic vector for gene delivery has been shown to exhibit enhanced delivery and bioavailability while offering a heightened safety profile. Due to topological differences from conventional circular covalently closed (CCC) plasmid DNA vectors, the linear topology of LCC DNA ministrings may present differences with regards to DNA interaction and the physicochemical properties influencing DNA-surfactant interactions in the formulation of lipoplexed particles. In this study, N,N-bis(dimethylhexadecyl)-α,ω-propanediammonium(16-3-16)gemini-based synthetic vectors, incorporating either CCC plasmid or LCC DNA ministrings, were characterized and compared with respect to particle size, zeta potential, DNA encapsulation, DNase sensitivity, and in vitro transgene delivery efficacy. Through comparative analysis, differences between CCC plasmid DNA and LCC DNA ministrings led to variations in the physical properties of the resulting lipoplexes after complexation with 16-3-16 gemini surfactants. Despite the size disparities between the plasmid DNA vectors (CCC) and DNA ministrings (LCC), differences in DNA topology resulted in the generation of lipoplexes of comparable particle sizes. The capacity for ministring (LCC) derived lipoplexes to undergo complete counterion release during lipoplex formation contributed to improved DNA encapsulation, protection from DNase degradation, and in vitro transgene delivery.
Introduction
Gene therapy offers tremendous potential for the treatment of numerous diseases with demonstrated applications in vaccine development. Despite continuing successes of viral based gene therapeutics achieving significant clinical outcomes [1][2][3][4], these highly efficacious vectors present important safety concerns with respect to undesired immunostimulatory effects and/or insertional mutagenesis [5][6][7][8]. Furthermore, the application of viral vectors is hindered by limited repeat administrations due to pre-existing immunity, size of delivered gene construct, scale-up, as well as high production costs, contamination during production, and lack of desired tissue selectivity [5,9]. Non-viral delivery vectors are generally advantageous over viral vectors with respect to safety, production costs, scalability, the ability to transfect larger sized DNA, and adaptability for different delivery options (e.g. targeted delivery, time-dependent release, enhanced circulation times, repeat administrations) [9,10]. However, while preferential from a safety perspective, non-viral systems generally suffer associated low transfection efficiencies, an important obstacle that must be addressed in order for such systems to be recognized as effective vehicles for gene delivery.
Extensive efforts have been focused into the rational design of effective synthetic vectors with the capacity for DNA compaction and encapsulation, targeted delivery, cellular uptake and internalization, endosomal escape, and nuclear localization. Such efforts have culminated into the design and application of numerous cationic compounds as gene delivery vectors which contributed to the development of commercial cationic lipids, including Lipofectamine™ and Lipofectin R , suited for gene delivery. In consideration to the relatively high cost and short shelf-life associated with commercial vectors, cationic gemini surfactants have been synthesized as potential candidates for non-viral delivery. Gemini surfactants are amphiphilic molecules composed of two surfactant monomers (cationic, anionic, or neutral) chemically linked by a spacer (Fig 1). Gemini surfactants confer advantages of reduced cytotoxicity and cost effectiveness as they possess a critical micelle concentration (CMC) that is one to two orders of magnitude lower than their monomer counterparts [11][12][13]. Gemini surfactant derived synthetic vectors offer numerous advantages including: 1) high positive charge for effective DNA complexation at low concentrations; 2) efficient DNA compaction generating smaller complexes than their monomeric counterparts; 3) effective endosomal escape; and 4) suitability for long term storage in lyophilized formulations, over two months at ambient temperatures, without losing functionality [14,15]. As such, different formulations of gemini surfactants, from traditional cationic m-s-m or N,N-bis(dimethylalkyl)-α,ω-alkanediammonium surfactants (where m and s represent the number of carbon atoms in the alkyl tails and the polymethylene spacer group) to peptide or carbohydrate based compounds, have been previously studied for applications in gene therapy [12].
Among the different m-s-m gemini surfactants, the 16-3-16 derivative has been extensively studied due to its structural nature, promoting effective DNA complexation, and its capacity to adopt structural polymorphisms critical to endosomal escape and successful gene delivery. The 16-3-16 gemini surfactant possesses a trimethylene spacer (s = 3) that provides compatible head group distances (~0.49 nm) with the spacing of phosphate groups (0.34 nm) in DNA [16]. The increased positive charge (relative to monomeric surfactants and lipids) promotes efficient DNA binding and compaction, generating particles suitable for gene delivery. Numerous reports have previously indicated the ability of 16-3-16 gemini-based lipoplexes, in combination with 1,2-dioleoyl-sn-glycero-3-phosphatidylethanolamine (DOPE) neutral lipid, to form higher ordered phase structures including inverted hexagonal and cubic phase structures [12,[17][18][19]. Such structures are highly dependent on the lipoplex composition with hexagonal structures predominantly present at high mol ratios of DOPE and cubic phase structures at high mol ratios of gemini surfactant [18]. The ability of such gemini-based lipoplexes to adopt structural polymorphisms is considered to be one of the most important factors contributing to improved gene delivery [9,11,12,16,[18][19][20].
Highly efficacious gene therapeutics demand contributions from sound design of both the synthetic vector as well as the enclosed DNA cargo. Conventional recombinant plasmid DNA (pDNA) employed in non-viral gene delivery typically consists of two essential components: i) an eukaryotic expression cassette for the expression of the gene of interest, and ii) a prokaryotic backbone with an origin of replication for plasmid amplification and an antibiotic resistance gene cassette for selection [21]. While safer than their viral counterparts, non-viral delivery of such circular covalently closed (CCC) pDNA vectors, alone or packaged within synthetic vectors, offers a limited safety profile as they often result in the transfer of antibiotic resistance genes as well as other unwanted prokaryotic sequences with CpG motifs. The unnecessary delivery of antibiotic resistance genes may enable horizontal gene transfers, giving rise to antibiotic resistant pathogens. Unmethylated CpG dinucleotides, or CpG motifs, have the potential for eliciting immunostimulatory responses which reduce the efficacy of the gene therapeutic and may induce detrimental effects in the treated host [22][23][24][25]. Hence, the removal of the prokaryotic backbone in the generation of linear covalently closed (LCC) DNA minivectors serves the dual purpose of enhancing the safety of the delivered vector while improving the delivery process through the formation of smaller vectors that increase extracellular and intracellular bioavailability [21,26].
LCC DNA minivectors are small, dumbbell shaped vectors possessing hairpin ends enclosing an eukaryotic expression cassette. The hairpin loops offer vast improvements in protection from exonucleases, conferring greater stability and addressing an issue that drastically hinders the successful delivery of linear DNA. LCC DNA vectors were shown to exhibit enhanced transgene expression over CCC pDNA counterparts as demonstrated by cytoplasmic and nuclear microinjections along with transfection using Lipofectamine™ [27][28][29]. In addition, LCC DNA minivectors offer a heightened safety profile as insertional mutagenesis is inhibited by the covalently closed terminal ends; in the low frequency event of chromosomal integration, these ends confer double-strand breaks that cause chromosomal disruption and cell death [26,29].
We previously described an E. coli based one-step in vivo LCC DNA minivector production system for facile and efficient means of producing LCC DNA minivectors from parental CCC pDNA substrates (Fig 2) [26,28]. The parental pDNA (Fig 3) is composed of an eukaryotic expression cassette flanked by two multi-target sites, called "Super Sequence" (SS), acting as recognition sites for PY54 bacteriophage derived Tel protelomerase. Temperature induced, in vivo expression of Tel protelomerases and their subsequent enzymatic activity, for excision and resolution of covalently closed terminal ends, result in the conversion of parental CCC pDNA into two smaller species: 1) a LCC backbone DNA carrying the unnecessary prokaryotic backbone, and 2) a LCC DNA minivector referred to as DNA ministrings [26]. Application of the novel in vivo DNA minivector production system permits the production of LCC DNA ministrings as well as the generation of safe and effective lipid-based synthetic vectors upon lipoplex formation with gemini surfactants.
Conventional gemini-based synthetic vectors for gene delivery generally consist of CCC pDNA vectors that differ in linear topology, DNA interactions, and physicochemical properties in comparison to LCC DNA ministrings. In light of these differences, we sought to characterize and compare the physical properties of the resulting lipoplexes after complexation with 16-3-16 gemini surfactants. Despite the size disparities between pDNA vectors (CCC) and DNA ministrings (LCC), differences in DNA topology resulted in the generation of lipoplexes of comparable particle sizes.
Strains and Plasmids
The pNN9 vector [26] was used as the parental pDNA substrate for the production of LCC DNA ministrings and for the generation of CCC pDNA derived lipoplexes. E. coli K-12 strains were used to generate all recombinant cell constructs and JM109 was employed as hosts for plasmid amplification.
Production of CCC pDNA, LCC DNA Products and LCC DNA Ministrings

E. coli JM109 was used for amplification of the pNN9 parental vector (5.6 kb) (Table 1). A single colony of JM109[pNN9] was grown overnight in 5 ml Luria Bertani (LB) + ampicillin (Ap).

Fig 2. One-step in vivo LCC DNA minivector production system. The in vivo production system involves a recombinant E. coli for thermoregulated expression of Tel protelomerase. In the temperature inducible system, protelomerase expression is repressed by a CI[Ts]857 repressor at temperatures below 37°C. Temperature upshift to 42°C causes instability and dissociation of the thermolabile repressor, which allows for controlled expression of protelomerase. Subsequent enzymatic activity of the expressed protelomerase on parental pDNA vector substrates results in DNA processing into LCC DNA ministrings.
The one-step in vivo LCC DNA minivector production system, Tel + W3NN[pNN9] E. coli, was used for the production of enhanced green fluorescent protein (eGFP) LCC DNA ministrings. A single colony of Tel + W3NN[pNN9] was grown overnight in 5 ml LB + Ap (100 μg/ml) under repressed conditions at 30°C with aeration. Two batches of fresh cells were grown from the overnight culture at 1:100 dilution of 50 ml LB + Ap (100 μg/ml) in 250 ml Erlenmeyer flasks at 30°C to late log phase, A600 = 0.8. Cells were then collected, centrifuged at 4K RPM for 10 min, and re-suspended in 1 ml of LB + Ap. The re-suspensions were added into a preheated 2 L Erlenmeyer flask containing 500 ml of LB + Ap (100 μg/ml) for incubation at 42°C until A600 = 1.0, followed by an additional 60 min incubation under the same conditions. Cultures were subjected to gradual temperature downshift and grown at 30°C overnight. Cells were harvested and plasmid extracted with the E.Z.N.A. Plasmid Maxi-Prep Kit (Omega, VWR). The 2.4 kb LCC DNA ministrings were subsequently purified using agarose gel electrophoresis.
Characterization of 16-3-16 Gemini-based Lipoplexes
Particle Size and Zeta Potential. Particle sizes for DNA, 16-3-16, DOPE, and resulting DNA/16-3-16 & DNA/16-3-16/DOPE lipoplexes were measured by dynamic light scattering using a Malvern Zetasizer Nano ZS instrument (Malvern instruments, UK). Particle size distributions were obtained from light scattering (θ = 173°) in water at 25°C and the measured sizes were reported using a percent volume distribution. Samples were measured in triplicates of triplicate and the resulting averages were reported.
Zeta potential (z) for the abovementioned samples was measured by Laser Doppler Electrophoresis using zeta potential capillary cells and a Malvern Zetasizer Nano ZS instrument (Malvern instruments, UK). All measurements were made at 25°C and samples were measured in triplicates of quintuplicate with averages being reported.
DNase Sensitivity Assay. The DNase sensitivity assay involved the incubation of lipoplexes with DNase I (1 unit per 1 μg DNA) (Promega) and the DNase reaction buffer (Tris-HCl, MgSO4, CaCl2) for 30 minutes at 37°C. Subsequently, DNase I was inactivated by the addition of DNase stop solution (ethylene glycol tetraacetic acid, EGTA) and denatured upon 10 minute incubation at 60°C. Lipoplexes were disrupted with the addition of phenol:chloroform:isoamyl alcohol (25:24:1, v/v) (Invitrogen) for the recovery of non-degraded DNA upon centrifugation. The extent of DNase I induced degradation was assessed by agarose gel electrophoresis upon equal loading across all samples.
In vitro Transgene Delivery Assay. Human-derived ovarian cancer cells, OVCAR-3 (Invitrogen), were grown in RPMI + GlutaMAX supplemented with 20% fetal bovine serum, 100 μg/ml streptomycin, and 100 IU/ml penicillin. All cell culture reagents and cell culture equipment were provided by Life Technologies (Carlsbad, CA) and VWR (Radnor, PA), respectively. Cationic lipid transfection reagents Lipofectamine™ LTX and Plus reagents were obtained from Invitrogen. To transfect cells, 5.0 × 10⁵ OVCAR-3 cells were seeded into 24-well culture plates 24 h before transfection in complete media without antibiotic. One hour prior to transfection, the culture medium was replaced with serum-free RPMI medium. 0.4 μg of DNA (pNN9 or DNA ministring), diluted in 50 μl of serum-free OptiMEM culture medium, was mixed with different aliquots of 1.5 mM 16-3-16 gemini surfactant solution to yield N⁺/P⁻ charge ratios of 3:1 and 5:1. After 15 min incubation at room temperature, appropriate aliquots of 1 mM DOPE were incorporated to achieve a constant gemini to DOPE ratio of 1:2.5. The subsequent complexes were further incubated for 30 minutes at room temperature. Cationic complexes of Lipofectamine™ LTX were prepared according to the manufacturer's protocol with no deviation. The mixture of pNN9/16-3-16 & pNN9/16-3-16/DOPE and DNA ministring/16-3-16 & DNA ministring/16-3-16/DOPE complexes was added drop-wise to each well (duplicate). The plate was centrifuged for 5 min at 200 RPM, at room temperature, prior to incubation at 37°C. At 5 hours post transfection, the transfected media was replaced with fresh complete media free of antibiotic. Transfection efficiency was assessed by flow cytometry after subsequent 48 h incubation at 37°C.
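A back-of-the-envelope estimate of the surfactant aliquot needed for a target N⁺/P⁻ charge ratio can be sketched as below. The average nucleotide mass of about 330 g/mol (one phosphate per nucleotide) and the assumption of two cationic head groups per 16-3-16 molecule are generic textbook values, not figures taken from this study, so the printed volumes are illustrative only.

```python
# Illustrative N+/P- charge-ratio calculation (assumptions: ~330 g/mol per
# nucleotide, i.e. one phosphate each, and 2 cationic head groups per 16-3-16).
AVG_NUCLEOTIDE_MW = 330.0      # g/mol, generic average for DNA
CHARGES_PER_GEMINI = 2

def gemini_volume_ul(dna_ug, charge_ratio, gemini_mM):
    """Volume (uL) of gemini stock giving the requested N+/P- charge ratio."""
    mol_phosphate = (dna_ug * 1e-6) / AVG_NUCLEOTIDE_MW      # mol of P- charges
    mol_gemini = charge_ratio * mol_phosphate / CHARGES_PER_GEMINI
    litres = mol_gemini / (gemini_mM * 1e-3)                  # V = n / C
    return litres * 1e6

for ratio in (3, 5):
    vol = gemini_volume_ul(dna_ug=0.4, charge_ratio=ratio, gemini_mM=1.5)
    print(f"N+/P- = {ratio}:1  ->  ~{vol:.2f} uL of 1.5 mM 16-3-16")
```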
Flow Cytometry. Transfection efficiency was determined 48 h after transfection by flow cytometry. Cells were trypsinized, washed with PBS, and counted. Data were collected from 10⁴ events. Cells were stained with ten microliters of the cell-membrane-impermeable, intercalating red fluorescent dye propidium iodide (PI) (20 mg/ml, Sigma-Aldrich, St Louis, MO) to measure cytotoxicity after transfection by excluding dead cells from viable cells. Untreated cells served as controls for cytotoxicity and GFP expression. GFP expression levels were calculated by multiplying the mean relative fluorescence values of transfected cells by the percentage of transfected cells. This parameter is considered to be directly proportional to the total amount of produced transgene product. All data were expressed by GraphPad as mean ± SEM. Statistical differences were determined using a two-way ANOVA test followed by Bonferroni's post-tests. Significance was set at p < 0.05.
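The expression metric described above can be reproduced from gated flow-cytometry events as in the short sketch below; the simulated intensities, gating thresholds and channel handling are placeholders rather than the study's acquisition settings.

```python
# Illustrative computation of the GFP expression metric described in the text:
# (mean relative fluorescence of GFP-positive viable cells) x (% transfected).
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-event intensities: GFP channel and PI (dead-cell) channel.
gfp = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)
pi = rng.lognormal(mean=1.0, sigma=0.8, size=10_000)

PI_THRESHOLD = 20.0    # placeholder gate: events above this count as dead
GFP_THRESHOLD = 30.0   # placeholder gate: events above this count as transfected

viable = pi < PI_THRESHOLD
transfected = viable & (gfp > GFP_THRESHOLD)

pct_transfected = 100.0 * transfected.sum() / viable.sum()
mean_fluorescence = gfp[transfected].mean() if transfected.any() else 0.0
expression_level = mean_fluorescence * pct_transfected

print(f"viable events: {viable.sum()}, transfected: {pct_transfected:.1f}%")
print(f"GFP expression level (a.u.) = {expression_level:.1f}")
```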
Particle Size and Zeta Potential (ζ)
With regards to the 5.6 kb pNN9 (CCC) and the 2.4 kb DNA ministring (LCC), particle sizes of the two DNA vectors were surprisingly similar despite their inherent differences in DNA composition (Table 2). The differences were accounted for by the supercoiled nature of CCC pDNA contributing to more compact conformations than their linear counterparts. Such notions were supported by the differences in observed zeta potentials (z), as DNA supercoiling in pNN9 (CCC) masked a fraction of the negative charges attained from the phosphate groups of DNA. In contrast, the structural nature of the linear isoforms contributed to more prominent surface charges as indicated by a greater (-) zeta potential. By itself, the 16-3-16 gemini surfactant was able to rapidly self-assemble into micelles due to the high concentration of the stock solution and due to the conditions at which lipoplexes were generated. The 1.5 mM 16-3-16 stock solution was generated at concentrations well above the CMC of the gemini surfactant (0.0255 mM), and as lipoplexes were generated below the Krafft temperature (Tk = 42°C), referred to as the minimum temperature at which the surfactant forms micelles, 16-3-16 micelles were rapidly self-assembled [16,31]. The propensity for the 16-3-16 gemini surfactant to form micelles/vesicles of varying sizes resulted in the observed high polydispersities, as indicated by a PDI value of 0.752.
With respect to DNA/16-3-16 and DNA/16-3-16/DOPE lipoplexes, the two vectors generated lipoplexes of comparable particle sizes and zeta potentials across the three tested charge ratios (Table 3). For DNA/16-3-16 lipoplexes, both CCC/16-3-16 and LCC/16-3-16 lipoplexes exhibited the progressive formation of uniformly sized particles, as indicated by decreasing PDI values, at increasing charge ratios. With respect to DNA/16-3-16/DOPE lipoplexes, substantially larger particle sizes were observed for lipoplexes at charge ratios of 2:1. The large particles exhibited at lower charge ratios were likely the result of aggregation upon charge neutralization and subsequent addition of more gemini surfactant, at 5:1 and 10:1 charge ratios, resulted in a dramatic decrease in particle sizes with lower polydispersities. In closer inspection of particle size fluctuations across the spectrum of progressively increasing charge ratios (Fig 4), charge neutralization and aggregation for LCC/16-3-16/DOPE lipoplexes was observed to occur at the lower charge ratio of 1:1 whereas significant aggregation of CCC/16-3-16/DOPE lipoplexes occurred at a charge ratio of 2:1. Aggregation of the resulting lipoplexes and interference with light scattering measurements, upon charge neutralization, contributed to large standard deviations and populations of highly variable particle sizes [32,33].
DNA ministrings exhibit improved transfection efficiency. We previously constructed a pGL2 (Promega, Madison, WI) vector derivative that expressed enhanced green fluorescent protein (eGFP) under the control of an SV40 promoter and two specially designed target sequences of the Tel protelomerase referred to as the super sequence (SS) (Fig 3). The derivative LCC DNA ministring was generated from the parent CCC plasmid using a one-step heat-inducible mini DNA vector production system as previously described [28]. Transfection complexes were prepared at DNA/16-3-16 charge ratios of 3:1 and 5:1, with and without DOPE, in accordance with the particle size and DNase sensitivity results. Direct comparison with equal amounts (by weight) of LCC DNA ministrings and conventional CCC parent plasmids indicated no statistically significant differences with respect to gemini-mediated transfection efficiencies (Fig 7-A). However, transfection using Lipofectamine™ indicated that LCC DNA ministrings imparted significantly higher transfection efficiency than their parental CCC counterparts (P < 0.001). No statistically significant differences were observed with respect to gemini-mediated cytotoxicity, in the presence or absence of the helper lipid DOPE, between LCC DNA ministring and CCC pNN9 derived lipoplexes (Fig 7-B).

Fig 4. As lipoplexes approach charge neutralization, a significant increase in particle size led to highly variable particles conferring large aggregate formation. Large aggregates for LCC/16-3-16/DOPE and CCC/16-3-16/DOPE lipoplexes appeared most prominent at charge ratios of 1:1 and 2:1, respectively. Progressive decreases in particle sizes at higher charge ratios led to stable and uniform particle formation. doi:10.1371/journal.pone.0142875.g004

Fig 7. Transfection efficiency was measured as the number of eGFP-expressing cells divided by the total number of cells. PI was added to assess transfection-associated cytotoxicity. All data were expressed by GraphPad as mean ± SEM. Statistical differences were determined using a two-way ANOVA test followed by Bonferroni's post-tests. Stars indicate significantly higher transfection efficiencies of LCC DNA ministrings compared to the pNN9 parent plasmid (P < 0.001). doi:10.1371/journal.pone.0142875.g007
Discussion
With respect to the influences of DNA topology, direct comparisons between CCC pNN9 and LCC DNA ministrings cannot be made due to the inherent differences in the size of the two respective DNA vectors. More apparent differences may have arisen had the parental supercoiled CCC pNN9 been compared with its parental LCC counterpart, however, results from this study denoted certain differences arising from the influences of DNA topological conformations. The supercoiling effect contributed to a lower effective negative charge for CCC pNN9 [34,35] which led to lower surface charges (z = -25 ± 9 mV) when compared to LCC DNA ministrings (z = -35 ± 2 mV). In addition, DNA supercoiling reduced the overall size of the circular plasmid, which contributed to comparable particle sizes between pNN9 and DNA ministrings despite the fact that pNN9 was the larger sized plasmid. Such differences had significant effects on the interactions between DNA and 16-3-16 gemini surfactant in terms of counterion release during lipoplex formation. Previously, DNA/16-3-16/DOPE lipoplexes, comprised of linear calf thymus DNA (ctDNA), demonstrated complete release of Na + counterions during lipoplex formation; in contrast, a significant fraction of counterions remained bound during complex formation of CCC pDNA-derived lipoplexes [34,35]. Counterion displacement was suggested to be inhibited due to the compact conformation of supercoiled CCC pDNA inducing geometric constraints on gemini/DNA interactions.
Resulting particle sizes and zeta potentials for DNA/16-3-16 and DNA/16-3-16/DOPE lipoplexes were in agreement with literature [19,36] as all lipoplexes possessed positive zeta potentials critical to in vitro transfection. Upon inspection of particle size variations for lipoplexes across different charge ratios, both CCC/16-3-16/DOPE and LCC/16-3-16/DOPE exhibited significant increases in particle sizes at charge ratios corresponding to charge neutralization and large aggregate formation. For CCC/16-3-16/DOPE lipoplexes, substantial large aggregation formation was observed at a higher charge ratio of 2:1 in contrast to 1:1 for LCC/16-3-16/ DOPE lipoplexes. Differences may be attributed to the antagonistic interactions between 16-3-16 gemini and DOPE [37] in combination with incomplete counterion release for CCC/16-3-16 lipoplexes, prompting more prominent DOPE induced instabilities that prevented the generation of stable, discrete lipoplex particles. Lipoplex instabilities were exemplified by the lower and highly variable (+) zeta potentials (z = 10 ± 18 mV) for CCC/16-3-16/DOPE lipoplexes at a charge ratio of 2:1. Such zeta potentials were indicative of charge neutrality that contributed to aggregation and the observed large particle sizes.
DNA ministring (LCC) derived lipoplexes exhibited improved DNA encapsulation and protection properties as evidenced by improved DNA recovery upon DNase I exposure. For both CCC/16-3-16/DOPE and LCC/16-3-16/DOPE lipoplexes, the higher charge ratios of 5:1 and 10:1 elicited better DNA encapsulation, protecting the DNA cargo from degradation. However, such protection was more prominent in LCC/16-3-16/DOPE lipoplexes and this was attributed to the higher (-) zeta potential of DNA ministrings and the complete release of counterions during complexation. The highly negative zeta potentials exhibited in DNA ministrings denoted significant surface charges for extensive electrostatic interaction with the positively charged 16-3-16 gemini surfactant, leading to complete counterion release and reduced head group repulsions. Reduced head group repulsion between individual gemini surfactants conferred better encapsulation, effectively protecting the residing DNA from exposure to DNaseI. With regards to LCC/16-3-16 and LCC/16-3-16/DOPE lipoplexes, improved DNA encapsulation and protection for LCC/16-3-16 lipoplexes were attributed to tight associations between DNA ministring and gemini surfactant as supported by high (+) zeta potentials [18].
Conclusion
The differences in topology between conventional CCC pDNA vectors and LCC DNA ministrings influenced the complexation of gemini surfactants during lipoplex formation and the generation of lipid-based synthetic vectors. Such differences contributed to variations in particle size as well as the capacity for effective DNA encapsulation and protection from DNase I degradation. Further investigation, through additional physical characterization (e.g. isothermal titration calorimetry (ITC) & small angle X-ray scattering (SAXS)), will be warranted to fully ascertain the influences of DNA topology on transfection capacities of gemini-based synthetic vectors.
The XBPF, a new multipurpose scintillating fibre monitor for the measurement of secondary beams at CERN
The Beam Instrumentation group at CERN has developed a new scintillating fibre beam monitor for the measurement of secondary particle beams in the CERN Experimental Areas. The monitor has a simple design that stands out for its low material budget, vacuum compatibility, good performance, low cost, and ease of production. By using different read-out techniques the monitor can perform several functions, such as measurement of the profile, position and intensity of the beam, momentum spectrometry, generation of fast trigger signals, and measurement of the time-of-flight for particle identification. The monitor has been successfully commissioned in the recently created test beams of the CERN Neutrino Platform, where it has shown an excellent performance as described in the paper.
Introduction
The Experimental Areas at CERN host a rich and diverse program of high-energy physics experiments and research and development in particle detectors and accelerator technology. These facilities provide on users' request a rich variety of secondary beams (electrons/positrons, muons, pions, kaons, protons, and diverse heavy ions) over a wide range of energies and intensities, typically 0.1 GeV to 450 GeV and 10² to 10⁷ particles per second per mm² [1]. The transverse profile and position of these beams is measured with various types of wire gaseous chambers, mainly Multi-Wire Proportional Chambers (MWPC) and Delay Wire Chambers (DWC) [2] that date from the late 1970s. The performance of many of these monitors is seriously compromised due to ageing problems and radiation damage, and their maintenance is very difficult due to outdated components and loss of expertise. Furthermore, these wire chambers cannot fulfil the requirements of a new test beam facility at CERN, the Neutrino Platform, which has further motivated the search for a new technology for their replacement.
The CERN Neutrino Platform has been created in the framework of an international collaboration on R&D for neutrino detection technologies. This new facility is instrumented with two newly constructed beam lines - H2-VLE and H4-VLE, described in [3] - that provide low energy and low intensity secondary beams of 0.3 GeV/c to 12 GeV/c in bursts of 10 to 1000 particles per burst. The duration of a burst is typically 4.8 s and the average beam spot size is large, in the order of several cm diameter. The first experiments using this facility are the two ProtoDUNE detectors - NP02 and NP04 - which serve to study the liquid argon time projection chamber technology foreseen for the DUNE far detectors.
Table 1. Comparison of the material budget of multi-wire chambers and SciFi profile monitors produced with fibres of 0.5 mm or 1 mm thickness.

More details about this calculation can be found in [10] (Appendix D.1, page 163). The main limitations of scintillating fibres are their difficult readout, due to the low light signal reaching the photodetectors and the large number of channels that are typically needed. Nevertheless, the recent invention of the Silicon Photomultiplier (SiPM) [12][13][14] has helped to overcome these limitations. This new device is a compact solid-state photodetector with high detection efficiency, single-photon detection capabilities, and a low price per channel. Its major drawback is a high dark count rate, which in certain models can reach levels of MHz/mm².

Scintillating fibres are used extensively as beam hodoscopes and tracker detectors in high-energy physics experiments due to their good space and time resolution, high detection efficiency and high-rate capabilities. Major high-energy physics experiments in the 1980s, such as UA2 [15] and DØ [16], were equipped with SciFi trackers that mainly used Charged-Coupled Devices (CCD) to read out the light generated by the fibres. In 1991, the FAROS Collaboration (RD-17) at CERN [17] started an extensive research and development program on new beam hodoscopes based on fine scintillating fibres read out with Position-Sensitive Photomultipliers (PSPM). This program led to some important beam hodoscopes, a good example of which is the SciFi hodoscope of the COMPASS Experiment (NA58) at CERN [18] in the early 2000s. The SciFi tracker stations of COMPASS evolved over time, fostering a rich research and development program [19][20][21]. The standard SciFi station was made of 7 layers of staggered fibres SCSF-78MJ from Kuraray, with circular cross-section and a diameter of 0.5 mm. By using PSPM read-out with specially designed peak-sensing electronics, the COMPASS hodoscope reported [18] a spatial resolution of ∼ 125 μm, a time resolution of ∼ 540 ps (r.m.s.), and a detection efficiency above 98% for beam fluxes up to ∼ 10⁸ muons per second.
The latest generation of SciFi hodoscopes and trackers have replaced PSPM by newer high-performance Multi-Anode Photomultipliers, for example in the ATLAS ALFA Experiment [22], or by Silicon Photomultipliers in the Mu3e Experiment [23] and the LHCb SciFi tracker [24].
eXperimental Beam Profile Fibre (XBPF) monitor
The plastic scintillating fibres used are the SCSF-78 from Kuraray with square cross-section and 1 mm thickness. Other models from the manufacturer Saint-Gobain were also studied (BCF-12 multi-clad), but the SCSF-78 were favoured because of their lower self-attenuation and higher finish quality, as described in [25]. These fibres are composed of a large scintillating core (96% of the fibre thickness) and a thin cladding that allows trapping part of the scintillation light by total internal reflection. The core is made of polystyrene mixed with a proprietary formulation of fluorescent dopants that optimise the spectral emission of the fibre at 420 nm, matching therefore the quantum efficiency of SiPMs. Further information on plastic scintillating fibres and their physics principles can be found in [26,27].
The detection area of the XBPF is formed by 192 fibres that are packed along one plane as a single layer (the number 192 optimises the use of the readout electronics, as explained in Section 3.2). The square shape of the fibres has two advantages: it allows for an optimal packing and ensures that all beam particles deposit similar amounts of energy in the monitor, therefore homogenising its response. The fibres are held in place by a structure made of black Polyoxymethylene (POM) -a thermoplastic frequently used in precision parts -and two rods of stainless steel that provide the necessary rigidity to the assembly. An ultra-thin foil of Kapton© polyimide of 25 μm thickness is glued over the fibres to maintain their position.
The light from every fibre is read-out on one end by an individual SiPM, indicating which fibre has been activated and thus permitting the reconstruction of the beam profile or the track of a particle from multiple monitors. After studying several SiPM from different manufacturers (Hamamatsu, Ketek, and SensL), the model chosen is the S13360-1350 from Hamamatsu. This model has shown the lowest dark count rate (below 100 kHz∕mm 2 ), the lowest cross-talk (below 1%), and it has a slightly larger active area (1.3 mm × 1.3 mm against 1 mm × 1 mm), which benefits the optical coupling with the fibres. A thin mirror foil is glued on the non read-out end of the fibres with the purpose of reflecting back part of the scintillation light, thus increasing the total signal reaching the photomultipliers.
As previously explained, the XBPF has been designed to be vacuum compatible. The photomultipliers are located outside vacuum, with the fibres exiting via a feed-through that is sealed with an epoxy resin that ensures the required leak tightness and outgassing levels for the CERN Experimental Areas (vacuum level of 10 −3 mbar). Fig. 1 shows a XBPF during production. Fig. 2 shows a XBPF during installation in H4-VLE.
The fibres of the XBPF have been treated to avoid optical cross-talk between fibres, which is produced by scintillation photons emitted in the ultra-violet that can escape a fibre and excite neighbouring fibres. The treatment is an ultra-thin vaporisation of ∼ 100 nm of aluminium over the fibres.
eXperimental Beam Trigger Fibre (XBTF) monitor
A second version of the monitor has been specifically created for the Neutrino Platform with the aim of measuring the beam intensity with high accuracy, of producing fast trigger signals, and of measuring the time-of-flight of the beam particles for their identification. In this version, the 192 fibres are grouped together into two bundles to be read-out by two Photo Multiplier Tubes (PMT) H11934-200 from Hamamatsu. These PMT have a low dark count rate of a few counts per second and a low Transit-Time Spread (TTS) of ∼ 300 ps, making them well-suited for TOF applications. Fig. 3 shows a XBTF ready for testing in the laboratory.
Readout electronics
The electronics architecture of the XBPF and XBTF systems is shown in Fig. 4. It is divided into three main systems: trigger, beam profile, and timing.
Trigger system
The trigger system combines the signals from several XBTF, creating an unambiguous signal when a beam particle has reached a Proto-DUNE detector. This beam-trigger signal is sent to the XBPF profile monitors and the ProtoDUNE Experiments to trigger their acquisition. In the present configuration, the analogue signals from the PMTs are processed by a Constant Fraction Discriminator (CFD) -the N842 from CAEN. This type of discriminator is well-suited for timing measurements, as they have a low time jitter and show immunity to signal walk [28].
Beam profile
The readout electronics of the XBPF profile monitors is formed by a front-end board attached to the detector and a back-end board grouped in a central acquisition chassis in the electronics barrack. Both boards communicate via a high-speed optical link.
The front-end board has the following main components, which are highlighted in Fig 5:
• 192 SiPM that detect the light generated by the fibres.
• Hamamatsu C11204 to power up the SiPMs. This device has a temperature feedback system to maintain a stable gain of the SiPM.
• 6 CITIROC ASIC [29] that process in parallel the analogue signals from the SiPM.
• Xilinx FPGA Artix 7 that configures the CITIROC slow control, reads the CITIROC digital output, packages the data, and sends it out in a 10 MHz data stream to a Gbit transceiver.
• A SFP module with Gbit transceiver to transfer the data via optical fibre to the back-end.

The back-end module is the VFC-HD [31], a Versa Module Eurocard (VME) general-purpose digital acquisition board, developed by the CERN Beam Instrumentation group and fully compatible with White Rabbit. The data stream from the front-end includes both the information from real particles and the noise from the SiPM, which can be at a rate of several kHz when considering all 192 channels. In order to suppress the noise events, the VFC also receives the global-trigger signal and only records the events coinciding with that signal. Every recorded event has the information of the status of the 192 fibres (hit, no-hit), plus a White Rabbit 8 ns-precision timestamp of when the event occurred.
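As an illustration of what this data stream allows, the sketch below rebuilds a burst-integrated profile from per-event fibre hit masks. The event encoding (a timestamp plus a 192-bit mask) is invented for the example and does not correspond to the actual VFC-HD data format.

```python
# Illustrative reconstruction of a burst-integrated profile from XBPF events.
# Each event is assumed to carry an 8 ns-precision timestamp and a 192-bit
# fibre hit mask; this encoding is invented, not the real acquisition format.
N_FIBRES = 192

def fibre_hits(hit_mask: int):
    """Indices of fibres flagged as hit in a 192-bit mask."""
    return [i for i in range(N_FIBRES) if (hit_mask >> i) & 1]

def burst_profile(events):
    """Sum per-fibre hit counts over all triggered events in a burst."""
    profile = [0] * N_FIBRES
    for _timestamp_ns, hit_mask in events:
        for i in fibre_hits(hit_mask):
            profile[i] += 1
    return profile

# Three hypothetical triggered events: (timestamp in ns, fibre hit mask).
events = [
    (1_000_000, 1 << 95),
    (1_000_480, (1 << 95) | (1 << 96)),
    (1_001_040, 1 << 97),
]
profile = burst_profile(events)
print("fibres with hits:", [i for i, n in enumerate(profile) if n])
```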
The XBPF provides simultaneously single particle information and a beam profile integrated over the duration of a burst. However, single particle information can be switched off when only the beam profile is wanted in order to minimise data generation.
Single particle tracking has been tested in the East Area at CERN up to beam intensities of ∼ 10⁵ particles per second per mm² [11]. With the hardware as it is, single particle tracking could be used, in theory, with beam intensities close to 10⁷ particles per second per mm². This estimation is made from the sampling rate of the XBPF, which at present is 10 MHz per channel. However, the response of the system at high intensities might depend greatly on the beam characteristics, particularly the time distribution of the particles in the beam.
If only the beam profile and position information is going to be used, for example for beam steering, the maximum limit of 10⁷ could in principle be avoided by reducing the efficiency of the detector in a homogeneous way among all channels. This could be achieved, for example, by introducing a light filter between the fibres and the SiPMs in order to reduce the number of detected photons without altering the shape of the beam profile.
Timing system
This system timestamps digital signals with a resolution of 81 picoseconds in a clock distributed over White Rabbit. It is based on the FMC-TDC [32], which is a Time-to-Digital Converter (TDC) board developed at CERN for timing applications using the White Rabbit technology. This precise timing system is used to timestamp signals from the XBTF, other beam instrumentation, and several signals from the Proto-DUNE trigger system. It allows the Neutrino Platform physicists to correlate in time the information generated by the beam instrumentation and the neutrino detectors.
Due to the good time resolution of the FMC-TDC, the fast signals of the XBTF are also used to measure the time-of-flight of particles between the first and last XBTFs of the beam line. The first XBTF is very close (less than 7 m) to the beam target where the secondary particles are created, and is therefore subjected to a large flux of secondary particles - around 10⁶ particles per burst. However, fewer than 10³ particles are captured by the beam line optics and reach the last XBTF, located right in front of the ProtoDUNE Experiment. In order to avoid fake signals in the TOF system, the beam-trigger signal is also sent to the FMC-TDC board with the aim of filtering the TOF data by selecting only time-matched beam particles.
The H4-VLE beam line
This paper focuses on the H4-VLE beam line, where a total of 8 XBPF and 3 XBTF have been installed and all systems are fully operational: profile, trigger, and TOF. A vacuum tank of the XBPF can accommodate two monitors, XBPF or XBTF, in any desired combination, therefore giving great flexibility to the design of the beam line. The layout of H4-VLE is shown in Fig. 6, indicating the position and function of every monitor. Fig. 7 shows two photographs taken during the installation of the monitors.
Both the XBPF and the H4-VLE beam line were successfully commissioned during the first two weeks of September 2018. Fig. 8 shows the profile of the very first beam delivered to the NP04 detector, as measured by the XBPFs placed directly in front of the experiment. The data taking of NP04 took place between the beginning of October 2018 and the middle of November 2018. During that period, H4-VLE provided protons, pions, kaons, muons, and positrons in the range of momenta 1 GeV∕c to 7 GeV∕c, with the addition of two extra runs at sub-GeV range, 0.5 GeV∕c and 0.3 GeV∕c. The maximum and minimum intensities achieved were respectively ∼ 1500 and ∼ 100 particles per burst.
Momentum spectrometer
It is possible to measure the momentum of the beam particles with a system composed of three profile monitors around a dipole magnet. In this configuration, the momentum is calculated from the deflection angle imparted by the magnet to the particle, the length of the magnet, and its magnetic field. Such a momentum spectrometer has been set up with the "X" planes of XBPF 697 (the number indicates the distance in metres to the primary beam target), XBPF 701, XBPF 702, and a dipole magnet placed between XBPFs 697 and 701. Fig. 6 shows a diagram with the layout of the spectrometer. Further details about the design and operational principle of this spectrometer can be found in [7], section 4.1.
The three XBPF are aligned with the magnetic field of the dipole in such a way that the deflection angle exerted by the magnet can be measured, thus measuring the particles' momentum. The length of the magnet is well known and its magnetic field is set up by the beam line physicists according to the desired beam momentum. In fact, this system acts as a beam momentum station, selecting particles within a certain momentum range which are further carried by the beam line optics to the ProtoDUNE detector.
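The relation between bending angle and momentum used by such a spectrometer is standard; the sketch below is not the analysis code of [7] or [30], but a minimal illustration assuming the small-angle formula p [GeV/c] ≈ 0.3 · B [T] · L [m] / θ [rad] for a unit-charge particle, with the incoming slope taken as known (for instance from the nominal beam axis or an upstream plane):

```python
import math

def bending_angle(slope_in: float, x1: float, x2: float, d12: float) -> float:
    """Deflection angle (rad): difference between the outgoing slope, measured
    from two downstream horizontal positions x1, x2 (m) separated by d12 (m),
    and the incoming slope (assumed known here)."""
    slope_out = (x2 - x1) / d12
    return math.atan(slope_out) - math.atan(slope_in)

def momentum_gev_c(b_field_t: float, magnet_length_m: float, theta_rad: float) -> float:
    """Momentum (GeV/c) of a unit-charge particle from the dipole bend:
    p [GeV/c] ~= 0.3 * B [T] * L [m] / theta [rad] (small-angle approximation)."""
    return 0.299792458 * b_field_t * magnet_length_m / abs(theta_rad)

# Example: a 1 m long dipole at 0.5 T bending a track by 50 mrad -> ~3 GeV/c
print(momentum_gev_c(0.5, 1.0, 0.050))
```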
The performance of the momentum spectrometer during the 2018 data taking period is described in detail in [30], section III-D.
Cherenkov threshold counters
H4-VLE is also equipped with two Threshold Cherenkov Counters (XCET1 and XCET2), which are a standard beam monitor at CERN for particle identification. The information from the XCET and TOF systems can be combined (time information and particle-tagging information) to provide a powerful tool for the diagnosis of the beam composition, as presented in Section 5.3. More information on the XCET used in the Neutrino Platform can be found in [33].
Results: performance of the XBPF and XBTF
The XBPF and XBTF systems performed excellently over the whole data taking period and suffered no downtime. Furthermore, the data from the monitors are in good agreement with the expected values obtained with Monte Carlo simulations of the beam line. The performance of the beam line and its instrumentation has been analysed in detail by a team of beam line physicists from CERN and the Neutrino Platform, who have reported their results in [30].
XBPF
The detection efficiency of the XBPF monitors can be easily measured from the number of triggered acquisitions with no fibres activated. Fig. 9 shows the efficiency measured for all XBPF stations during the 2018 data taking.
An offline cut on the hit map has been applied to select exclusively single-track events because we suspect that the first monitors could have an overestimated efficiency value. These monitors were subjected to very large particle fluxes - higher than 10⁶ particles per second - due to their proximity to the beam target, therefore exceeding the original specification of 1000 particles per second (Section 1). The XBPFs of the Neutrino Platform were configured to record all fibre hits occurring 500 ns before and after a beam trigger signal was received. This 1 μs time window facilitated the alignment of the beam trigger with the XBPF events and seemed safe according to the original specification of 1000 particles per second. However, the unexpectedly large particle flux may have resulted in several particles being recorded for a given trigger, which would artificially increase the measured efficiency of the detectors.
This issue can be easily solved by reducing the acquisition time window, which does not require any major hardware modification and will be deployed in the next run.
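The size of the effect can be estimated with a simple Poisson argument (our own back-of-the-envelope calculation, not a result quoted in the text): with uncorrelated arrivals at rate r, the probability that at least one additional particle falls inside an acquisition window of length w is 1 − exp(−r·w).

```python
import math

def pileup_probability(rate_hz: float, window_s: float) -> float:
    """Probability that at least one additional, uncorrelated particle falls
    inside the acquisition window opened around a trigger (Poisson arrivals)."""
    return 1.0 - math.exp(-rate_hz * window_s)

# 1 us window (+/- 500 ns) at the originally specified 1000 particles/s:
print(pileup_probability(1e3, 1e-6))   # ~0.1%: negligible
# Same window at the ~1e6 particles/s seen by the most upstream XBPFs:
print(pileup_probability(1e6, 1e-6))   # ~63%: efficiency is overestimated
```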
XBTF
The efficiency of the XBTF could not be measured directly in H4-VLE, as was done for the XBPF. Nevertheless, measurements on a dedicated test bench in the laboratory have shown an efficiency that is in good agreement with the Monte Carlo simulations of the beam line, as shown in Fig. 10.
Time-of-flight
The TOF system also performed well, with a measured time resolution of < 900 ps (r.m.s.), as expected from previous beam tests [11]. Given the 28.6 m distance between the two XBTFs of the TOF system, this time resolution makes it possible to differentiate protons from pions, electrons, muons and kaons at momenta below 3 GeV∕c. The combination of the TOF and XCET information, as shown in Fig. 11, is a powerful tool for particle identification that is extensively used by the ProtoDUNE physicists.
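The separating power of the TOF can be estimated from kinematics alone. The following sketch is illustrative (it is not the experiment's analysis code); it computes the expected flight-time difference between species over the 28.6 m baseline, using β = p/√(p² + m²) with masses in GeV/c²:

```python
import math

C_M_PER_NS = 0.299792458
MASS_GEV = {"e": 0.000511, "mu": 0.10566, "pi": 0.13957, "K": 0.49368, "p": 0.93827}

def flight_time_ns(momentum_gev: float, mass_gev: float, path_m: float = 28.6) -> float:
    """Time of flight (ns) over path_m for a particle of given momentum and mass."""
    beta = momentum_gev / math.hypot(momentum_gev, mass_gev)
    return path_m / (beta * C_M_PER_NS)

p = 2.0  # GeV/c
dt = flight_time_ns(p, MASS_GEV["p"]) - flight_time_ns(p, MASS_GEV["pi"])
print(f"proton - pion separation at {p} GeV/c: {dt*1000:.0f} ps")
# Roughly ~34 ns at 1 GeV/c, ~10 ns at 2 GeV/c and ~4.5 ns at 3 GeV/c,
# still several times the ~0.9 ns resolution at the top of this range.
```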
Fig. 11. Time-of-flight data combined with the information from the Cherenkov Threshold Counters. At 1 GeV∕c, when XCET 1 does not give a signal, pions and muons can be identified thanks to the TOF. When XCET 1 gives a signal, positrons can be tagged instead. The combination of the two systems provides a powerful tool for the study of the beam composition. Source: Image taken from [30].

We believe that the main factors influencing the time resolution of the TOF system are:
• Generation and propagation of photons inside scintillating fibres.
• Transit-Time Spread (TTS) of the PMT, which for the H11934 is 300 ps according to Hamamatsu.
• Time walk of the CFD, which for the N842 is 400 ps according to CAEN.
• Conversion of the NIM signals from the CFD to TTL logic level with the module N89 from CAEN. Such conversion is necessary because the FMC-TDC only accepts TTL signals. The jitter introduced by this module has been measured by us to be 110 ps.
• Time resolution of the FMC-TDC, which is 81 ps according to our own measurements.
On a first order approximation, it can be assumed that most of these factors are independent and can be summed in quadrature [34]:

σ_TOF² ≈ σ_fibre² + σ_TTS² + σ_walk² + σ_NIM→TTL² + σ_TDC²,

which allows us to estimate the contribution from the light generation and propagation within the fibre: σ_fibre ≈ √(900² − 300² − 400² − 110² − 81²) ps ≈ 735 ps.

We believe that the main limitations to the performance of the TOF system are:
• The large time walk of the CFD and the conversion to TTL logic level.
• Photons bouncing back from the mirror of the XBTF that smear the time resolution of the fibres.
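A quick numerical check of the quadrature estimate above (our own arithmetic based on the quoted contributions, not code from the experiment): subtracting the known terms from the ∼ 900 ps total reproduces the ∼ 735 ps attributed to the fibres, and replacing the CFD and NIM-to-TTL terms with the 75 ps unit mentioned in the next paragraph gives the expected ∼ 800 ps.

```python
import math

def quadrature_difference(total_ps, known_ps):
    """Subtract known, independent contributions (in quadrature) from a measured
    total resolution, returning the remaining (unexplained) contribution."""
    return math.sqrt(total_ps**2 - sum(s**2 for s in known_ps))

# Present system: ~900 ps total; TTS 300 ps, CFD walk 400 ps,
# NIM->TTL conversion 110 ps, FMC-TDC 81 ps.
sigma_fibre = quadrature_difference(900, [300, 400, 110, 81])
print(round(sigma_fibre))  # ~735 ps attributed to light generation/propagation

# Projected upgrade: CFD with 75 ps time walk and direct TTL output,
# which also removes the NIM->TTL conversion stage.
sigma_new = math.sqrt(sigma_fibre**2 + 300**2 + 75**2 + 81**2)
print(round(sigma_new))    # ~800 ps expected after the upgrade
```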
It is foreseen to install in the next run a new model of CFD, the 6915 from Phillips Scientific, which features a 75 ps time walk and TTL signal output. Thanks to this upgrade, the time resolution is expected to improve to 800 ps, which will facilitate particle identification at 3 GeV∕c.
Concerning the mirror, unfortunately it is not possible to remove it since that would compromise the efficiency of the trigger system, which is more important for the ProtoDUNE Experiments than a better TOF resolution. A possible solution would be to install new XBTFs without mirror exclusively dedicated to TOF in the available space of the present XBTF vacuum tanks.
A measurement of the light yield of the XBTF would help to better understand the contribution from the scintillating fibres. Unfortunately, it was not possible to perform such measurement before the commissioning of the Neutrino Platform beam lines due to lack of time but it is programmed for the next run.
Similar scintillating fibre hodoscopes, such as a prototype of the COMPASS hodoscope, have reported a time resolution of about σ = 400 ps in dedicated beam tests [19]. This prototype module was made of three staggered layers of Kuraray SCSF-78 multi-clad fibres with circular cross-section and 1 mm diameter, which were read out by Multi-Anode Photomultipliers (H6568 from Hamamatsu with a TTS of 280 ps), measuring a light yield of around 30 photoelectrons per Minimum Ionising Particle. The more modern experiment Mu3e [23] is developing a SciFi tracker for timing measurements that achieves a time resolution < 500 ps. It is based on three staggered layers of square Saint-Gobain BCF-12 fibres of 250 μm with SiPMs (Hamamatsu 13360-1350) coupled on both ends and read out by a dedicated chip, the MuTRIG readout chip [35].
Conclusions and outlook
The new scintillating fibre monitor has completely fulfilled the requirements of the Neutrino Platform in terms of performance and functionality. The XBPF has measured the beam profile and the beam momentum with high efficiency and the required spatial resolution. The XBTF has measured the beam intensity and provided trigger signals to the NP04 Experiment with an efficiency similar to the XBPF. Moreover, the signals from the XBTF have been reused to create a time-of-flight system with a time resolution of ∼ 900 ps, which has provided valuable particle identification for the lower momentum beams. Thanks to the low material budget of the monitors, the minimum achieved momentum in H4-VLE was 0.3 GeV∕c, exceeding the original specifications.
New Constant Fraction Discriminators with lower time walk and direct TTL signal output have been tested in the laboratory and will be implemented in the next run. The time resolution of the TOF is expected to improve to σ = 800 ps after this upgrade.
At present, the XBPF is being produced for the East Experimental Area at CERN, which is undergoing a complete renovation to be finished by 2021 [36]. The XBPF will replace all gaseous wire chambers, thus consolidating the profile monitoring in this facility.
Some studies on the radiation hardness of Kuraray SCSF-78 fibres report that, up to total absorbed doses of approximately 100 kGy, the fibres keep around 80% of their detection efficiency [37]. Monte Carlo simulations of the XBPF described in [10] foresee a life span of several decades in the CERN Experimental Areas before reaching that dose. However, a comprehensive study on the radiation damage of polystyrene-based scintillating fibres [38] reports that annealing processes have a major impact on the radiation hardness and that such processes need an oxygen-rich atmosphere to occur. Since the XBPF are placed in vacuum, their radiation hardness might be compromised. The same study [38] reports that a later exposure to oxygen triggers the annealing processes, meaning that perhaps an eventual exposure of the XBPF to air, as happens for example during technical stops, may be enough to recover the fibres from radiation damage.
In case of severe degradation of the performance due to radiation damage, the modular design of the XBPF allows for a simple and cost-effective replacement of the damaged fibres, with a cost lower than 1000 CHF per fibre plane.
The present design of the XBPF has been chosen based on a compromise between performance and cost. The monitor is inexpensive and can be produced by the Beam Instrumentation group at CERN on demand. It is easy to maintain, has a low power consumption and does not need gas or cooling systems. If a certain application requires higher performance, the XBPF can be easily modified to accommodate several layers of staggered fibres in order to improve the detection efficiency and spatial resolution. However, such modifications tend, as a trade-off, to raise the overall cost and to complicate the production processes.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
The Ratchet Effect
Supplemental Digital Content is available in the text.
Health systems are under pressure from increasing incidence of chronic disease. 1,2 Chronic conditions are prevalent in older populations, with 70%-80% of people aged 65 or over having at least 1 chronic condition. 3 Chronic diseases, including type 2 diabetes mellitus, heart failure, and chronic obstructive pulmonary disease (COPD), are leading causes of avoidable hospitalizations. 4 Chronic diseases are prolonged in duration and rarely completely cured. 5 Understanding the impact of chronic disease on health care utilization requires modeling over extended periods of time. To perform this analysis, fixed reference points in disease progression must be identified. Reference points are required to inform clinical practice, identify trends in health care utilization, and to study phenomena in morbidity, such as "compression" (whereby morbidity is delayed while not similarly extending lifespan) 6,7 and "expansion" (whereby death is delayed while not delaying onset of morbidity). 8 Identifying such reference points is challenging.
Proximity to death has been used as a fixed reference point, and approximately a quarter of lifetime health care is consumed in the last year of life. [9][10][11] However, these studies focus on patients in the last few years of life, rather than longer-lived patients with chronic disease. Furthermore, death as a reference point precludes any future health management once it is identified.
To examine longitudinal demand, this paper describes an identifiable fixed reference point. This "cardinal event" is defined as a disease-specific diagnosis upon hospital admission, where such an event has not occurred in the previous 2 years. This definition identifies the first admission associated with a particular diagnosis in a potential series of admissions, within a reasonable period of time.
Worldwide, there has been no previous study examining longitudinal hospital demand in the periods before and after hospital admissions for chronic disease. In this paper, longitudinal analysis is applied to hospital utilization around cardinal events in 3 of the most common chronic conditions: heart failure, type 2 diabetes, and COPD. The changes in utilization associated with such events, in terms of inpatient days and emergency department (ED) presentations, are compared with that seen in cardinal events in asthma, diabetic patients undergoing cataract procedures, and dialysis.
METHODOLOGY

Ethics Statement
Ethical approval was provided by the Department of Health Western Australia Human Research Ethics Committee and the University of Western Australia Human Research Ethics Committee. Informed consent was not sought and data were analyzed anonymously.
Study Population
Data were extracted from the Western Australia Data Collections, including the Hospital Morbidity, Emergency, and Mortality datasets. This dataset included all records from all public and private acute hospitals in Western Australia. 12 All records (for any diagnosis) between 2002 and 2010 (inclusive) were obtained for patients who had at least 1 diagnosis (principal and/or secondary/alternative, at an ED presentation or admission to hospital within this time period) with one of the following International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification (ICD-10-AM) codes: E9-E14 (impaired glucose metabolism, including type 1 and type 2 diabetes), I50 (heart failure), and J40-J47 (including bronchitis, COPD, and asthma). This dataset defined the population of patients from which cardinal events would be extracted.
Records for individuals identified as Aboriginal or Torres Strait Islander (3.7% of the Western Australian population 13 ) were excluded from the analysis due to statistically significantly different distributions of age at diagnosis, rate of hospitalization, and mortality (data not shown).
Definition of Cardinal Event
"Cardinal events" were defined as the first day of an admission to hospital (not including ED visits that did not result in an admission) associated with a particular principal (and not additional) diagnosis code, where the same principal diagnosis was not associated with an admission by that patient in the previous 2 years. This excluded all admissions in 2002 and 2003 as admissions in these years could not be identified using these criteria. This resulted in a "rolling" 2year clearance period. Cardinal events after 2008 were excluded to allow sufficient time (2009 and 2010) for subsequent events to be recorded within the Western Australia Data Collections, as records enter the database after discharge. A minority of individuals had multiple events (2 and, rarely, 3 such events with the same diagnosis) occurring in the 5-year study period (see the Results section).
Statistical Analysis
A generalized linear mixed effects (logistic) model 14 was used to analyze the proportion of diagnosis-specific cardinal events in each study year (2004-2008), including presence/absence of a cardinal event as the dependent variable, year as the independent variable, and random effects for the unique subject identifier. This provides a correlation structure to account for those individuals who had 1, 2, or 3 cardinal events during the study period (see Supplemental Digital Content 1, http://links.lww.com/MLR/A767). Note that few patients had 3 events and the correlation structure in this group may be imprecise.
Patient data before and after a cardinal event were aligned using the day of the cardinal event as time zero. After adjusting for entry into the monitored data period, loss to follow-up, and death, the number of person-days contributing to the denominator value (n) changed with time. The dataset is therefore both right and left censored, that is, patients may not have data for the full 6 years before or 4 years after the cardinal event (see Table, Supplemental Digital Content 2, http://links.lww.com/MLR/A768). Admissions to hospital (including transfers) were linked to form an inpatient history (while retaining inpatient days actually used), with leave days (ie, absence from hospital, occurring in <2% of admissions) being excluded. Similarly, an ED presentation history was prepared. Daily hospital utilization was calculated as the number of ED presentations or inpatient days (calculated and presented separately in the Results section) in the numerator divided by person-days in the denominator on a given day.
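A schematic of this alignment step is sketched below; this is illustrative pandas code with assumed column names, not the authors' implementation:

```python
import pandas as pd

def align_to_cardinal(utilisation: pd.DataFrame,
                      cardinal_dates: pd.Series) -> pd.Series:
    """Re-index utilisation records to days relative to each patient's cardinal event.

    `utilisation` is assumed to have columns ['patient_id', 'date'] with one row
    per ED presentation (or per inpatient day); `cardinal_dates` maps patient_id
    to the date of that patient's cardinal event (time zero). Returns the count
    of records on each relative day.
    """
    rel_day = (utilisation["date"]
               - utilisation["patient_id"].map(cardinal_dates)).dt.days
    return rel_day.value_counts().sort_index()

def daily_rate(counts: pd.Series, person_days: pd.Series) -> pd.Series:
    """Utilisation per person-day: events on each relative day divided by the
    number of person-days contributing data on that day (the denominator
    changes with relative day because of left and right censoring)."""
    return counts / person_days.reindex(counts.index)
```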
Hospitalizations with a principal diagnosis code of dialysis were removed from the analysis to avoid bias from such events. Same-day admissions for dialysis were associated with approximately 40% of all admissions in our study population, but with only 0.5% of all patients (data not shown). The exception was that same-day admissions for dialysis were included where the cardinal event diagnosis was itself dialysis. Note that dialysis cardinal events only include patients who have been diagnosed with a chronic disease.
Nonparametric regression was used, based on locally weighted polynomial regression 15 (LOESS), to provide an accurate estimate of mean hospital utilization (as measured by either ED presentations or inpatient days), excluding a 2-week time window before and after the cardinal event. This allowed 95% confidence intervals (CI) to be simply generated using bootstrap techniques. LOESS combines the simplicity of least squares regression with the flexibility of nonlinear regression. Simple models are fitted to localized subsets of the data to build a curve that describes the variation in the data. This method does not require a global function or any assumption of form to fit a model to the data (see Supplemental Digital Content 3, http://links.lww.com/MLR/A769).
Estimates were displayed for the period up to 31 days before, and for the period beginning 31 days following, the cardinal event. The loess function (stats package in the R Statistical Graphics version 2.15.3 software 16 ) was used to fit a kernel-weighted polynomial regression with a second-order polynomial, with a Gaussian kernel with a span of 1.4 and weights equal to the denominator value on each day. The span was chosen by eye to balance bias (an over-smoothed curve) against variance (a curve that "chases the points"). Ninety-five percent confidence limits were calculated using a bootstrap function (bootobject and boot.ci functions from the boot package in R 17 ). A total of 5000 bootstrap replicates were used (sampling with replacement) and 95% confidence limits were obtained using the bootstrap percentile method. 18

Total numbers of ED presentations and inpatient days were separately calculated for 3 periods relative to a cardinal event: "around event," which is between −30 and +30 days; "before event," between −31 days and −6 years; and "after event," between +31 days and +4 years. Ninety-five percent CI for the hospitalization rates were calculated using the bootstrap percentile interval, using 1000 replicates sampled with replacement. 18
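The smoothing and interval construction were done in R with loess (second-order polynomial, Gaussian kernel, span 1.4, day-level weights) and the boot package; a rough, unweighted Python analogue using statsmodels' first-order lowess might look like the following sketch:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def loess_with_bootstrap_ci(x, y, grid, frac=0.3, n_boot=1000, seed=0):
    """Smooth y(x) and attach bootstrap percentile confidence limits.

    (x, y) pairs are resampled with replacement; each replicate is smoothed and
    evaluated on `grid` by interpolation, and the 2.5th/97.5th percentiles of
    the replicates form the 95% interval.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)

    def smooth(xs, ys):
        fitted = lowess(ys, xs, frac=frac, return_sorted=True)
        return np.interp(grid, fitted[:, 0], fitted[:, 1])

    fit = smooth(x, y)
    reps = np.empty((n_boot, len(grid)))
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        reps[b] = smooth(x[idx], y[idx])
    lower, upper = np.percentile(reps, [2.5, 97.5], axis=0)
    return fit, lower, upper
```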
Cardinal Events
The total number of cardinal events examined is 47,443. Overwhelmingly, only a single cardinal event for each diagnosis type was identified for any individual within the time period examined; the percentage of events that were a second event for an individual was 3%, 4%, and 6% for heart failure, type 2 diabetes, and COPD, respectively (Table 1). Fewer than 5 individuals experienced a third cardinal event.
Longitudinal Analysis
The longitudinal hospital utilization (defined as inpatient days or ED presentations) associated with cardinal events is shown in Figures 2 and 3. The period examined was from 6 years before the cardinal event to 4 years following. Estimates are displayed for the period up to 31 days before the cardinal event, and for the period beginning 31 days following the cardinal event. Multiple cardinal events for the same individual are included in this analysis. Excluding such events did not significantly alter the analysis shown below (data not shown). The n-values, estimates, and confidence bands at −6, −4, −2, +2, and +4 years relative to the cardinal event are detailed in the Table (Supplemental Digital Content 2, http://links.lww.com/MLR/A768).
As shown in Figures 2A-C, the rates of inpatient days in the years around a cardinal event in heart failure, type 2 diabetes, and COPD are strikingly similar. Six years before the cardinal event, inpatient days were 3-5 days/year. This increases over the next 4 years, more rapidly in the 2 years before the cardinal event. After the event, inpatient days decrease rapidly in the first year, before reaching new levels between approximately 9 and 14 days/year at 2 years following the cardinal event, and between approximately 14 and 16 days/year at 4 years following the cardinal event.
Excluding 1 year immediately before and after the event, rates of inpatient days are consistently 2-3 times higher after the event when compared with before.
In contrast, inpatient days/year in "cataract with diabetes" cardinal events increase steadily across the 10-year period, from approximately 2-7 days/year (Fig. 2D). Inpatient days/year around asthma cardinal events follow another trajectory, increasing from approximately 1 day/year at 6 years prior and plateauing at approximately 2 days/year at 3 years before the cardinal event. This rate is maintained until after the cardinal event, where rates immediately following the cardinal event are approximately 3 days/year, before reducing to approximately 2 days/year at 4 years following the cardinal event, that is, the same rates observed 3 years before the cardinal event (Fig. 2E). Inpatient rates around dialysis cardinal events increase from approximately 5 days/year in the years before the cardinal event, to approximately 90 days/year in the years following the cardinal event (Fig. 2F). As noted in the methodology, dialysis admission events are included in the analysis of dialysis cardinal events, but not in other cardinal event analyses.
In cardinal events associated with heart failure, type 2 diabetes, and COPD (Figs. 3A-C), the rate of ED presentations per year (presentations/year) increases from approximately 0.5 presentations/year (6 y pre-event) to 0.6-0.8 presentations/year (2 y pre-event). Rates peak around the cardinal event, before returning to levels of between 1 and 1.5 ED presentations/year in the years following the cardinal event. These levels remain above 1 presentation/year until at least 4 years after the cardinal event. Excluding 1 year immediately before and after the event, rates of ED presentations were consistently 2-3 times higher after the event when compared with before.
In contrast, ED presentations/year in cataract with diabetes cardinal events increase steadily across the 10-year period, from approximately 0.25 ED presentations/year at 6 years prior, to 0.6 ED presentations/year (Fig. 3D). ED presentations/year around asthma cardinal events increase from approximately 0.5 ED presentations/year at 6 years prior to 0.8 ED presentations/year 2 years before the cardinal event. The rate peaks around the cardinal event, before returning to a similar rate 4 years after the cardinal event (Fig. 3E). ED presentation rates around dialysis cardinal events increase from approximately 0.5 ED presentations/year at 6 years prior to 0.9 ED presentations/year 2 years before the cardinal event. Following a peak around the cardinal event, these rates then drop to approximately 1.4 ED presentations/year after 4 years (Fig. 3F).
Total Utilization Across Periods
The utilization patterns described above exclude utilization in a period around the cardinal event, from 30 days before to 30 days after. To examine this period, total numbers of ED presentations and inpatient days were separately calculated for 3 periods relative to a cardinal event: "around event," "before event," and "after event," and are shown in Figure 4 (see Table, Supplemental Digital Content 4, http://links.lww.com/MLR/A770). As shown, hospital utilization in the period around the cardinal event for heart failure, type 2 diabetes, COPD, and asthma represents only 10%-15% of the total demand for inpatient days and for ED presentations across the 10-year period examined.
DISCUSSION
Cardinal events describe a fixed reference point in disease progression, enabling examination of trends in hospital utilization over time. We demonstrate that for 3 of the most common chronic diseases, the cardinal event heralds a marked and sustained change in days in hospital and ED presentations. The 2-year clearance period was defined to maximize the lead-in and follow-up data available, while minimizing multiple events. Different clearance periods before the cardinal event, for example, 4 years rather than 2, did not significantly change our findings (data not shown) but reduced the follow-up period available for analysis. Alternatively, date of "first" diagnosis with disease may be used as a fixed reference point. However, the identification of a "first event" is not straightforward. "First events" are subject to "prevalent pool effects," where the actual first event may occur outside the period under examination. A clearance period at the start of a study can reduce this effect, 19,20 but this results in early "first-time" events in the data collection period having a shorter clearance period than later "first-time" events, that is, nonequivalent time-biased reference points. "Back-casting" methods 21 are unsuitable in identifying reference points, as these allow for adjustment of incidence rates to remove the prevalent pool effect, but do not allow for identification of individual events. The definition used here for cardinal events overcomes these issues.
Cardinal events represent 40%-60% of the chronic disease admissions examined. Type 2 diabetes-related cardinal events are increasing, consistent with worldwide trends. 22 No changes in the Australian coding standards for ICD-10-AM adequately explain this consistent trend. However, a limitation of this methodology is potential biases introduced by inconsistent and changing coding practices over time.
Following a cardinal event for type 2 diabetes mellitus, COPD, heart failure, and dialysis, there is a marked increase in all-cause demand that is sustained for the following 4 years. This change seems to be relatively stable for years after the event and is analogous to a "ratchet effect," that is, a change relatively resistant to reversal. This ratchet effect is consistent across the 3 most common chronic diseases and is not seen in following cardinal events for asthma or in cataract with diabetes.
We excluded 60 days around the cardinal event to produce the smoothed utilization curves. It could be argued that most hospital utilization would occur around the cardinal event, rather than outside this period. However, 85%-90% of the demand occurs outside of this 60-day period.
Other studies have examined health service utilization following a diagnosis of chronic disease, [23][24][25][26] or in the period preceding a diagnosis. 27 However, these studies did not examine the change (comparing pre and post) in demand associated with such a diagnosis. It is noteworthy that health service utilization before and after a diagnosis of type 2 diabetes in primary care has been examined, and no ratchet effect was observed in that setting. 28 This is the first time that all-cause inpatient days and ED presentations both before and after a hospital diagnosis with chronic disease has been examined. Furthermore, previous studies have not used the robust statistical methods or large datasets as the study presented here. [23][24][25][26]29,30 The ratchet in hospital utilization following a cardinal event in chronic disease is not expected. Rather, it may be anticipated that an admission occurs during an increasing pattern of hospital utilization. For example, it has been observed that hospital admission rates steadily rise around the time of a diagnosis of type 2 diabetes in general practice. 28 In a planned admission to hospital for a routine procedure (ie, a cataract operation for a diabetic patient), such gradual increases in all-cause hospital admissions and ED presentations were observed. Alternatively, an admission may reflect an exacerbation of chronic disease, resulting in a rise in hospital use which subsides following treatment, as observed in the case of asthma. Where patients undergo a dramatic alteration in condition or clinical care, a dramatic and sustained increase in utilization may result. For patients commencing dialysis, this reflects an expected change, where usual treatment requires dialysis 2-3 times per week and ED presentations are known to increase. 31 Unexpectedly, this pattern was observed in type 2 diabetes, heart failure, and COPD. Some ED presentations after these 3 chronic disease cardinal events would be due to patients entering dialysis. However, the small number of patients entering dialysis, and the relative increase in ED presentations after dialysis initiation, demonstrates that dialysis is a minor contributor to this increase.
Factors driving the ratchet effect may include changes in clinician and patient behavior, and changes in severity of underlying disease. There may be increased planned care following an admission. However, planned care alone does not account for the increases in all-cause ED presentations following the cardinal event. Clinicians may encourage patients to re-present to ED with exacerbations. Individuals or their carers may make new choices in the appropriate site for management of their disease. Cardinal events may also be a transition across a threshold in disease progression. Finally, increased demand may be associated with patients approaching death.

In this paper, records for individuals who were identified at least once as Aboriginal or Torres Strait Islander were excluded from the analysis due to statistically significantly different distributions of age at diagnosis, rate of hospitalization, and mortality. This does not weaken the generalizability of our findings outside of Australia, as Aboriginal and Torres Strait Islanders are a group unique to Australia. Current analysis is focused on outlining the differences in hospital demand in this group (manuscript in preparation). 32 Furthermore, we have specifically focused on cardinal events as defined by a diagnosis in hospital, as such events are well defined and of importance to the providers of hospital-based services. We have not examined cardinal events associated with other identifiable reference points in disease progression (eg, a diagnosis in primary care). Furthermore, we have not examined changes in outpatient utilization, although such changes may be significant.

This paper identifies dramatic and sustained increases in all-cause inpatient days and ED presentations associated with specific chronic disease events. This ratchet effect associated with cardinal events is important in understanding the impact of chronic disease on hospital demand. Events that herald such a marked transition in health service demand have not been previously described. Better primary care with a focus on chronic disease may delay such cardinal events, and programs may be identified that manage more appropriate care following cardinal events. The identification of such a reference point in care allows these possibilities to be examined.
A novel Cbx1, PurB, and Sp3 complex mediates long-term silencing of tissue- and lineage-specific genes
miRNA-based cellular fate reprogramming offers an opportunity to investigate the mechanisms of long-term gene silencing. To further understand how genes are silenced in a tissue-specific manner, we leveraged our miRNA-based method of reprogramming fibroblasts into cardiomyocytes. Through screening approaches, we identified three proteins that were downregulated during reprogramming of fibroblasts into cardiomyocytes: heterochromatin protein Cbx1, transcriptional activator protein PurB, and transcription factor Sp3. We show that knockdown of Cbx1, PurB, and Sp3 was sufficient to induce cardiomyocyte gene expression in fibroblasts. Similarly, gene editing to ablate Cbx1, PurB, and Sp3 expression induced fibroblasts to convert into cardiomyocytes in vivo. Furthermore, high-throughput DNA sequencing and coimmunoprecipitation experiments indicated that Cbx1, PurB, and Sp3 also bound together as a complex and were necessary to localize nucleosomes to cardiomyocyte genes on the chromosome. Finally, we found that the expression of these genes led to nucleosome modification via H3K27me3 (trimethylated histone-H3 lysine-27) deposition through an interaction with the polycomb repressive PRC2 complex. In summary, we conclude that Cbx1, PurB, and Sp3 control cell fate by actively repressing lineage-specific genes.
Cellular reprogramming has the potential to transform regenerative medicine. The standard approach for cellular reprogramming is to use specific combinations of transcription factors, pharmacological agents, or miRNAs (1)(2)(3). With respect to miRNA-based reprogramming, we were interested to identify miRNAs that would reprogram fibroblasts into cardiomyocytes. Through a screening approach, we found that fibroblasts could be converted into cardiomyocytes via a combination of four miRNAs (miR-1, miR-133, miR-208, and miR-499), which we called miR combo (4). Importantly, miR combo directly reprograms fibroblasts into cardiomyocytes without the cells passing through an intermediate stem cell state (4). Since our initial discovery, we have gone on to demonstrate that miR combo improves cardiac function in heart injury models (5). Moreover, we have also found that reprogramming via miR combo utilizes immunity and epigenetic pathways (6,7). In addition to reprogramming fibroblasts to cardiomyocytes, miRNAs have also been used to reprogram cells to pluripotency (8) as well as to neurons (9). Compared with transcription factor-based approaches, miRNA-based reprogramming is fundamentally different as it involves the downregulation of a large number of mRNAs (5,10,11). The implication of miRNA-based reprogramming is that cells maintain their identity via repressive mechanisms (5,10,11). Indeed, the majority of genes in eukaryotes are typically silent. Genes that are active during embryogenesis are quickly silenced and remain so throughout development. Moreover, tissue-specific genes are mostly silent at an early stage of development and remain so in most cell types, only undergoing reactivation in their tissues of expression. While there has been considerable focus on gene activation, far less attention has been paid to understanding long-term gene silencing (12). It is believed that long-term gene silencing involves sequence-dependent repression factors, DNA methylation, timing of replication, and histone modifications (12)(13)(14)(15). It is unknown if these mechanisms work independently of each other or in combination. Similarly, it is also unclear how essentially random processes such as DNA methylations or histone modifications are localized to specific genes. For example, the enzymes that modify histones lack any intrinsic ability to distinguish between histones on different genes. Despite this, silencing histone modifications are highly localized. Consequently, understanding the mechanisms underpinning miRNA-based reprogramming is likely to provide important insights into how genes are silenced.
Identifying potential repressors of the cardiomyocyte phenotype
We have previously demonstrated that a combination of four miRNAs (miR-1, miR-133, miR-208, and miR-499), which we call miR combo, reprograms fibroblasts into cardiomyocytes (4,5). Considering that miRNAs initiate the degradation of their mRNA targets, this implies that repressors of cardiomyocyte genes should be found among the targets of miR combo. To identify potential targets, we analyzed our recent RNA-Seq study that compared various cardiac populations in the mouse heart (18). Through this approach, we found that when compared with undifferentiated cells, cardiomyocytes were depleted for 80 transcription factors and RNA-binding proteins. The list was filtered by removing proteins previously implicated in the differentiation to noncardiomyocyte lineages such as blood vessels or neurons. After filtering, ten potential candidates were identified: Cbx1, Csde1, Ddx5, Egr1, Fhl2, Fli1, PurB, Sp3, Tcf4, and Zfp36 (Fig. 1A). Of these ten candidates, Cbx1, Csde1, Ddx5, Egr1, PurB, Sp3, and Zfp36 were found to be targets of miR combo (Fig. 1B).
To evaluate the role of these ten candidates in regulating the expression of cardiomyocyte-specific genes, knockdown experiments were performed. Knockdown of each candidate was robust (Fig. 1C). Following knockdown of the candidate repressor in fibroblasts, expression levels of cardiomyocyte-specific genes were measured. With the exception of Csde1, knockdown of each candidate repressor induced the expression of at least one cardiomyocyte-specific gene (Fig. 1D). Of the ten potential repressors, knockdown of Cbx1, PurB, or Sp3 increased the expression of >90% of the measured cardiomyocyte-specific genes (Fig. 1D, representative graphs in Fig. S1). As Cbx1, PurB, and Sp3 appeared to be universal regulators of the cardiomyocyte phenotype, they were studied further.
Knockdown of Cbx1, PurB, or Sp3 was sufficient to reprogram fibroblasts into cardiomyocyte-like cells as evidenced by expression of the cardiomyocyte protein Actn2 (Fig. 1E) as well as sarcomere formation (Fig. 1F).
In contrast to cardiac fibroblasts, knockdown of Cbx1, PurB, or Sp3 did not induce cardiomyocyte gene expression in lung or tail-tip fibroblasts (Fig. S2).
To further investigate the role of Cbx1, PurB, and Sp3 as repressors, we investigated the effect of their knockdown on fibroblast, endothelial, and neuronal specific gene expression. As expected, knockdown of Cbx1, PurB, and Sp3 strongly induced cardiomyocyte-specific gene expression. In contrast, loss of Cbx1, PurB, and Sp3 reduced the expression of fibroblast-specific genes ( Fig. 2A). This result suggested that the fibroblasts were indeed exiting the fibroblast phenotype.
Similarly, loss of Cbx1, PurB, and Sp3 reduced endothelial gene expression ( Fig. 2A). Generally, neuronal markers were not expressed in fibroblasts or expressed at low levels. Despite the apparent induction of a few neuronal markers, loss of Cbx1, PurB, and Sp3 generally reduced neuronal gene expression ( Fig. 2A).
Loss of Cbx1, PurB, and Sp3 expression is associated with cardiomyocyte development

If Cbx1, PurB, and Sp3 play an important role in repressing cardiomyocyte-specific genes, their expression should decrease during cardiomyocyte development. In support of this hypothesis, we found that induced pluripotent stem (iPS) cell differentiation to cardiomyocytes was associated with a significant decrease in the expression of Cbx1, PurB, and Sp3 (Fig. 2B). To demonstrate that the loss of Cbx1, PurB, and Sp3 was not because of a general loss of expression, we also measured the expression of Sox6, a transcription factor expressed during cardiomyocyte differentiation (19). As expected, Sox6 expression increased during iPS differentiation to cardiomyocytes (Fig. 2B).
To verify these results, we utilized publicly available RNA-Seq data. We chose two separate RNA-Seq studies that measured mRNA changes in human iPS cells undergoing differentiation into cardiomyocytes. Churko et al. (20) provided an averaged mRNA read count (ten technical replicates) of a single human iPS cell line at various time points during cardiomyocyte differentiation. In contrast, Pavlovic et al. (21) provided mRNA read count data from 12 individual human iPS lines. In addition, the Pavlovic study also provided the mRNA read data from the cardiac tissues that were used to generate the individual iPS lines. Analysis of both datasets supported our findings. In the Churko study, Cbx1, PurB, and Sp3 expression were all reduced in iPS-derived cardiomyocytes when compared with iPS cells (Fig. 2C). Again, Sox6 expression increased (Fig. 2C). PurB levels were not measured in the Pavlovic study; however, in all 12 human iPS cell lines, cardiomyocyte differentiation was associated with significant loss of Cbx1 and Sp3 expression (Fig. 2D). Cbx1 and Sp3 expression were similar in iPS-derived cardiomyocytes and the heart tissue from which the iPS cells were generated (Fig. 2D). In contrast, Sox6 expression increased during iPS cell differentiation to cardiomyocytes in all 12 lines (Fig. 2D).
Cbx1, PurB, and Sp3 knockout induces fibroblasts to reprogram into cardiomyocytes in vivo
We wanted to determine if Cbx1, PurB, and Sp3 also repressed cardiomyocyte genes in fibroblasts in vivo. In vivo, expression of the three proteins was found to be localized to fibroblasts and absent in cardiomyocytes (Fig. 3, A and B). CRISPR-Cas9 (CRISPR-associated protein 9) gene editing was employed to ablate Cbx1, PurB, and Sp3 expression in vivo. To identify functional guide RNAs (gRNAs), 2 to 3 gRNAs for the first exon of Cbx1, PurB, or Sp3 were introduced individually into cultured cardiac fibroblasts along with Cas9. Immunoblotting indicated that the gRNAs were effective in ablating Cbx1, PurB, and Sp3 expression (Fig. 3C). To test efficacy of knockout in vivo, the gRNAs for the three proteins as well as Cas9 were subsequently packaged into lentivirus particles and injected into the mouse heart. Seven days later, cardiac tissue was analyzed for the expression of Cbx1, PurB, and Sp3. In control cardiac tissue, Cbx1, PurB, and Sp3 expression was robust and localized to the nucleus (Fig. 3D). However, expression of the three proteins was absent in cardiac tissue isolated from mice receiving the repressor gRNAs (Fig. 3D).

Figure 1. A, data from Hodgkinson et al. (18) were analyzed for the mRNA levels of the indicated genes in freshly isolated cardiomyocytes and noncardiomyocytes. Individual data points (open circles) and mean (horizontal bar) are shown. One-way ANOVA with Bonferroni post hoc tests was used to determine significance; ***p < 0.001. N = 3. B, cardiac fibroblasts were transfected with the direct cardiac reprogramming cocktail miR combo. A nontargeting miRNA (negmiR) was used as a control. Four days after transfection, expression of the indicated transcription factors was determined by quantitative PCR (qPCR). Expression values were normalized to negmiR. N = 5. Individual data points (open circles) and mean (horizontal bar) are shown. One-way ANOVA with Bonferroni post hoc tests was used to determine significance; ***p < 0.001. C, cardiac fibroblasts were transfected with siRNA targeting an individual putative repressor. A nontargeting siRNA was used as a control. After 4 days, expression was determined by qPCR. Expression values were normalized to the control siRNA. N = 3. Individual data points (open circles) and mean (horizontal bar) are shown. One-way ANOVA with Bonferroni post hoc tests was used to determine significance; ***p < 0.001. D, cardiac fibroblasts were transfected with siRNA targeting an individual putative repressor. A nontargeting siRNA was used as a control. After 14 days, expression of the indicated cardiomyocyte-specific genes was determined by qPCR. Expression values were normalized to the control siRNA. The heatmap summarizes the results of ten cardiomyocyte-specific genes. Increased expression of greater than twofold and with a significance <0.05 is shown in red. N = 3 to 5. One-way ANOVA with Bonferroni post hoc tests was used to determine significance; *p < 0.05. E, cardiac fibroblasts were transfected with siRNA targeting an individual putative repressor. A nontargeting siRNA was used as a control. After 14 days, the cells were incubated with antibodies to the cardiomyocyte-specific protein Actn2. Representative images are shown. N = 4. The scale bar represents 50 microns. F, higher resolution images of the cells shown in E are shown to display sarcomeres. The scale bar represents 50 microns. N = 4. Quantification of the percentage of Actn2+ cells displaying sarcomeres. Individual data points (open circles) and mean (horizontal bar) are shown. One-way ANOVA with Bonferroni post hoc tests was used to determine significance; **p < 0.01.
Having demonstrated the efficacy of the approach, control and repressor-targeting gRNAs were delivered into the hearts of Fsp1-Cre:tdTomato fibroblast lineage-tracing mice. In these Fsp1-Cre:tdTomato fibroblast lineage-tracing mice, fibroblasts are permanently marked with tdTomato (4). In control mice, injected with lentiviruses containing Cas9 and nontargeting gRNAs, there were no tdTomato+ cardiomyocytes, indicating that tdTomato+ fibroblasts do not normally differentiate into cardiomyocytes (Fig. 3E). In contrast, following the ablation of Cbx1, PurB, and Sp3 expression, 10% of cardiomyocytes in the vicinity of the injection site were tdTomato+, indicating fibroblast conversion into cardiomyocyte-like cells (Fig. 3E).
Cbx1, PurB, and Sp3 bind specifically to cardiomyocyte-specific genes in fibroblasts
To understand the mechanism by which these repressors actively repress the cardiomyocyte phenotype in fibroblasts, we first employed ChIP-Seq. Chromatin derived from mouse cardiac fibroblasts was incubated with Cbx1, PurB, or Sp3 antibodies, and the resulting immunoprecipitated DNA was sequenced via high-throughput sequencing. Analysis of the dataset indicated that Cbx1, PurB, and Sp3 shared many of the same targets (Fig. 4A; see Table S1 for full target gene list). In the cardiomyocyte-specific genes Ttn, Ryr2, and Kcnj6, Cbx1-binding sites were present in the promoter exclusively (Fig. 4B). In contrast, PurB-binding sites were only present within the coding sequence (Fig. 4B). Sp3-binding sites were found in both the promoter and within the coding sequence (Fig. 4B).
Gene Ontology (GO) analysis of repressor-bound genes gave further support to the notion that Cbx1, PurB, and Sp3 play an important role in regulating the cardiomyocyte phenotype. Significant GO terms included those for cation transport, formation of the action potential, and muscle contraction (Fig. 4C; full GO analysis is provided in Tables S2-S4). Additional significant GO terms included those for calcium signaling as well as biological processes including transcription regulation, cell adhesion, and the cell cycle (Fig. 4C; full GO analysis is provided in Tables S2-S4).
Cbx1, PurB, and Sp3 regulate nucleosome architecture
We hypothesized that Cbx1, PurB, and Sp3 inhibited cardiomyocyte-specific genes in fibroblasts by modifying the nucleosome architecture. To test this hypothesis, we performed MNase-Seq. MNase-Seq is used to map nucleosomes. Nucleosomes are the basic unit of DNA compaction and a fundamental component of chromatin. Cardiac fibroblasts were transfected with either a control nontargeting siRNA or an siRNA that targets Cbx1, PurB, or Sp3. Chromatin was isolated 7 days later and incubated with MNase. As shown in Figure 5A, MNase digestion conditions were optimized to cut the DNA in lengths of one nucleosome (147 bp). The MNase-digested samples were then submitted for high-throughput sequencing. The nucleosome architecture of active eukaryotic genes comprises a nucleosome-free region just upstream of the transcription start site and an array of regularly spaced nucleosomes over the gene (22). In control cells, this pattern is absent at a genome-wide level (Fig. 5B). This suggests that in fibroblasts, the majority of genes are silent. Gene silencing appears to require Cbx1 and PurB, as the loss of either protein induced nucleosome-free regions to appear (Fig. 5B). Loss of Sp3 differs in that the nucleosome architecture of control cells is retained (Fig. 5B). However, seeing as the read density was higher in the Sp3 siRNA-transfected cells, the data suggest that Sp3 plays a role in histone binding (Fig. 5B). At the level of individual genes, in control fibroblasts, cardiomyocyte-specific genes such as Ryr2 and Actn2 contain a large number of nucleosomes (Fig. 5C). Following knockdown of Cbx1, PurB, or Sp3, these nucleosomes disappear (Fig. 5C). In contrast, knockdown of Cbx1, PurB, or Sp3 had no effect on nucleosome patterning in noncardiomyocyte genes (Fig. 5D).
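The binning used for the transcription start site profiles (Fig. 5B) can be sketched schematically as follows; this is an illustration of the kind of computation involved, with assumed parameter values (for example the effective genome size), and not the authors' pipeline:

```python
import numpy as np

def tss_profile(fragment_midpoints, tss_positions, tss_strands,
                flank=1000, bin_size=10, effective_genome_size=2.15e9):
    """Aggregate MNase fragment midpoints around transcription start sites.

    All positions are assumed to be on the same chromosome (illustrative);
    `tss_strands` holds +1/-1 so that upstream/downstream are oriented per gene.
    Counts are accumulated in `bin_size` bp bins and normalised to the effective
    genome size, mirroring the per-bin normalisation described for Fig. 5B.
    """
    edges = np.arange(-flank, flank + bin_size, bin_size)
    profile = np.zeros(len(edges) - 1)
    mids = np.asarray(fragment_midpoints)
    for tss, strand in zip(tss_positions, tss_strands):
        rel = (mids - tss) * strand
        rel = rel[(rel >= -flank) & (rel < flank)]
        profile += np.histogram(rel, bins=edges)[0]
    return profile / effective_genome_size
```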
Cbx1, PurB, and Sp3 bind as a complex and interact with the PRC2 complex
The ChIP-Seq data suggested that Cbx1, PurB, and Sp3 may act as a complex. To investigate complex formation, coimmunoprecipitation experiments were performed. Cbx1 immunoprecipitates were found to be highly enriched in Sp3 (Fig. 6A). Similarly, Sp3 was also highly enriched in PurB immunoprecipitates (Fig. 6A). The data suggest shared protein complexes, with Cbx1-Sp3 and PurB-Sp3 dimers being readily apparent. Cbx1 binding to PurB is somewhat unclear as binding was apparent when the Cbx1 antibody was used but not when the PurB antibody was used instead (Fig. 6A). This may be due to steric inhibition between the PurB antibody and the Cbx1 protein.
Having identified binding between the repressors, we wanted to determine how Cbx1, PurB, and Sp3 regulated gene activity. The MNase-Seq data suggested that Cbx1, PurB, and Sp3 were necessary for nucleosome patterning on cardiomyocyte-specific genes especially on gene promoters. Considering their role as repressors, we hypothesized that Cbx1, PurB, and Sp3 were important for the formation of inhibitory nucleosomes. Based on our prior miR combo studies, we further hypothesized that these inhibitory nucleosomes contained H3K27me3. Indeed, the combined sarcomeres. Individual data points (open circles) and mean (horizontal bar) are shown. One-way ANOVA with Bonferroni post hoc tests was used to determine significance; **p < 0.01. Cbx1, PurB, and Sp3 repress tissue-specific genes . Ablation of Cbx1, PurB, and Sp3 expression reprograms fibroblasts into cardiomyocytes. A, cardiac tissue was isolated from 8-week-old Fsp1-Cre:tdTomato mice. In these mice, Fsp1 fibroblasts are marked permanently with tdTomato. Tissue slices were incubated with tdTomato (red) and repressor (green) antibodies. Nuclei (blue) were stained with 4 0 ,6-diamidino-2-phenylindole (DAPI). Representative images from three individual mice. The scale bar represents 50 microns. B, cardiomyocytes and fibroblasts were isolated from 1-day-old C57BL6 mice. RNA was analyzed for the expression of the cardiomyocyte marker Scn5a, the fibroblast marker Postn, as well as the expression of the three repressors. Expression values are shown as a fold enrichment in fibroblasts when compared with cardiomyocytes. N = 3. Individual data points (open circles) and mean (horizontal bar) are shown. One-way ANOVA with Bonferroni post hoc tests was used to determine significance; ***p < 0.001. C, guide RNAs (gRNAs) for Cbx1, PurB, and Sp3 were cloned into a plasmid containing CRISPR-associated protein 9 (Cas9), and the resulting construct was transfected into cultured cardiac fibroblasts. After 7 days, protein extracts were probed for the presence of Cbx1, PurB, or Sp3. N = 3. Representative blots are shown with the loading control Gapdh. D, the Cbx1, PurB, and Sp3 gRNAs were cloned into a lentivirus-generating plasmid containing Cas9. Control nontargeting gRNA was cloned into the same plasmid as a control. Lentiviral particles were isolated and injected into the heart of an 8-week-old C57BL6 mouse. One week after cardiac injection, tissue slices were analyzed for repressor expression (green). Nuclei (blue) were visualized via DAPI. The scale bar represents 50 microns. Representative images from three individual mice. E, the Cbx1, PurB, and Sp3 gRNAs were cloned into a lentivirus-generating plasmid containing Cas9. Lentiviral particles were injected into the hearts of fibroblast lineage-tracing mouse Fsp1-Cre:tdTomato. In this model, fibroblasts and their progeny are permanently labeled with the fluorescent protein tdTomato. Two months after injection, heart sections within 500 microns of the injection site were incubated with tdTomato and cardiac troponin-T knockdown of Cbx1, PurB, and Sp3 was found to reduce H3K27me3 levels (Fig. 6B). The formation of H3K27me3 is dependent upon the PRC2 complex, which comprises catalytic (Ezh1, Ezh2) and regulatory (Suz12, Eed) subunits (23). We found that Cbx1, PurB, and Sp3 associated with either one or both of the Eed isoforms (Fig. 6C).
In further support of this notion, knockdown of PurB was found to have no effect on Cbx1-Sp3 complex formation (Fig. 6D).
Discussion
Tissue-specific genes are mostly silent. They are typically silent during early development and remain so in most cell types, only undergoing reactivation in their tissues of expression. A number of mechanisms have been proposed for long-term gene silencing, including sequence-dependent repression factors, DNA methylation, timing of replication, and histone modifications (12)(13)(14)(15). It is unknown how these mechanisms relate to each other, whether they are independent or function together. Moreover, it is unclear how silencing DNA or histone modifications are localized to specific genes.
[Displaced figure legends: Figure 4, "Cbx1, PurB, and Sp3 bind to cardiomyocyte-specific genes in fibroblasts," covering ChIP-Seq of fibroblast chromatin, binding sites in Ttn, Ryr2, Kcnj6, and Nebl, and Gene Ontology analysis of bound genes; and an MNase-Seq legend describing siRNA-treated fibroblast chromatin digested to mononucleosomes (~150 bp), MNase accessibility around transcription start sites in 10 bp bins normalized to the effective mouse genome size, and nucleosome plots for Ryr2, Actn2, and noncardiomyocyte genes.]
This study suggests that the long-term silencing of tissue-specific genes is regulated by Cbx1, PurB, and Sp3. ChIP-Seq indicated that Cbx1, PurB, and Sp3 were specifically localized to cardiomyocyte genes. Their role appears to be in gene silencing, as genetic ablation of these three proteins in vivo, as well as knockdown in vitro, was sufficient to induce the expression of cardiomyocyte-specific genes in fibroblasts. Based on the data obtained, it appears that Cbx1, PurB, and Sp3 mediate tissue-specific gene silencing by modifying the nucleosome architecture as well as regulating the deposition of silencing histone modifications. Cardiomyocyte genes in fibroblasts were found to contain a significant number of nucleosomes. A large number of nucleosomes may act to compact the gene and prevent expression. However, following the knockdown of Cbx1, PurB, or Sp3, these nucleosomes were no longer present. Nucleosome-free genes are typically transcriptionally active. The effects on nucleosome patterning were restricted, as Cbx1, PurB, or Sp3 knockdown had no effect on nucleosomes in fibroblast genes such as S100a4.
[Displaced figure legend: Figure 6, "Cbx1, PurB, and Sp3 regulate the PRC2 complex." Panels A to D cover Cbx1 and PurB coimmunoprecipitation blotted for Cbx1, PurB, and Sp3; H3K27me3 and H3 immunoblots after Cbx1/PurB/Sp3 knockdown; Eed immunoblots of Cbx1, PurB, and Sp3 immunoprecipitates; and Cbx1-Sp3 coimmunoprecipitation after PurB knockdown. H3K27me3, trimethylated histone-H3 lysine-27.]
How Cbx1, PurB, and Sp3 binding induces nucleosome formation on cardiomyocyte genes is an open question. Cbx1-, PurB-, and Sp3-binding sites within cardiomyocyte genes were distinct and often separated by more than 100 nucleotides. However, coimmunoprecipitation studies suggested shared protein complexes between the three proteins. Complex formation would suggest that Cbx1, PurB, and Sp3 are causing DNA to loop. DNA looping has been invoked as the explanation for the ability of enhancers to increase transcription. It is possible that DNA looping induced by Sp3-Cbx1 and Sp3-PurB dimers acts as a scaffold for nucleosome binding.
The influence of nucleosomes on gene transcription is both simple and complex (24)(25)(26). By virtue of their mere presence, nucleosomes can act as an impediment to transcription by preventing RNA polymerases from moving along the gene. The histone core of the nucleosome can be acetylated or methylated, and the effect of these modifications on gene transcription is more subtle. Depending upon which histone residue is modified, acetylation and methylation can either promote or inhibit gene transcription. Indeed, a hallmark of fibroblast reprogramming to cardiomyocytes is the loss of inhibitory H3K27me3 from cardiomyocyte genes (10). H3K27me3 commonly resides on gene promoters (22). Two lines of evidence suggest that Cbx1, PurB, and Sp3 regulate cardiomyocyte gene activity through mediating H3K27me3 deposition. First, loss of Cbx1 and PurB expression induced histone loss in gene promoters. Second, coimmunoprecipitation studies indicated that Cbx1, PurB, and Sp3 interacted with Eed. Eed is an important component of the PRC2 complex, which mediates H3K27me3 deposition. This suggests that Cbx1, PurB, and Sp3 regulate the activity of the PRC2 complex. Indeed, knockdown of Cbx1, PurB, and Sp3 was found to reduce H3K27me3 levels. While we were able to see reduced H3K27me3 following loss of repressor expression, we were not able to determine if this was specific to cardiomyocyte genes. Future studies are therefore necessary to determine if Cbx1, PurB, and Sp3 regulate H3K27me3 deposition specifically on cardiomyocyte genes. Quantitative PCR (qPCR) analyses proved to be unreliable with apparently specific primers routinely showing multiple bands. Consequently, we plan to carry out these studies by expressing cardiomyocyte and noncardiomyocyte gene promoters in the presence and absence of repressor proteins and measuring H3K27me3 deposition.
Our study finds support in the literature. In 1994, the Chien group (27) demonstrated that in heterokaryons containing equal numbers of embryonic fibroblast and cardiomyocyte nuclei, cardiomyocyte-specific genes were silenced and not expressed. Gupta et al. (28) showed that a palindrome of two Ets-binding sites is important for the cardiomyocyte-restricted expression of the Myh6 (α-myosin heavy chain) gene, as deletion of these Ets-binding sites induced Myh6 expression in cells in which the gene is typically silent (28,29). Subsequent studies found that the repressive actions of the palindromic Ets-binding sites within the Myh6 gene required the proteins PurA and PurB (30).
Ablation of the three repressors was sufficient to induce fibroblasts to convert into cardiomyocyte-like cells in vivo.
Future studies are needed to determine if the rate of conversion is sufficient to promote significant functional recovery in cardiac injury models. It would also be important to measure the electrophysiological profiles of the cardiomyocytes derived from fibroblasts to determine their similarity to pre-existing cardiomyocytes.
In summary, our data imply that silencing of tissue-specific genes is hierarchical. Sequence-specific proteins such as Cbx1, PurB, and Sp3 bind to the tissue-specific gene. Once bound, these proteins act as a scaffold. The scaffold plays two roles: first, to induce a conformational change in the DNA, which acts as a conduit for nucleosome binding; second, to bring in enzyme complexes such as PRC2, which mediate long-term gene silencing via modifications of the histones within the nucleosome core.
Cell isolation
Cardiomyocytes and fibroblasts (cardiac, lung, and tail tip) were derived from 1-day-old neonate C57BL6 mice and cultured according to the established protocols (31).
Human cardiac fibroblasts
Human cardiac fibroblasts were acquired from Cell Applications, Inc (306-05f) and were cultured according to the manufacturer's instructions.
Generating iPS-derived cardiomyocytes
Human iPS cells were differentiated into cardiomyocytes according to Burridge et al. (32).
Repressor knockdown

siRNAs were purchased from Qiagen. In the initial screen, four siRNAs (20 μM stock) were used for each repressor. The siRNA that gave rise to the highest level of knockdown was used for future experiments: Cbx1 siRNA 4 (catalog no.: SI00942676), Ddx5 siRNA 2 (catalog no.: S100976514), Egr1 siRNA 1 (catalog no.: S100990899), Fhl2 siRNA 4 (catalog no.: S100190960), Fli1 siRNA 1 (catalog no.: S101003471), PurB siRNA 2 (catalog no.: SI01393462), Sp3 siRNA 2 (catalog no.: SI01429918), Tcf4 siRNA 6 (catalog no.: S102715461), and Zfp36 siRNA 5 (catalog no.: S105451670). A nontargeting siRNA was used as a control (Dharmacon; catalog no.: D-001810-03-05). For transfection, cardiac fibroblasts were seeded into 12-well plates at 22,500 cells per well 1 day prior to transfection. On the day of transfection, siRNAs (0.75 μl) were diluted in serum-free Dulbecco's modified Eagle's medium (American Type Culture Collection; 99.25 μl). In a separate tube, 0.75 μl of Dharmafect-I (Dharmacon) was diluted with 99.25 μl Opti-MEM serum-free media. After 5 min of incubation, the two solutions were combined. After 20 min, the resulting transfection complexes were added to the cells along with complete media (550 μl). Knockdown was verified 4 days post-transfection. When used in conjunction with miRNA transfection, siRNA and miRNA transfection complexes were set up independently as described and then added to the cells together. When siRNA and miRNA were used in conjunction, the amount of complete media was reduced (250 μl).
MNase-Seq and ChIP-Seq
Isolated mouse (C57BL/6) neonatal cardiac fibroblasts (900,000 cells; passage 2) were seeded into T150 flasks in growth media. Where necessary, the next day, cells were transfected with a nontargeting control siRNA or an siRNA targeting Cbx1, PurB, or Sp3 as described previously. Seven days after seeding, chromatin was isolated with a SimpleChIP Plus Enzymatic Chromatin IP Kit (Cell Signaling; catalog no.: 9005) according to the manufacturer's instructions. Once isolated, chromatin was digested with the supplied MNase (1.5 ml of a 1:10 dilution for 900,000 cells) according to the manufacturer's instructions. The amount of MNase was empirically determined to digest chromatin to one nucleosome in length. MNase-digested chromatin was then used for MNase-Seq. MNase-digested chromatin was also used for ChIP-Seq. Here, MNase-digested chromatin (900,000 cells) was incubated overnight with 8 μg of control immunoglobulin G, Cbx1 (Cell Signaling; catalog no.: 8676), PurB (Proteintech Group, Inc; catalog no.: 18128-1-AP), or Sp3 (Thermo Fisher Scientific; catalog no.: PA5-78176) antibodies. High-throughput sequencing was performed by the Duke Genomic Core. In total, five independent experiments were performed, and libraries were generated with a NovaSeq 6000 kit (Illumina). Libraries were pooled and run in duplicate (50 bp paired end) with an Illumina NovaSeq 6000. Sequencing depth was >25 × 10^6 individual reads per sample. Individual bioinformatics programs within the Galaxy suite were used for sequence alignment, peak calling, and peak comparisons. Adaptors were removed, and sequences were then aligned to the mouse reference genome mm10 using Bowtie2 (33). For MNase-Seq, bamCoverage was used to determine nucleosome positions, with annotated genes broken up into 10 bp bins ± 1 kb around the transcription start site; read counts were tallied for each bin and normalized to the effective size of the mouse genome. For ChIP-Seq, MACS2 CallPeak (paired-end model) was used to identify peaks with p < 0.01. Peaks present in both duplicate samples were identified.
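For orientation, the command-line core of the workflow just described (adapter-trimmed reads aligned with Bowtie2, nucleosome coverage computed in 10 bp bins normalized to the effective mouse genome size, and MACS2 peak calling at p < 0.01) can be sketched as below. This is an illustrative reconstruction only: file names, the index path, and the effective genome size value are placeholders, and the actual analysis was run through the Galaxy wrappers of these tools, which may use additional parameters.

import subprocess

def run(cmd):
    # Run one pipeline step and stop on failure.
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Paired-end alignment to mm10 with Bowtie2, then sort/index with samtools.
run(["bowtie2", "-x", "mm10_index", "-1", "reads_R1.fastq.gz",
     "-2", "reads_R2.fastq.gz", "-S", "sample.sam"])
run(["samtools", "sort", "-o", "sample.bam", "sample.sam"])
run(["samtools", "index", "sample.bam"])

# 2. MNase-Seq: coverage in 10 bp bins, normalized to the effective mouse
#    genome size (deepTools bamCoverage, RPGC normalization).
run(["bamCoverage", "-b", "sample.bam", "-o", "sample.bw",
     "--binSize", "10", "--normalizeUsing", "RPGC",
     "--effectiveGenomeSize", "2652783500"])   # commonly cited mm10 value; adjust as needed

# 3. ChIP-Seq: MACS2 peak calling in paired-end mode at p < 0.01.
run(["macs2", "callpeak", "-t", "cbx1_chip.bam", "-c", "igg_control.bam",
     "-f", "BAMPE", "-g", "mm", "-p", "0.01", "-n", "cbx1_rep1"])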
In vitro

3T3 cells were seeded at 5625/cm² in growth media (15% FBS and 1% penicillin/streptomycin). The next day, cells were transfected with 1 μg plasmid DNA using the transfection reagent Lipofectamine 2000 (Thermo Fisher Scientific) as per the manufacturer's protocol. After 24 h, transfection complexes were removed and replaced with growth media. Three days after transfection, puromycin (2.25 μg/ml; Sigma-Aldrich) was added daily for a total of 7 days to select for transfected cells. Cells were then harvested, and protein was isolated for immunoblotting.
Images
Images were processed with CorelDraw and Zeiss software (Axiovision Rel 4.8 and Zen Blue).
Statistics
All statistical analyses were performed using GraphPad (GraphPad Software, Inc). Two-tailed t tests were used for studies with two groups. For more than two groups, one-way ANOVAs were used. For ANOVA, Bonferroni post hoc tests were used to determine significance between groups. Individual data points and the mean are shown in all graphs. A p value of less than 0.05 was considered significant.
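As an illustration of this workflow (two-tailed t tests for two groups; one-way ANOVA with Bonferroni post hoc tests for more than two groups), the snippet below reproduces the logic in Python with invented numbers. It is not a re-analysis of the study's data, and the group values are arbitrary.

from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical measurements for three groups.
groups = {
    "control": [1.0, 1.2, 0.9, 1.1],
    "siCbx1":  [2.1, 2.4, 1.9, 2.2],
    "siPurB":  [1.8, 2.0, 1.7, 2.1],
}

# Omnibus one-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Post hoc pairwise two-tailed t tests with Bonferroni correction.
pairs = list(combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for (a, b), p, sig in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4g}{' *' if sig else ''}")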
Study approval
Experiments using animals were approved by the Duke University Division of Laboratory Animals and the Duke Institutional Animal Care and Use Committee.
Data availability
Raw sequencing data can be found at the Sequence Read Archive (accession number: SAMN12628632). All other data are contained within the article.
|
2022-05-22T15:03:09.660Z
|
2022-05-01T00:00:00.000
|
{
"year": 2022,
"sha1": "8610cfd997f9056d7278a2e8ae6c7d6711ee0fa2",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/article/S0021925822004938/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9a5028bd65ad0d8150be5e8c27cca856c7d71f8c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
}
|
11962226
|
pes2o/s2orc
|
v3-fos-license
|
Early detection of Angelman syndrome resulting from de novo paternal isodisomic 15q UPD and review of comparable cases
Background Angelman syndrome is a rare neurogenetic disorder that results in intellectual and developmental disturbances, seizures, jerky movements and frequent smiling. Angelman syndrome is caused by two genetic disturbances: either genes on the maternally inherited chromosome 15 are deleted or inactivated, or two paternal copies of the corresponding genes are inherited (paternal uniparental disomy). A 16-month-old child was referred with minor facial anomalies, neurodevelopmental delay and speech impairment. The clinical symptoms suggested Angelman syndrome. The aim of our study was to elucidate the genetic background of this case. Results This study reports the earliest diagnosed Angelman syndrome in a 16-month-old Hungarian child. Cytogenetic results suggested a de novo Robertsonian-like translocation involving both q arms of chromosome 15: 45,XY,der(15;15)(q10;q10). Molecular genetic studies with polymorphic short tandem repeat markers of the fibrillin-1 gene, located in 15q21.1, revealed that both arms of the translocated chromosome were derived from a single paternal chromosome 15 (isodisomy) and led to the diagnosis of Angelman syndrome caused by paternal uniparental disomy. Conclusions AS resulting from paternal uniparental disomy caused by de novo balanced translocation t(15q;15q) of a single paternal chromosome has been reported by other groups. This paper reviews 19 previously published comparable cases from the literature. Our paper contributes to a deeper understanding of the phenotype-genotype correlation in Angelman syndrome for non-deletion subclasses and suggests that patients with uniparental disomy have milder symptoms and higher BMI than those with other underlying genetic abnormalities.
Background
Angelman syndrome (AS; OMIM 105830) is a rare neurodevelopmental disorder characterized by severe mental and physical delay, limited speech, fine tremor, ataxia, excessive mouthing behavior, fascination with water, jerky limb movements, seizures, craniofacial abnormalities and unusually happy sociable behavior characterized by frequent episodes of inappropriate smiling [1,2].
Seventy percent of AS cases investigated with molecular genetic methods are the result of a small deletion in the q11-q13 region of the maternal chromosome 15. A deletion in the same region of the paternal chromosome 15 results in the sister disorder Prader-Willi syndrome (PWS). Expression of the genes in the q11-q13 region is regulated by the PWS/AS imprinting center (IC), which differentially silences the paternal copy of the ubiquitin protein ligase E3A (UBE3A) gene in the hippocampus and in the cerebellum. Other genetic abnormalities reported to result in AS include uniparental disomy (UPD; 5%), mutations of the IC (5%), mutations of the UBE3A gene (10%), and other mechanisms (10%) [3,4].
In this paper, we report a 16-month-old Hungarian child who was referred to our genetic counseling unit with delayed psychomotor and speech development and dysmorphic features, including a wide nasal bridge, low-set ears, thick lips, and a wide mouth with protuberant tongue (Figure 1). Tongue thrusts were observed. Head circumference was 47 cm (25th percentile). The affected child was born at term after an uneventful first pregnancy with normal weight (3260 g) and head circumference (33 cm). The Apgar scores were 9, 10 and 10 at 1, 5 and 10 minutes, respectively. No signs of decreased fetal movement, neonatal hypotonia or feeding difficulties were reported. The clinical phenotype of the patient suggested AS; therefore, molecular cytogenetic investigations were carried out to elucidate the genetic background of the presented case.
Results
Cytogenetic analysis demonstrated a 45,XY,der(15;15)(q10;q10) karyotype in all analyzed cells from the index patient (III/1, Figure 2). All metaphase cells displayed 45 chromosomes, suggesting a balanced homologous rearrangement of the long arms of chromosomes 15. The parents' karyotypes were found to be normal, indicating a de novo chromosome rearrangement in the patient.
Analysis of polymorphic short tandem repeat (STR) markers of the fibrillin-1 gene, which is located in 15q21.1, revealed that both long arms of the aberrant chromosome 15 were inherited from the father (Figure 3), allowing a diagnosis of AS caused by paternal UPD. The patient was homozygous at all loci for which his father was heterozygous, indicating that the rearrangement resulted from an isodisomic 15q.
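The inference used here, namely that every informative paternal marker appears in a homozygous state in the child while no allele requires a maternal origin, can be written as a simple decision rule. The sketch below is purely illustrative: the allele sizes are invented, and a real UPD workup also considers marker informativeness and possible mosaicism.

def classify_marker(child, father, mother):
    # Genotypes are (allele1, allele2) tuples of STR fragment sizes.
    maternal_needed = any(a in mother and a not in father for a in child)
    if maternal_needed:
        return "biparental inheritance"
    if len(set(father)) == 2 and len(set(child)) == 1:
        return "consistent with paternal isodisomy"   # homozygous where the father is heterozygous
    if set(child) == set(father) and len(set(child)) == 2:
        return "consistent with paternal heterodisomy"
    return "uninformative"

# Hypothetical genotypes for the three markers named in the Methods section.
markers = {
    "D15S119":  {"child": (154, 154), "father": (154, 158), "mother": (150, 152)},
    "D15S1028": {"child": (201, 201), "father": (201, 205), "mother": (199, 207)},
    "MMTS2":    {"child": (112, 112), "father": (112, 116), "mother": (118, 120)},
}
for name, genotypes in markers.items():
    print(name, "->", classify_marker(**genotypes))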
Discussion

Results from polymorphic STR marker analysis for the fibrillin-1 gene, located in 15q21.1, indicated that both arms of the aberrant chromosome 15 were inherited from the father, allowing a diagnosis of AS caused by paternal UPD. DNA polymorphic markers demonstrated that the patient was homozygous at all loci for which the father was heterozygous, suggesting that the structural rearrangement was an isodisomic 15q and not a Robertsonian translocation. Similar cases of AS resulting from isodisomic 15q-associated UPD have already been demonstrated by Freeman et al. (1993) [6] and by Robinson et al. (2000) [11]; however, the majority of the previously reported paternal UPD-associated AS cases were heterodisomic [7][8][9][10].
The severity of AS symptoms varies significantly. Bottani et al. (1994) were the first to report that the phenotype of AS with paternal isochromosome 15 is milder than that caused by other mechanisms [12]. Other authors [15] have not observed differences between deletion and UPD cases; moreover, Poyatos et al. (2002) described an even more severe phenotype [3]. The mildest symptoms have been reported for mutations of the UBE3A gene [2,12,14,16], whereas the most severe symptoms are reported for large deletions on chromosome 15 [2,14,16]. Varela et al. (2004) suggested that AS patients with UPD may remain undiagnosed because of their milder or less typical phenotype, leading to an overall under-diagnosis of the disease (Table 1) [17,18]. According to Tan et al. (2011) [4], 46% of AS children with UPD/imprinting defects showed significantly higher body mass index (BMI) than those carrying deletions.
In the investigated patient, we observed dysmorphic features, developmental delay, speech impairment, sleep disturbances, excessive mouthing behavior, short attention span, hand flapping, fascination with water, and characteristic EEG and MRI results. The clinical features of our patient are similar to previously published findings [4,7,9,12]. The patient's AS symptoms are relatively mild, which correlates well with the previous observations that AS patients with UPD usually have less severe clinical symptoms [8,10,11,13]. The BMI of our patient was above the 85th percentile, which correlates well with the previous results of Tan et al. (2011) [4] and further confirms that AS patients with UPD have significantly higher BMI than AS patients with other underlying genetic abnormalities.
The patient was diagnosed with AS at the age of 16 months, earlier than in previous reports of UPD, allowing the parents to be given a correct prognosis and an explanation of the delayed neurological development, as well as the possibility of early interventional therapy. In addition, the parents were counseled that the child is at risk for obesity and its associated complications, which could be managed with lifestyle adjustments. As the aberration was the result of a de novo occurrence, the parents were not counseled on a recurrence risk for further pregnancies.
Conclusions
In this paper we report the case of a 16-month-old Hungarian boy affected by AS due to UPD. The early diagnosis of AS has great significance, as it allows the parents to be given a correct prognosis and offers the possibility of early interventional therapy. The detection of UPD and the review of previous cases reported in the literature also have a pivotal role, since they contribute to a deeper understanding of the phenotype-genotype correlation in AS for non-deletion subclasses. Our data suggest that AS patients with UPD have milder symptoms and higher BMI than AS patients with other underlying genetic abnormalities.
Methods
Cytogenetic analysis of the child and his parents was carried out with standard methods using G banding with the Cytovision imaging system. The results of the cytogenetic studies suggested UPD, and, therefore, further molecular genetic studies were carried out. Genomic DNA was extracted from venous blood of the index patient (III/1), his parents (II/1, II/2), his grandparents (I/1, I/2, I/3, I/4) and his maternal aunts (II/3, II/4, II/5) [19]. Chromosome 15 segregation analysis with intragenic and extragenic markers for the fibrillin-1 gene was performed for all family members using amplified fragment length polymorphism analysis on an ALFexpress instrument [20]. To determine the molecular background and the recurrence risk, primers for the following microsatellite markers were used in the analysis: D15S119, D15S1028 and MMTS2.
Consent
Written informed consent was obtained from the patient's legal guardian for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
|
2017-06-29T12:00:47.539Z
|
2013-09-08T00:00:00.000
|
{
"year": 2013,
"sha1": "00bf30e6e7371fc3f35c4a02cc5a52b31fa8ef0f",
"oa_license": "CCBY",
"oa_url": "https://molecularcytogenetics.biomedcentral.com/track/pdf/10.1186/1755-8166-6-35",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eea2b9de36eb673679138278f688afdbd1c77d85",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
119111429
|
pes2o/s2orc
|
v3-fos-license
|
Microwave assisted coherent and nonlinear control in cavity piezo-optomechanical system
We present a cavity piezo-optomechanical system where microwave and optical degrees of freedom are coupled through an ultra-high frequency mechanical resonator. By utilizing the coherence among the three interacting modes, we demonstrate optical amplification, coherent absorption and a more general asymmetric Fano resonance. The strong piezoelectric drive further allows access to the large-amplitude-induced optomechanical nonlinearity, with which optical transparency at higher harmonics through multi-phonon scattering is demonstrated.
Cavity optomechanics is the study of systems in which an optical cavity is coupled with a mechanical resonator. It has been an active research topic over the past few decades, and many experimental breakthroughs have emerged in recent years. Interesting classical and quantum phenomena, such as regenerative mechanical oscillation [1,2], chaotic dynamics [3], optomechanically induced transparency (OMIT) [4][5][6][7], sideband cooling to the quantum mechanical ground state [8,9], and squeezing of light below the shot noise limit [10][11][12], have been observed experimentally. A comprehensive review of the field can be found in Ref. [13]. From a more general perspective, the optical mode used in the system is not limited to optical photons but can also be an electromagnetic resonance at relatively low frequency, such as the microwave mode in superconducting cavities. Recently, there has been rising interest in combining both optomechanics and electromechanics to realize hybrid opto-electro-mechanical systems where optical and RF/microwave cavities are coupled through a common mechanical resonator [14][15][16][17][18][19][20][21][22]. Such a hybrid system finds applications in microwave photonics and high-frequency oscillators [20] and promises to realize microwave-optical photon interconversion [14,21,22], phonon-mediated electromagnetically induced absorption [23], and entanglement between optical and microwave photons [24].
In this letter, we develop a cavity piezo-optomechanical system using an aluminum nitride (AlN) micro-disk resonator, where the optical and microwave modes are coupled to the mechanical mode through radiation pressure and piezoelectric force. By making use of the coherence among the three interacting modes, we demonstrate coherent absorption and amplification, and a more general asymmetric Fano resonance which can be tuned through the whole phase-space. With the strong piezoelectric actuation, we are able to observe the large-amplitude-induced nonlinear optomechanical response, which allows us to further demonstrate high-harmonic optical transparency through multi-phonon scattering. Figure 1 (b) shows the transmission spectrum of an optical resonance, which has a double-dip feature due to mode-splitting of the originally degenerate clockwise and counter-clockwise modes [25]. The resonance on the shorter (longer) wavelength side has a cavity dissipation rate of κ/2π = 1.02 GHz (0.931 GHz). At room temperature and under atmospheric pressure, we characterize the mechanical resonances by actuating the device piezoelectrically.
The measured power spectra are plotted in Fig. 1 (c). The first three mechanical radial-contour modes are observed at 779.6 MHz, 2.040 GHz, and 3.235 GHz, which agrees well with the results of finite element simulation. The simulated normalized radial displacement profiles of the three modes are shown in the corresponding insets. The phonon dissipation rates of the three modes are measured as γ/2π = 202 kHz, 1.25 MHz, and 961 kHz. The measured displacement shift due to piezoelectric actuation is dR/dV_dc = 4.5 fm/V, which agrees with the numerical simulation result. The actuation efficiency is relatively small compared to other demonstrated systems [16] because of the larger electrode separation used.
The piezoelectric effect in AlN naturally couples the microwave mode to the mechanical mode through an electro-mechanical coupling energy determined by the strain field S, the piezoelectric coupling matrix e, and the electric field E. Unlike the capacitive force used in many electro-mechanical systems (see e.g. Refs. [5,8]), the piezoelectric force depends on the electric field linearly instead of quadratically [16]. The whole system can be described by the Hamiltonian in Eq. (1), in which â, b̂, and ĉ are the annihilation operators of the optical, mechanical, and microwave modes. The third and fourth terms describe the opto-mechanical and electro-mechanical coupling, characterized by the coupling rates g_om and g_em. The last two terms represent the two optical inputs used in the experiment: ŝ_c,in (ŝ_p,in) denotes the optical field of the "control" ("probe") light at frequency Ω_c (Ω_p). Detunings with respect to the optical resonant frequency are defined as ∆_c = Ω_c − Ω_o and ∆_p = Ω_p − Ω_o. The output field can then be obtained from the standard input-output relation. We study the dynamics of the system using the measurement setup shown in Fig. 1 (d). Light from a tunable laser diode (TLD), used as the control light, is sent to the device through an electro-optical modulator (EOM), which creates sidebands that act as the probe light. A fiber polarization controller (FPC) is used to adjust the laser polarization. In the electrical path, a microwave signal with adjustable amplitude and phase is sent to the device through a transmission line. For the study of higher-harmonic interference, a frequency divider with dividing factor N is applied to the microwave signal, which makes the microwave frequency Ω_e N times smaller than the control-probe offset frequency, i.e., |Ω_p − Ω_c| = N Ω_e. The optical signal coming out of the device is amplified by an erbium-doped fiber preamplifier (EDFA), filtered by an optical bandpass filter (OBF) and then collected by a photodetector (PD). A wavelength meter is used to calibrate the laser wavelength and a PID feedback control is used to stabilize the laser intensity.
We blue-detune the control light with respect to the optical resonance on the shorter wavelength side by the frequency of the third mechanical mode. This mode has a frequency Ω_m > κ and so satisfies the resolved-sideband condition. When the probe light is swept across the optical resonance, the presence of the control light and the microwave actuation opens a transparency window in the originally absorptive region. The transmission coefficient of the probe light can be derived from the Hamiltonian in Eq. (1) using the input-output formalism. If the force acting on the mechanical mode is assumed to be dominated by the piezoelectric force, it can be shown that the probe transmission coefficient T = s_p,out/s_p,in is given by Eq. (2), a sum of two terms in which |c| and ϕ are the amplitude and phase of the microwave mode. The first term is the transmission coefficient of the probe in the absence of the control light or the microwave input. Its absolute-squared value gives the typical inverted Lorentzian shape representing the absorption due to the optical resonance. The second term is the interference term which gives rise to the transparency window. When these two terms interfere constructively, increasing the microwave amplitude or the control-to-probe ratio raises the magnitude of the second term, which compensates the loss described in the first term and can even amplify the probe signal with an overall gain higher than 1, as shown in Fig. 2 (a). When the phase of the microwave mode ϕ is shifted by π, the two terms in Eq. (2) interfere destructively, which causes the originally absorptive region to become even more absorptive.
As shown in Fig. 2 (b), absorption extinction down to 30 dB is achieved, while the original cavity absorption extinction is about 3 dB. In this case, the two resonance circles align in opposite directions, as shown in the complex plot in the upper-right inset. At large n_m/n_p the resonance circle goes beyond the origin, causing the group delay of the probe light to change from negative (advanced light) to positive (slowed light). Slowed light with a delay of 0.76 µs and a transmission of 20% is achieved at room temperature.
Besides setting the phase of the microwave mode ϕ to 0 or π, ϕ can be tuned continuously from 0 to 2π, causing the two resonance circles to rotate with respect to each other, as shown in Fig. 2 (c). As a result, the probe transmission becomes neither a peak nor a dip but a more general asymmetric Fano shape. A contour plot of the probe transmission as a function of |Ω_c − Ω_p| and ϕ is shown in Fig. 2 (d), and its cross-sectional plots at different ϕ are shown in Fig. 2 (e). With control of the microwave phase, we are able to tune the Fano resonance through the whole phase-space. The asymmetric Fano resonance is widely observed in many branches of physics (see e.g. Refs. [26,27]) and has also been studied recently in the context of optomechanical systems [28]. It is a manifestation of interference between a continuum and a discrete excitation mode [29]. Here, the broader optical resonance takes the role of the continuum mode.
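The phase dependence of this interference can be visualized with a generic two-path model: a broad cavity dip plus a narrow, phase-tunable mechanically mediated pathway. The snippet below is only a schematic with made-up parameters, not the expression in Eq. (2) or a fit to the device; it simply shows how sweeping the microwave phase ϕ morphs the lineshape from a transparency peak through an asymmetric Fano shape to enhanced absorption.

import numpy as np
import matplotlib.pyplot as plt

kappa = 1.0                              # broad optical linewidth (arb. units)
gamma = 0.02                             # narrow mechanical linewidth
delta = np.linspace(-0.3, 0.3, 2001)     # probe detuning

def probe_transmission(phi, amp=0.003):
    # Broad, undercoupled cavity dip acting as the "continuum" pathway.
    background = 1.0 - 0.7 / (1.0 + 2j * delta / kappa)
    # Narrow mechanically mediated pathway whose phase follows the microwave drive.
    narrow = amp * np.exp(1j * phi) / (gamma / 2 - 1j * delta)
    return np.abs(background + narrow) ** 2

for phi in (0.0, np.pi / 2, np.pi):
    plt.plot(delta, probe_transmission(phi), label=f"phi = {phi:.2f} rad")
plt.xlabel("probe detuning (arb. units)")
plt.ylabel("|T|^2")
plt.legend()
plt.show()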
Compared to the traditional OMIT where the mechanical actuation is from the optical force, here the actuation is provided by the piezoelectric force of the microwave mode [16].
Because of the stronger electrical drive, the transparency phenomenon can be observed under a less stringent condition. From Eq. (2), it can be shown that the condition for complete transparency is n_m/n_p (2G/κ) ≥ 1, where G = √n_c g_om is the modified optomechanical coupling rate. For comparison, the condition for the traditional OMIT is 4G²/κγ ≫ 1.
Here, the effect can be enhanced by increasing the phonon number with coherent microwave drive.
The strong piezoelectric interaction between the microwave and the mechanical mode also facilitates the study of nonlinear dynamics in the optomechanical system. After a suitable transformation of the Hamiltonian in Eq. (1) (yielding Eq. (3)), and with the control light blue-detuned at ∆_c = N Ω_m, the rotating wave approximation gives a higher-order interaction term proportional to (g_om/Ω_m)^N â†(b̂†)^N + h.c., which describes the multi-phonon process illustrated in the schematic in Fig. 3 (a). This nonlinear interaction term in the Hamiltonian is the key to generating nonclassical states, and it leads to high-harmonic OMIT in optomechanical systems [32,33]. Such a high-order interaction may also lead to coherent multi-boson generation, which can be used to dynamically stabilize the protected cat-qubit encoding [34]. However, the prefactor (g_om/Ω_m)^N decays rapidly with N unless the system is close to the single-photon strong-coupling regime with g_om > Ω_m [30], which is out of reach with the current state of the art. A detailed analysis for the case N = 2 shows that the effect is observable when g_om is within a small fraction of Ω_m and κ [32,33]. However, higher-order interactions with N > 2 are still inaccessible. A signature of this large-amplitude-induced nonlinearity can be observed from the mechanical driven response. When only one laser is used to probe the mechanical motion, it can be shown from the transformed Hamiltonian in Eq. (3) (or using the approach adopted in Refs. [35,36]) that the detected driven signal is proportional to (Σ_n J_n(x) J_{n+1}(x) / (x K_n* K_{n+1})) |b|e^{iθ}, where K_n = 1 − 2i(∆ + nΩ)/κ. When x ≪ 1, the summation term becomes independent of x and so the detected signal is linear in the displacement |b|e^{iθ}. When x > 1, the driven response becomes amplitude dependent. Fig. 3 (b) plots the driven response of the first mechanical mode under weak (blue) and strong (red) drive. Fig. 3 (c) plots the same data in the complex plane with the weak-drive data magnified 4 times. A deviation from the Lorentzian shape (or circular trajectory in the complex plane) under strong drive can be clearly observed. The black solid lines are the fitted curves using the full expression, with g_om/Ω and ∆/Ω as the fitting parameters. From the fitting, it can be deduced that x = 1.71 is reached in the strong-drive data.
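The amplitude dependence of the driven response quoted above can be checked numerically. The snippet below evaluates the quoted summation for a few drive amplitudes x, using parameter values only loosely inspired by the device (linewidths of order 1 GHz and a ~780 MHz mode); it is a sanity-check sketch rather than the fitting code used for Fig. 3.

import numpy as np
from scipy.special import jv   # Bessel function of the first kind J_n

kappa = 2 * np.pi * 1.0e9      # optical linewidth (assumed, order of magnitude)
Omega = 2 * np.pi * 0.78e9     # first radial-contour mode frequency
Delta = 0.5 * kappa            # assumed control-laser detuning

def driven_signal(x, n_max=20):
    # Evaluate sum_n J_n(x) J_{n+1}(x) / (x K_n* K_{n+1}) with
    # K_n = 1 - 2i(Delta + n*Omega)/kappa, as quoted in the text.
    total = 0.0 + 0.0j
    for n in range(-n_max, n_max):
        Kn  = 1 - 2j * (Delta + n * Omega) / kappa
        Kn1 = 1 - 2j * (Delta + (n + 1) * Omega) / kappa
        total += jv(n, x) * jv(n + 1, x) / (x * np.conj(Kn) * Kn1)
    return total

for x in (0.05, 0.5, 1.71):
    s = driven_signal(x)
    print(f"x = {x:4.2f}: |factor| = {abs(s):.4e}, phase = {np.angle(s):+.3f} rad")

In the small-amplitude limit the factor is dominated by the n = 0 term and is essentially independent of x, consistent with the linear response described in the text, while for x approaching the fitted value of 1.71 the result changes appreciably, reflecting the amplitude-dependent lineshape.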
To demonstrate optical transparency at higher harmonics using this nonlinear effect, we insert a frequency divider into the microwave path (see Fig. 1 (d)) so that the frequency offset between the probe and control, |Ω_p − Ω_c|, is N times the microwave frequency. The detuning of the control laser is fixed at ∆_c = N Ω_m. Using the Bessel function expansion described above, it can be shown that the probe transmission coefficient again takes the form of a sum of two terms. The first term is the probe transmission in the absence of the control light. The interference caused by the second term, due to the multi-phonon process, opens a transparency window at the N-th harmonic frequency away from the control light. In this measurement we use the first and the second mechanical modes, which have lower spring constants and so can be driven to larger amplitude. Fig. 3 (d) shows the high-harmonic optical transparency at various ∆_c and frequency dividing factors N. For the second mechanical mode, a transparency window can be observed at the second harmonic frequency (2f_m2 = 4.08 GHz), while for the first mechanical mode, transmission transparency at frequencies up to the eighth harmonic (8f_m1 = 6.24 GHz) can be observed. This demonstration shows that the coherent microwave drive is a useful tool for studying nonlinear effects in optomechanical systems. Investigation of the quantum aspect of the effect will be an interesting topic for further study.
In conclusion, we develop a hybrid opto-electro-mechanical system in which the microwave and optical modes are coupled to a common mechanical mode. We demonstrate coherent absorption and amplification, and the more general asymmetric Fano resonance. Using the strong piezoelectric drive, we operate the mechanical mode in large amplitude where high-harmonic transmission transparency through multi-phonon scattering is demonstrated.
We thank Changling Zou and Menno Poot for helpful discussion. We acknowledge fund-
|
2014-04-13T21:16:35.000Z
|
2014-04-13T00:00:00.000
|
{
"year": 2014,
"sha1": "77d64779ba4af0dabd46ef763a59c3ef1f6e5a88",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1404.3427",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "77d64779ba4af0dabd46ef763a59c3ef1f6e5a88",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
239866017
|
pes2o/s2orc
|
v3-fos-license
|
RESPONSE OF SYNGONIUM PODOPHYLLUM PLANT TO SOME SYNTHETIC CYTOKININ TYPES AND CONCENTRATIONS AS A FOLIAR APPLICATION
This investigation was executed to assess the effects of three synthetic cytokinins [6-benzylaminopurine (BAP), 6-(γ,γ-dimethylallylamino)purine (2iP) and furfurylaminopurine (kinetin)] at three concentrations each (100, 200 and 300 mg/l), besides the control (tap water), on the vegetative growth and some chemical constituents of Syngonium podophyllum plants. Two pot experiments were executed during the two successive seasons of 2019 and 2020 in a commercial farm in Belqas Khamis, Dakahlia Governorate, Egypt. The obtained results generally revealed that spraying the three types and concentrations of synthetic cytokinins significantly enhanced plant height, number of leaves/plant, leaf area, foliage fresh and dry weight, root length, and root fresh and dry weight compared to the control plants. Moreover, spraying of synthetic cytokinins significantly increased N%, P%, K%, total carbohydrates, total phenolics, chlorophylls and carotenoids content in leaves. Meanwhile, spraying of 2iP at 200 mg/l gave the highest values for most of the vegetative growth characters (plant height, leaf number and foliage fresh weight) and chemical composition (chlorophyll a, b, a+b, carotenoids, total carbohydrates and N, P and K contents in leaves) compared to other treatments. However, applying kinetin at 200 mg/l gave higher values of foliage fresh and dry weight and chlorophyll a than the other concentrations. Besides, spraying of BAP at 100 mg/l gave the highest roots fresh and dry weight, while spraying of BAP at 200 mg/l gave the highest value of total phenolics content compared to other treatments. Generally, the examined cytokinin types and concentrations could be arranged in descending order of their positive effects on Syngonium podophyllum as 2iP at 200 mg/l, BAP at 100 or 200 mg/l, and kinetin at 200 mg/l.
INTRODUCTION
Arrowhead vine (Syngonium podophyllum L.) is an enduring evergreen herbaceous vine. It is recognized by its simple, loosely arranged and drooping leaves and is one of the most popular and versatile foliage plants. It belongs to the family Araceae and is native to Africa and South America. Syngonium is an indoor plant ordinarily used in hanging baskets, and if upright growth is desired, a trellis or other support is required. Additionally, plants can be used as ground covers and in various settings such as workplaces, clinics, shops, windows, meeting rooms, commercial buildings, and residences (Abd El-Aziz et al., 2007).
Foliar application involves spraying various substances onto the leaves and stems of plants, where they are absorbed. Foliar application has been reported to increase yield, protect against diseases and pests, improve drought resistance, and enhance yield quality (Durrani et al., 2010).
Plant growth regulators (PGRs) are natural or synthetic compounds without nutritive value that influence formation or metabolic processes in higher plants. They are applied to regulate plant growth and development and are important tools for improving agricultural production. PGRs are used in agriculture to control plant development and obtain specific benefits, such as reducing sensitivity to biotic and abiotic stress, improving morphological structure, increasing crop quantity and quality, and modifying plant constituents (El-Bably and Rashed, 2017).
Cytokinins are used to enhance axillary branching, cell division, chloroplast development, control of apical dominance, and shoot and root growth (Khandaker et al., 2018). Besides, cytokinins modify some important developmental processes, including the last developmental stage of the leaf, known as senescence, which is associated with chlorophyll breakdown, photosynthetic deterioration and oxidative damage. There is ample evidence that cytokinins can delay the changes accompanying senescence (Hönig et al., 2018).
6-Benzylaminopurine (BAP) is one of the first synthetic cytokinins used as a plant growth regulator in agriculture. It increased the total number of vines and leaves per arrowhead plant at 60 and 90 days after application, and the resulting increase in branch number could help growers offer a wider choice of cultivars (Sardoei et al., 2018). Moreover, BAP application increased leaf number and the development of new shoots, promoted early flowering and produced better-quality spikes in a Phalaenopsis hybrid (Mishra et al., 2018). Similar findings were reported by Nambiar et al. (2012) in a Dendrobium hybrid, where application of BAP at 150-200 ppm increased leaf number. Foliar spraying of BAP at lower concentrations had a significant effect on characteristics such as leaf number, leaf and shoot length, tiller proliferation, and rhizome number and width, which can be exploited to improve ginger growth (Bezabih et al., 2017). Kinetin (Kin) is a synthetic cytokinin that plays a significant role in enhancing nutrient movement and transport towards areas of high metabolic activity, as well as in enhancing cell division (Taiz and Zeiger, 2010). Applying a mixture of kinetin, salicylic acid and yeast suspension as a spray on Aloe vera L. enhanced the production of active medicinal compounds by improving vegetative growth, leaf number, width and thickness, photosynthesis, nutrient absorption and transport, and the activation of certain important enzymes (Abd-Ul Razzaq and Mohammed, 2019). Similar findings were reported by Youssef et al. (2004), who showed that foliar application of kinetin to Matthiola incana L. had a significant positive effect on plant growth.
6-(γ,γ-Dimethylallylamino)purine (2iP) is an adenine-based cytokinin that is generally considered to be the second most potent of all the cytokinins after zeatin. Moreover, 2iP is a bacteria-derived riboside cytokinin used to grow plant tissues such as tobacco and soybean callus and is considered a precursor of zeatin. In addition, 2iP application increased the average number and length of leaves of Phoenix dactylifera L. (Almeer, 2020).
Ornamental plants such as Syngonium podophyllum are usually grown in pots for commercial purposes. In this case, the roots are not allowed to grow freely (Di Benedetto, 2011), and this limitation is related to restricted cytokinin production (O'Hare et al., 2004), which in turn adversely affects the growth of the aerial parts (Kyozuka, 2007). Therefore, this research aims to improve the vegetative growth and chemical contents of Syngonium podophyllum, one of the most important indoor plants, in response to foliar spraying with different types and concentrations of synthetic cytokinins.
Experimental location:
This experiment was carried out at a commercial farm in Belqas Khamis, Dakahlia Governorate, Egypt (latitude 31.29, longitude 31.39) in the North Middle Nile Delta area during the two consecutive seasons of 2019 and 2020 under a saran house (63% shading).
Culturing process:
Uniform rooted cuttings of Syngonium podophyllum (18-20 cm in length), planted in 10 cm plastic pots, were purchased from a commercial nursery (Abnaa Shaesha) at El-Mansoura city, Egypt. Rooted cuttings were transplanted individually on 15th March during the 2019 and 2020 seasons into 15 cm diameter black plastic pots filled with a mixture of peat moss and vermiculite (2:1, v/v) with a moisture content of about 70% and pH of 5.9. Plants were transplanted again on 1st May into 25 cm diameter plastic pots filled with 2.5 kg of the same potting mixture supplemented with 2 g/pot NPK (20:20:20). All plants were fertilized monthly, starting two weeks after transplanting, with 1 g/pot NPK applied as a drench from 15th May to 1st September (four times). All other agricultural operations, such as irrigation, were carried out regularly under normal conditions. The average maximum and minimum temperatures from May to September were 36 and 17 °C, respectively.
Growth regulators preparation and procedure:
Three different cytokinins were used as foliar sprays: 6-benzylaminopurine (BAP, C12H11N5), 6-(γ,γ-dimethylallylamino)purine (2iP, C10H13N5) and 6-furfurylaminopurine (kinetin, C10H9N5O). A stock solution (1000 mg/l) of each cytokinin was prepared by dissolving 0.05 g of the powder in 5 ml of 1.0 N KOH with the addition of Tween-20 at 0.1% as a surfactant, vortexing until no powder was visible, and then bringing the volume to a final 50 ml with 45 ml of deionized water. Three dilutions of each cytokinin were then prepared (100, 200 and 300 mg/l), in addition to the control (tap water). Plants were sprayed monthly with the different cytokinin concentrations beginning 15 days after transplanting, starting from 15th May, with 250 ml of cytokinin solution per treatment.
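The working solutions described above follow the standard dilution relation C1·V1 = C2·V2. The short calculation below gives the stock and water volumes needed for each concentration, assuming for illustration a 250 ml batch of spray solution per treatment; the batch size actually prepared is not stated in the protocol beyond the 250 ml applied per treatment.

stock_conc = 1000.0          # mg/l cytokinin stock
batch_volume_ml = 250.0      # assumed working-solution volume per treatment

for target in (100.0, 200.0, 300.0):                    # mg/l working concentrations
    stock_ml = target * batch_volume_ml / stock_conc    # V1 = C2*V2/C1
    water_ml = batch_volume_ml - stock_ml
    print(f"{target:5.0f} mg/l: {stock_ml:5.1f} ml stock + {water_ml:5.1f} ml water")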
Experimental design:
Pots were laid out as a simple experiment in a completely randomized block design under the saran house. The ten treatments were as follows: BAP (100, 200 and 300 mg/l), 2iP (100, 200 and 300 mg/l), kinetin (100, 200 and 300 mg/l) and the control. Each treatment comprised three replicates, each replicate consisted of four pots, and every pot contained one plant.
Data Recorded:
Vegetative growth: Data on vegetative growth were recorded 135 days after transplanting, on 15th September. Plant height (cm) was measured from the soil surface in the pot to the highest point of the plant; the other recorded parameters were the number of leaves/plant, leaf area (cm²) of the third basal leaf, foliage fresh weight (g), foliage dry weight (g), root length (cm), and roots fresh and dry weight (g).
Chemical analysis:
Chemical determinations were carried out in parallel with the vegetative measurements: chlorophyll a, b, total chlorophyll and carotenoids were determined according to Costache et al. (2012); proline content was assessed using the protocol of Bates et al. (1973); total phenolics content was estimated by a colorimetric method according to Chaovanalikit and Wrolstad (2004); total carbohydrates percentage was determined according to Herbert et al. (1971); nitrogen percentage was evaluated by the modified micro-Kjeldahl technique as described by Pregl (1945); phosphorus percentage was evaluated according to Rao et al. (1997); and potassium percentage was measured following Black (1965).
Statistical analysis:
Data were subjected to analysis of variance (ANOVA) as a simple experiment in a completely randomized block design using the COSTAT (1986) v. 6.303 program. Comparisons between means were made using Duncan's multiple range test according to Snedecor and Cochran (1989) at the 0.05 probability level.
Impact of BAP, 2iP and kinetin foliar application on vegetative growth:
The responses of plant height (cm), number of leaves/plant, leaf area (cm²), foliage fresh weight (g), foliage dry weight (g), root length (cm), and roots fresh and dry weight (g) of syngonium plants to the different cytokinin types applied as a foliar spray are shown in Tables (1), (2) and (3) and Fig. (1). The data indicated that arrowhead (Syngonium podophyllum L.) plants sprayed with 2iP at 200 mg/l had the highest plant height (47.67 and 47.77 cm) in the first and second season, respectively. Plants sprayed with BAP at 200 mg/l ranked second for this character (41.00 and 42.33 cm in the two seasons, respectively). On the other hand, applying BAP at the highest concentration (300 mg/l) produced the shortest plants in both seasons (27.00 and 26.33 cm), followed by the control treatment. Generally, it was noticeable that the highest concentrations of both BAP and 2iP were accompanied by a decrease in plant height compared to kinetin. In addition, the highest number of leaves was recorded for plants sprayed with BAP at 200 mg/l (9.67 and 10.00), followed by 9.67 and 9.33 for plants treated with Kin at 100 mg/l, then 9.00 and 9.33 obtained with 2iP at 200 mg/l, respectively, in both seasons, without any significant differences among them, while the lowest number of leaves (6.67 and 6.67) was recorded for untreated plants in the two seasons, respectively. As for leaf area, the highest values were recorded for plants sprayed with 2iP at 300 mg/l. As for syngonium foliage fresh weight, dry weight and root length, data presented in Table (2) indicated that all these characteristics were significantly affected by the application of the different cytokinin types and concentrations as a foliar spray. In both seasons, the highest values of foliage fresh weight, (55.35 and 55.75 g) and (55.33 and 54.45 g), were obtained from syngonium plants treated with Kin and 2iP at 200 mg/l, respectively. In second place, applying Kin at 300 mg/l and BAP at 100 mg/l also recorded higher values than most other treatments, (51.16 and 51.36 g) and (50.45 and 51.62 g) respectively in both seasons, without significant differences from the superior treatments mentioned above, whereas the lowest foliage fresh weights in both seasons were obtained from the control, BAP at 300 mg/l and the 2iP treatments, without significant differences between them. In parallel, the use of kinetin at 200 mg/l gave pronouncedly significant values (7.34 and 7.63 g) for foliage dry weight compared with all the other treatments in both seasons, respectively, while the lowest values in this respect were recorded for 2iP at 100 mg/l and the control treatments.
Although the maximum root length values in both seasons were obtained from the control treatment (104.00 and 103.07 cm), applying Kin at 300 mg/l, Kin at 200 mg/l and Kin at 100 mg/l, as well as BAP at 300 mg/l, gave high root length values without significant differences from the control, while the shortest root length was obtained from plants sprayed with 2iP at 100 mg/l (48.83 and 51.20 cm) in both seasons, respectively. Data illustrated in Table (3) indicated that in both seasons roots fresh weight per plant was significantly affected by the application of some cytokinin concentrations as a foliar spray. Plants treated with BAP at 100 mg/l gave the highest roots fresh weight per plant (74.52 and 76.14 g) compared with the remaining treatments during both seasons, respectively. The second rank resulted from spraying BAP at 200 mg/l (67.88 and 65.58 g). It was also evident that there were no significant differences between the control and plants sprayed with 2iP at 300 mg/l or Kin at 100, 200 or 300 mg/l during the first season. Moreover, non-significant differences between the control plants and plants treated with Kin at 100 and 200 mg/l were obtained in the second season. The lightest weights, (42.79 and 43.86 g) and (43.18 and 41.95 g), were obtained from plants sprayed with 2iP at 100 and 200 mg/l in both seasons, respectively. In parallel, the treatments that gave the highest roots fresh weight also recorded the highest roots dry weight, since applying BAP at 100 or 200 mg/l recorded pronouncedly significant values in roots dry weight compared to the control and the other treatments.
Impact of BAP, 2iP and Kin foliar application on chemical contents:
As for the influence of the studied cytokinins on chlorophyll a, b, a+b and total carotenoids in Syngonium podophyllum leaves, it was clear from the data in Table (4) that applying BAP, 2iP and Kin significantly enhanced these characteristics compared with the control plants. In addition, the highest chlorophyll a, b, a+b and total carotenoid contents in both seasons were achieved using 2iP at 200 mg/l. The second highest contents were achieved using 2iP at 300 mg/l, and the third highest were recorded for Kin or BAP at 300 mg/l.
On the other side, data in Table (5) showed that the highest total phenolics contents in both seasons were achieved using BAP at 200 mg/l (11.38 and 10.66 mg GAE/g DM). Applying BAP at 100 or 300 mg/l also recorded superior total phenolics values compared with the remaining treatments. In contrast, the control plants and those sprayed with any of the Kin concentrations gave the lowest values in this respect. Moreover, spraying syngonium with 2iP at 200 mg/l significantly increased the total carbohydrates percentage in leaves compared with the control plants, followed in descending order by BAP at 200 mg/l, 2iP at 300 mg/l, BAP at 300 mg/l and Kin at 200 or 300 mg/l, while differences between the remaining treatments and the control were not significant. On the other hand, the control plants had significantly higher proline content (μg/g fw) in syngonium leaves than all cytokinin types and concentrations. The second highest proline contents were obtained with Kin at 100 mg/l and BAP at 300 mg/l.
Data in Table (6) showed that spraying syngonium with most concentrations of the used cytokinins increased the nitrogen (N), phosphorus (P) and potassium (K) percentages in leaves compared with the control during the 2019 and 2020 seasons. Moreover, spraying 2iP at 300 mg/l gave the highest nitrogen percentages in both seasons compared with the other treatments, while the highest percentages of P and K in both seasons were recorded when 2iP at 200 mg/l was applied. On the other hand, the lowest N, P and K percentages in leaves were recorded for the untreated plants in both seasons of the study.
DISCUSSION
Most houseplants are tropical evergreen species adapted to survive in a tropical climate of 15 °C to 25 °C (60 °F to 80 °F) year-round. The natural range of the species whose varieties are used as houseplants allows important conclusions to be drawn about their husbandry requirements. As a result of climate change, it has become important to protect these plants from stress factors such as changes in temperature, humidity and harmful atmospheric gases.
Literature data suggest that cytokinins play a significant role in protecting plants from unfavorable conditions; however, the effect of these hormones depends on stress intensity. Under moderate stress, cytokinins help maintain plant growth, whereas a drop in cytokinin levels inhibits growth under stress (Veselov et al., 2017). Cytokinins also promote cell division and cell expansion in plant tissue, and various studies have identified suitable cytokinin types and concentrations for each species (Ružić and Vujović, 2008).
Generally, the results of this research showed positive relationships between foliar cytokinin application and most above-ground vegetative and root parameters, as well as the internal chemical contents, of Syngonium podophyllum L. plants.
The vegetative parameters were affected by application of the different cytokinin concentrations (Table 1). The greatest plant height was obtained when 2iP was applied at 200 mg/l, followed by BAP at 200 mg/l. The maximum number of leaves was recorded for plants sprayed with BAP at 200 mg/l, followed by Kin at 100 mg/l and then 2iP at 200 mg/l in both seasons, with no significant differences between them. The highest leaf area values were recorded when syngonium plants were sprayed with 2iP at 300 mg/l. In this regard, Khandaker et al. (2018) reported that kinetin improved plant height, number of leaves, number of branches and leaf area in stevia, which might be due to activation of apical and lateral meristems. These results agree with those of Yadav (2013), who reported large leaf area in marigold when kinetin was used at 150 mg/l, and with Liang et al. (2010), who showed that kinetin controls leaf development. Singh (2006) found that foliar spraying with kinetin at 10 mg/l at the vegetative and flowering stages improved all growth characteristics as well as the yield of okra. Almeer (2020) noticed that the average number of vegetative branches of the Hillawi date palm cultivar increased significantly when using BAP compared with 2iP and Kin. Similarly, Taheri and Haghighi (2018) reported that foliar application of BAP enhanced plant height and shoot and root dry weight in bell pepper. In addition, the use of 6-benzylaminopurine increased the number of leaves in orchids (Mishra, 2018), and comparable results were found by Nambiar et al. (2012) in a Dendrobium hybrid, where BAP at 150-200 mg/l increased the number of leaves.
Applying Kin or 2iP at 200 mg/l gave the highest foliage fresh weight of Syngonium podophyllum plants, followed by Kin at 300 mg/l and BAP at 100 mg/l (Table 2). Treatments with 2iP produced mainly large plants and considerably influenced foliage fresh and dry weights, particularly at the medium concentration (200 mg/l). For the foliage weights, the increase became greater as the BAP concentration declined. With the applied cytokinin concentrations, our results suggest that the roots were shorter than in untreated plants, which was largely a positive effect associated with stronger stems, large leaves and good overall characteristics.
The highest root fresh and dry weights were obtained with BAP at 100 mg/l (Table 3). This result indicates an inverse relationship between BAP concentration and these parameters, since applying BAP at higher concentrations (200 or 300 mg/l) reduced root fresh and dry weights compared with the lowest concentration (100 mg/l). These results may be due to cytokinin-driven diversion of assimilates and mineral nutrients towards shoot meristems rather than to roots (De Lojo and Di Benedetto, 2014).
Using cytokinins not only improved vegetative growth but also enhanced pigments, as all applied cytokinins significantly promoted the pigment content of fresh Syngonium leaves (Table 4). The highest chlorophyll a, b, a+b and total carotenoid contents in both seasons were achieved using 2iP at 200 mg/l, the second highest with 2iP at 300 mg/l, and the third highest with Kin and BAP at 300 mg/l. These results are in harmony with those of George and Shemington (2008), who reported that increases in chlorophyll formation and protein synthesis in tissues result from treatment with synthetic cytokinins. Exogenous application of kinetin increased photosynthetic pigment contents in the leaves of corn (Kaya et al., 2010). In addition, kinetin protects chlorophylls against photo-oxidation by enhancing the concentration of carotenoids (Petrenko and Biryukova, 1977). The application of cytokinin may increase the chlorophyll content of leaf tissues because it reduces chlorophyll degradation and delays the aging process (Xu et al., 2011).
Our data showed that spraying Syngonium plants with most cytokinin concentrations significantly enhanced the total phenolic content of the leaves compared with untreated plants. The maximum total phenolic content in both seasons was achieved with BAP at 200 mg/l, followed by BAP and 2iP at 300 mg/l. These results are in line with those obtained by Aslam et al. (2016) on spinach, who revealed that foliar application of plant growth regulators improved the content of individual phenolic acids in the leaves compared with the controls. Moreover, spraying Syngonium with 2iP at 200 mg/l significantly increased the total carbohydrate percentage compared with the untreated plants.
The second highest total carbohydrate contents were obtained by spraying BAP at 200 mg/l, 2iP or BAP at 300 mg/l and Kin at 200 or 300 mg/l, while the remaining treatments did not differ significantly from the control. In contrast, the highest proline content was found in the untreated plants, followed by Kin at 100 mg/l. This agrees with Aslam et al. (2016), who recorded a decrease in proline content in response to plant growth regulator treatments compared with the control.
Our findings showed that spraying syngonium with most of the applied cytokinin concentrations increased the N, P and K percentages compared with the control (Table 6). This result is supported by the findings of Ruffel et al. (2011), who showed that cytokinins appear to increase the nitrogen content in plants. Similarly, Singh and Paliwal (2017) revealed that the use of kinetin increased phosphorus, nitrogen and protein contents in okra fruit, which might be related to effects on plant senescence. Also, Moatshe et al. (2011) found that the leaf mineral content of Morula trees sprayed with BAP was significantly higher than in control trees.
CONCLUSION
From our findings, it can be concluded that, among the different synthetic cytokinins tested, foliar application of BAP at 100 mg/l, 2iP at 200 mg/l or Kin at 200 mg/l was the most effective for improving the vegetative growth and chemical contents of Syngonium podophyllum plants under the conditions of this study.
Fig. 1. Impact of BAP, 2iP and Kin on vegetative growth of Syngonium podophyllum plants 135 days after the beginning of the experiment during the 2019 season.
|
2021-10-25T15:09:17.266Z
|
2021-09-01T00:00:00.000
|
{
"year": 2021,
"sha1": "a69ce4f9ff231c2d7fb2b7cfc2c8395ba83f4bf9",
"oa_license": "CCBYNCSA",
"oa_url": "https://sjfop.journals.ekb.eg/article_198629_409c2713c2152d45c5c166ffca2cb455.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8f717163c9bc375814b8c0e852595dc3c338ace3",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
}
|
11214924
|
pes2o/s2orc
|
v3-fos-license
|
Concordance of Results from Randomized and Observational Analyses within the Same Study: A Re-Analysis of the Women’s Health Initiative Limited-Access Dataset
Background Observational studies (OS) and randomized controlled trials (RCTs) often report discordant results. In the Women's Health Initiative Calcium and Vitamin D (WHI CaD) RCT, women were randomly assigned to CaD or placebo, but were permitted to use personal calcium and vitamin D supplements, creating a unique opportunity to compare results from randomized and observational analyses within the same study. Methods WHI CaD was a 7-year RCT of 1g calcium/400IU vitamin D daily in 36,282 post-menopausal women. We assessed the effects of CaD on cardiovascular events, death, cancer and fracture in a randomized design, comparing CaD with placebo in the 43% of women not using personal calcium or vitamin D supplements, and in an observational design, comparing women in the placebo group (44%) using personal calcium and vitamin D supplements with non-users. Incidence was assessed using Cox proportional hazards models, and results from the two study designs were deemed concordant if the absolute difference in hazard ratios was ≤0.15. We also compared results from WHI CaD to those from the WHI Observational Study (WHI OS), which used similar methodology for analyses and recruited from the same population. Results In WHI CaD, for myocardial infarction and stroke, results of unadjusted and 6/8 covariate-controlled observational analyses (age-adjusted, multivariate-adjusted, propensity-adjusted, propensity-matched) were not concordant with the randomized design results. For death, hip and total fracture, colorectal and total cancer, unadjusted and covariate-controlled observational results were concordant with randomized results. For breast cancer, unadjusted and age-adjusted observational results were concordant with randomized results, but only 1/3 other covariate-controlled observational results were concordant with randomized results. Multivariate-adjusted results from WHI OS were concordant with randomized WHI CaD results for only 4/8 endpoints. Conclusions Results of randomized analyses in WHI CaD were concordant with observational analyses for 5/8 endpoints in WHI CaD and 4/8 endpoints in WHI OS.
Methods
WHI CaD was a 7-year RCT of 1g calcium/400IU vitamin D daily in 36,282 post-menopausal women. We assessed the effects of CaD on cardiovascular events, death, cancer and fracture in a randomized design, comparing CaD with placebo in the 43% of women not using personal calcium or vitamin D supplements, and in an observational design, comparing women in the placebo group (44%) using personal calcium and vitamin D supplements with non-users. Incidence was assessed using Cox proportional hazards models, and results from the two study designs were deemed concordant if the absolute difference in hazard ratios was ≤0.15. We also compared results from WHI CaD to those from the WHI Observational Study (WHI OS), which used similar methodology for analyses and recruited from the same population.
Results
In WHI CaD, for myocardial infarction and stroke, results of unadjusted and 6/8 covariate-controlled observational analyses (age-adjusted, multivariate-adjusted, propensity-adjusted, propensity-matched) were not concordant with the randomized design results. For death, hip and total fracture, colorectal and total cancer, unadjusted and covariate-controlled observational results were concordant with randomized results. For breast cancer, unadjusted and age-adjusted observational results were concordant with randomized results, but only 1/3 other covariate-controlled observational results were concordant with randomized results. Multivariate-adjusted results from WHI OS were concordant with randomized WHI CaD results for only 4/8 endpoints.
Introduction
The role that observational studies reporting effects of treatments should play in informing clinical practice is debated. Marked differences in the results of high-profile randomized controlled trials (RCTs) and observational studies have led to questions about the reliability of results of observational studies. The observational Nurses' Health Study reported that use of oestrogen with or without progesterone was associated with a substantial reduction in the risk of cardiovascular disease in post-menopausal women [1,2]. However in two large RCTs, women randomly allocated to oestrogen and progesterone treatment had increases in risk of cardiovascular disease [3,4]. Similarly, observational studies suggested benefits for antioxidants on cancer prevention [5] and folic acid/ B vitamins for cardiovascular disease [6], but later RCTs reported either harms [7,8] or no benefits [9][10][11] from these agents. In contrast, results from systematic reviews show generally good agreement between results from observational studies and those from RCTs [12][13][14]. However, within these systematic reviews, discrepancies did occur and substantial differences in the estimated magnitude of treatment effect between the different study designs were common [14]. For example, 62% of observation and randomized studies on the same topic had a >50% difference in the odds ratio [14].
There are many potential reasons for differences in results between observational studies and RCTs. They might result from differences in study design-for example, study populations may differ; RCTs are usually smaller and may not detect small effects; and RCTs usually involve shorter treatment exposure. Other differences might arise through confounding and bias in observational studies. Users of dietary supplements are generally healthier and of higher socioeconomic status than non-users, and these factors are often difficult to control for in statistical analyses. Thus, some of the benefits observed in the observational studies for such agents may reflect underlying health differences between people who use supplements and those who do not, even though attempts were made to adjust for such differences in statistical models.
The Women's Health Initiative Calcium and Vitamin D trial (WHI CaD) represents a unique opportunity to explore differences in results between observational studies and RCTs. WHI CaD was a very large, long duration RCT that permitted the non-protocol use of study agents: women were randomly assigned to CaD or placebo, but were permitted to use personal calcium and vitamin D supplements. At randomization, 57% of participants were using either personal calcium or vitamin D supplements. Thus, it is possible to compare results from the two different study designs within the same study: a randomized design comparing the effects of CaD with placebo in women not using personal calcium or vitamin D supplements, and an observational design restricted to the placebo group comparing outcomes in women using personal calcium and vitamin D supplements with outcomes in non-users. Whether the results from these two different study designs are concordant or not might provide insights into differences between results from observational studies and RCTs.
WHI CaD trial
The design and results of the WHI CaD trial have been published in full [15][16][17][18][19]. The WHI clinical trials programme consisted of 3 trials. At entry to the programme, women were invited to take part in the WHI dietary modification trial, the WHI hormone therapy trial, or both. At their first or second annual follow-up visit, participants in these trials were invited to take part in WHI CaD. 36,282 post-menopausal women were randomized to daily supplemental calcium (1g) and vitamin D (400 IU) or matching placebos and followed for an average of 7y. Personal calcium supplements of up to 1g daily, and personal vitamin D supplements of up to 600 IU daily (and later 1000 IU daily) were permitted in WHI CaD [15]. Outcomes for cardiovascular events, hip and total fracture, colorectal, breast, endometrial and ovarian cancer, and mortality were adjudicated centrally, while other cancers were adjudicated by local researchers [20]. CaD had no effect on the incidence of hip or total fracture, cardiovascular outcomes, colorectal or breast cancer, or mortality [15][16][17][18][19]. We obtained the WHI limited-access clinical trials dataset from the National Heart Lung and Blood Institute (NHLBI). Data are anonymous in the dataset. A protocol was submitted to the NHLBI before any analyses were carried out. We attempted to replicate the approach of the WHI investigators where possible. Our re-analysis was approved by the Northern X regional ethics committee.
Randomized study design analyses
We assessed the effects of CaD on myocardial infarction, stroke, all-cause mortality, hip and total fracture, and breast, colorectal, and total cancer (total cancer excludes non-melanoma skin cancer). Using an intention-to-treat approach, the effect of CaD on the time since randomization to the first event for each of these endpoints was assessed using Cox proportional hazards models, stratified by age, randomization status in the WHI hormone and dietary modification trials and relevant prevalent disease at baseline (history of breast, colorectal, or any cancer for breast, colorectal and total cancer endpoints respectively; and history of fracture for hip and total fracture; and history of cardiovascular disease for myocardial infarction and stroke). These analyses were performed in the cohort of participants who were not using personal non-protocol calcium or vitamin D supplements at randomization. We also performed these analyses in the entire WHI CaD cohort for comparison with the original publications.
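As a purely illustrative aid (the original analyses were carried out in SAS, as noted below), the following is a minimal Python sketch of such a stratified, intention-to-treat Cox proportional hazards fit using the lifelines package; the file name and all column names (time_to_event, event, cad_arm, age_group, ht_trial_arm, dm_trial_arm, prevalent_disease) are hypothetical placeholders, not variables from the WHI dataset.

```python
# Minimal sketch of a stratified Cox proportional hazards fit for the randomized design.
# Requires: pandas, lifelines. All column names below are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("whi_cad_randomized_subset.csv")  # one row per participant

cols = ["time_to_event",      # years from randomization to first event or censoring
        "event",              # 1 = endpoint occurred, 0 = censored
        "cad_arm",            # 1 = randomized to CaD, 0 = placebo
        "age_group", "ht_trial_arm", "dm_trial_arm", "prevalent_disease"]

cph = CoxPHFitter()
cph.fit(df[cols],
        duration_col="time_to_event",
        event_col="event",
        strata=["age_group", "ht_trial_arm", "dm_trial_arm", "prevalent_disease"])
cph.print_summary()   # hazard ratio for cad_arm = exp(coef)
```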
Observational study design analyses
We restricted analyses to the placebo group and compared outcomes in women using personal calcium and vitamin D supplements at randomization with women not using either personal calcium or vitamin D supplements at randomization for each of the above endpoints, using Cox proportional hazards models as described for the randomized design. Because there were differences in baseline characteristics between supplement users and non-users, we carried out unadjusted and age-adjusted analyses, and analyses that controlled for other covariates. For multivariate analyses, we included variables that differed between the groups and/or might be potentially related to the outcome, with the final model selection based on plausibility, parsimony, and consideration of similar models used by the WHI investigators [21]. We also used propensity scores to control for baseline differences. We used a stepwise logistic regression model that selected 52 of 478 baseline variables to create a propensity score for baseline personal use of calcium and vitamin D supplements, which was included as a covariate in the Cox proportional hazards models. Finally, we performed analyses in which users of personal calcium and vitamin D supplements were matched with non-users based upon their propensity score. 5363 matched pairs were identified with propensity scores that differed by ≤0.07: the mean difference in propensity score for the pairs was 0.0041.
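To illustrate the two ingredients of the propensity-score analyses, the sketch below estimates a propensity score by logistic regression and then forms 1:1 matched pairs within a caliper. It is a toy approximation only: the study used stepwise logistic regression over 478 candidate variables and the SAS %gmatch macro, and every file and column name here is a hypothetical placeholder.

```python
# Illustrative sketch of propensity-score estimation and 1:1 caliper matching.
# All file and column names are hypothetical; covariates are assumed numeric and complete.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("whi_placebo_group.csv")
baseline_cols = [c for c in df.columns if c.startswith("bl_")]   # candidate baseline covariates

# 1) Propensity of being a personal calcium + vitamin D user at randomization.
model = LogisticRegression(max_iter=5000)
model.fit(df[baseline_cols], df["supp_user"])
df["pscore"] = model.predict_proba(df[baseline_cols])[:, 1]

# 2) Greedy 1:1 matching of users to non-users within a caliper on the propensity score.
caliper = 0.07
users = df[df["supp_user"] == 1].sort_values("pscore")
nonusers = df[df["supp_user"] == 0].copy()
pairs = []
for idx, user in users.iterrows():
    if nonusers.empty:
        break
    diffs = (nonusers["pscore"] - user["pscore"]).abs()
    best = diffs.idxmin()
    if diffs[best] <= caliper:
        pairs.append((idx, best))
        nonusers = nonusers.drop(best)      # match without replacement
print(f"{len(pairs)} matched pairs")
```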
The WHI investigators reported analyses based on use of personal calcium and vitamin D supplements in the prospective WHI Observational Study (OS) which was recruited from the same catchment population as WHI CaD [21]. They compared outcomes over 7.2y for 15,476 women taking 500mg/d calcium and 400IU/d vitamin D at baseline with 23,561 women not using these supplements for cardiovascular, fracture, mortality and cancer endpoints [21]. We compared the results from our analyses with these previously published results.
Concordance of results
There are no accepted criteria for defining concordance of results between studies. The point estimates of the hazard ratios for the treatment effects of CaD on the major outcomes in WHI CaD ranged from 0.88 to 1.08, with 95% confidence intervals spanning approximately ±0.15 [15][16][17][18][19]. We think a difference of 0.15 between hazard ratios is a reasonable threshold for concordance because smaller differences have little effect on absolute risk, and are therefore of less clinical relevance to individual patients. For these reasons, we considered results from the two study designs concordant when the absolute difference between the point estimates of the treatment effect was ≤0.15.
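In code, this concordance rule reduces to a single comparison; the snippet below (with made-up hazard ratios) also applies the 0.10 and 0.20 thresholds used later in the sensitivity analyses.

```python
# The paper's concordance rule for two hazard-ratio point estimates.
def concordant(hr_a: float, hr_b: float, threshold: float = 0.15) -> bool:
    return abs(hr_a - hr_b) <= threshold

hr_randomized, hr_observational = 0.92, 1.05        # made-up illustrative values
for t in (0.10, 0.15, 0.20):                        # thresholds used in the sensitivity analyses
    print(t, concordant(hr_randomized, hr_observational, t))
```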
Data and statistical analyses
We have reported the baseline characteristics at the time of randomization to CaD, whereas the WHI investigators reported these characteristics at the time of entry to the WHI programme. For body mass index, and dietary and supplemental calcium and vitamin D intakes, we used the latest value recorded between screening and one month following CaD randomization. Cox proportional hazards models and logistic regression were undertaken as described above using the SAS software package (SAS Institute, Cary, NC version 9.4). We matched personal users of calcium and vitamin D supplements by propensity score with the %gmatch macro in SAS [22]. The assumption of proportional hazards was explored by performing a test for proportionality of the interaction between variables included in the model and the logarithm of time. All tests were two-tailed and P<0.05 was considered significant.
Results
At randomization, 43% of participants were not using personal calcium or vitamin D supplements, 54% were using personal calcium, 47% personal vitamin D, and 44% both personal calcium and vitamin D. For our analyses, the randomized design included the 15,646 (43%) participants not using personal calcium or vitamin D supplements. The observational design included the 15,828 (44%) participants from the placebo group who were either using both personal calcium and vitamin D or were not using either of these supplements at randomization. Baseline characteristics for the entire cohort and for the subgroups defined by treatment allocation and personal supplement use are shown in Table 1. The subgroups for the randomized design were well-matched for these baseline characteristics, whereas for the observational design there were a number of important differences between the subgroups, including for variables such as age, body mass index, race, hormone replacement therapy use and history of medical conditions such as hypertension and fracture.

Personal supplement use tended to increase throughout the study. At their final study visit, 32% of participants in the entire cohort were not using personal calcium or vitamin D, and 60% were using both supplements. For the randomized design, 53% of participants in both groups continued to be non-users of personal calcium at their final visit. For the observational design, 14% of participants using personal calcium and vitamin D at randomization were no longer using these supplements at their final visit, and 53% of participants not using these supplements at randomization continued to be non-users at their final visit.

Tables 2-4 and Fig 1 show the results for the randomized design, the observational design and, for comparison, the multivariate-adjusted results from the WHI OS. For myocardial infarction and stroke (Table 2), the results for the randomized and unadjusted observational designs were not concordant, and there was concordance with the randomized design results in only 2/8 of the covariate-controlled (age-adjusted, multivariate-adjusted, propensity-adjusted, or propensity-matched) observational analyses. The results of WHI OS were not concordant with the randomized design results.
In contrast, for death ( Table 2), all of the unadjusted and covariate-controlled observational design results and the WHI OS result were concordant with the randomized design result. Similarly, for hip and total fracture (Table 3), the unadjusted observational design result, 7/8 of the covariate-controlled observational results, and the WHI OS result were concordant with the randomized design result. For breast cancer (Table 4), the unadjusted, age-and multivariate-adjusted observational design results were concordant with the randomized design result. However, neither the WHI OS result nor the propensity-adjusted or propensity-matched observational design results were concordant with the randomized design result. For colorectal and any cancer (Table 4), the unadjusted and covariate-controlled observational design results were concordant with the randomized design results. However, only the WHI OS result for colorectal cancer was concordant with the randomized result.
In sensitivity analyses, we explored the effect of selecting different thresholds for defining concordance. If we adopted a threshold of ±0.10 for concordance, 3/8 unadjusted and 15/32 covariate-controlled observational design results, and 4/8 WHI OS results were concordant with the randomized design results. Using a threshold of ±0.20, 7/8 unadjusted and 26/32 covariate-controlled observational design results, and 5/8 WHI OS results were concordant with the randomized design results. (For the primary analyses with a threshold of ±0.15, the frequency of concordance was 6/8, 23/32, and 4/8, respectively).
Discussion
There were different patterns of results from randomized and observational study designs for different outcomes in WHI CaD. For death, colorectal and total cancer, and hip and total fracture, results of unadjusted observational analyses were concordant with randomized design results, and adjustment for other variables in the observational analyses generally had little effect. For myocardial infarction and stroke, results of unadjusted observational analyses were not concordant with the randomized design results, and adjustment for other variables generally did not substantially decrease the differences between the results. For breast cancer, the unadjusted, age-and multivariate-adjusted observational results were concordant with the randomized results, but propensity adjustment or matching increased the differences between the results. Overall, 6/8 unadjusted, 6/8 age-adjusted, 8/8 multivariate-adjusted, 5/8 propensityadjusted, and 4/8 propensity-matched observational results were concordant with the randomized results. In comparison, 4/8 results from the WHI OS were concordant with the randomized results.
The results suggest that within the same study there are not substantial differences between results from randomized and observational study designs. Other than for myocardial infarction and stroke, all the unadjusted observational results were concordant with the randomized design results, and all multivariate-adjusted results from Cox proportional hazards models incorporating potential confounders were concordant. Results from propensity-adjusted and propensity-matched models were generally similar to the multivariate Cox proportional hazards model results. However, there were small differences between these models for some endpoints (myocardial infarction and breast cancer), and the propensity-adjusted and propensity-matched models did not fall within the defined range for concordance for these two outcomes or for stroke. An important limitation is that the randomized and observational study designs were not independent because the control group was the same for both designs. This feature may have contributed to the smaller differences between the within-study observational and randomized design comparisons compared to the between-study comparisons. Although there was fairly high concordance of observational and randomized design results within WHI CaD, concordance between the WHI CaD randomized results and the WHI OS results was only 50%, even though the two studies used similar methodology and recruited participants from the same population. Thus, differences in results between RCTs and observational studies may be due to differences between studies, even when they are small and subtle, rather than due to the specific design of the study (observational versus RCT). One potential difference is the willingness of participants to take part in a clinical trial and be randomized and blinded to a treatment. It is possible that responses to a treatment might be different in people willing to participate in a clinical trial compared to people unwilling to participate.
The results suggest that the influence of potential confounders may vary for different outcome variables and in different statistical models, although any such differences were small. There were substantial differences between users of personal calcium and vitamin D and those not taking either of these supplements for variables such as age, body mass index, and race which are all associated with cardiovascular disease, fractures, and cancer. Age and race were statistically significant predictors of fracture and cancer outcomes in our analyses, but adjustment for these and other variables did not have a substantial impact in any of the observational analyses, with all differences between the unadjusted and covariate-controlled effect estimates being <0.12. There were small differences between effect estimates from Cox proportional hazards models and propensity score-based models, but all differences were <0.17. When effect sizes are large, such differences are likely to have little impact. However, 70% of numeric associations were weak (odds ratio or relative risk between 0.5 and 2.0) in a recent survey of >2000 outcomes assessed in the influential observational Nurses' Health Study [23]. For effect estimates of this magnitude, small effects from adjusting for potential confounders could have substantial impact. It is not certain what accounts for the different impacts of confounders on outcome variables, but it highlights the difficulties in carrying out and interpreting multivariate analyses. It suggests that multivariate analyses of observational studies should be treated as exploratory, with a number of different models and techniques applied. The results should be reported accordingly, rather than simply presenting the results from a single "best" model, as commonly occurs.
An important limitation of our analyses is that the effects of CaD on all the outcomes we measured in both the randomized and observational designs were weak, with all effect estimates ranging between 0.76 and 1.20. Although WHI CaD was a large study and the confidence intervals around the effect estimates were generally narrow, it is possible that results might differ for agents with stronger therapeutic effects. We are not aware of any other completed large studies with a similar study design-that is, the study permitted non-protocol use of the study medication and had a large proportion of non-protocol users at baseline. However, a large study of vitamin D supplements currently underway also permits the use of non-protocol vitamin D supplements [24]. This study may therefore allow a similar analysis to ours to be undertaken once the study is completed. Cross-over between the study groups occurred with non-users of supplements at baseline starting them during follow-up and also, less commonly, baseline users discontinuing supplements. This cross-over between groups may have obscured true effects of CaD. Finally, an important limitation is that our definition of congruence between study results is necessarily arbitrary, being based on clinical pragmatism [23], although we did explore other definitions in sensitivity analyses.
In summary, these results do not suggest that there are substantial differences between the results of randomized and observational study designs within the same study, although concordance of results did vary between outcomes. The comparison of randomized results from WHI CaD with those from the separate WHI OS observational study again highlight the inconsistency of results between RCTs and observational studies, even, in this case, when the studies used similar methodology in the analyses and recruited participants from the same population. The effect of adjusting for potential confounders in observational analyses differed by only small amounts in a range of outcome variables and in the different methods of adjustment used. However, as the effect estimates were also small, some of these differences did alter the conclusions as to whether results were concordant or not. This suggests that multivariate adjustment in observational studies should explore a variety of different models and techniques, and report the impact of the different approaches as exploratory analyses.
|
2018-04-03T00:21:11.354Z
|
2015-10-06T00:00:00.000
|
{
"year": 2015,
"sha1": "c1b505d04942a9e9cb39cee192152be56ca3de4a",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0139975&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1b505d04942a9e9cb39cee192152be56ca3de4a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
125122773
|
pes2o/s2orc
|
v3-fos-license
|
Measurement of Hadron and Lepton-Pair Production at 130 GeV < √s < 189 GeV at LEP
We report on measurements of e+e- annihilation into hadrons and lepton pairs. The data have been collected with the L3 detector at LEP at centre-of-mass energies between 130 and 189 GeV. Using a total integrated luminosity of 243.7 pb^-1, 25864 hadronic and 8573 lepton-pair events are selected for the measurement of cross sections and leptonic forward-backward asymmetries. The results are in good agreement with Standard Model predictions.
Introduction
We report on the results of measurements of fermion-pair production above the Z pole, based on data collected using the L3 detector at LEP in 1997 and 1998 at centre-of-mass energies √s = 182.7 GeV and √s = 188.7 GeV, respectively. Data corresponding to integrated luminosities of 55.5 pb^-1 and 176.2 pb^-1 were collected, leading to much improved statistics compared to our previous publications [1,2] based on data from 1995 and 1996. In addition, in 1997, small amounts of data, 3.4 pb^-1 and 3.6 pb^-1, were collected at the same centre-of-mass energies as in 1995, 130.0 GeV and 136.1 GeV, respectively. The measurements made on these data samples are combined with those resulting from a re-analysis of the previous data, superseding the results obtained in Reference [1].
The processes studied are e+e− → hadrons(γ), e+e− → µ+µ−(γ), e+e− → τ+τ−(γ) and e+e− → e+e−(γ). In these reactions, the (γ) indicates the possible presence of additional photons or low invariant-mass fermion pairs. For a substantial fraction of the events, initial-state radiation (ISR) lowers the initial centre-of-mass energy to an effective centre-of-mass energy of the annihilation process, √s′. When √s′ is close to the Z mass, m_Z, the events are classified as radiative returns to the Z. A cut on √s′ allows a separation between events at high effective centre-of-mass energies (so-called high-energy events) and radiative returns to the Z. Cross sections are measured for all processes, and forward-backward asymmetries are measured for the lepton channels; both are compared to predictions of the Standard Model [3,4], for the high-energy sample and for a larger, inclusive sample that also includes the radiative returns to the Z. Kinematic cuts have been changed with respect to our previous publication [2]. The corresponding results of the cross section and forward-backward asymmetry measurements have been included, with corrections for these changes applied.
Similar studies on the data taken at centre-of-mass energies between 182.7 GeV and 188.7 GeV have been published by other LEP collaborations [5].
Analysis Method
The data were collected using the L3 detector described in References [?, 6]. For the s-channel processes, the inclusive event sample is defined by requiring √s′ > 60 GeV for hadronic events and √s′ > 75 GeV for lepton-pair events, to reduce uncertainties on radiative corrections in extrapolating to low √s′ values. The high-energy sample is defined by requiring √s′ > 0.85 √s. Using the sum of all ISR photon or pair energies, E_γ, and momentum vectors, P_γ, the s′ value is given by

s′ = (√s − E_γ)² − |P_γ|²     (1)

For most of the events initial-state radiation is along the beam pipe and is not detected. In this case a single photon is assumed to be emitted along the beam axis; its energy is determined from the event kinematics. The √s′ value is estimated using Equation 1. The effect of multiple photon and final-state radiation on the √s′ calculation has been studied using Monte Carlo programs and is corrected for. The treatment of photons observed in the detector is addressed in the sections describing the individual analyses. Mis-reconstruction of the effective centre-of-mass energy induces a migration of events between the kinematic regions allowed and excluded by the cut on √s′. This is taken into account in the efficiency determination and as an additional background, denoted as ISR contamination.
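As an illustration only (not the experiment's reconstruction code), a short sketch of this calculation, assuming Equation 1 with E_γ and P_γ as the summed ISR photon energies and momentum vectors, is:

```python
# Sketch of the effective centre-of-mass energy (Equation 1).
# photon_energies: ISR photon energies in GeV; photon_momenta: their (px, py, pz) in GeV.
import math

def sqrt_s_prime(sqrt_s, photon_energies, photon_momenta):
    e_gamma = sum(photon_energies)
    px = sum(p[0] for p in photon_momenta)
    py = sum(p[1] for p in photon_momenta)
    pz = sum(p[2] for p in photon_momenta)
    s_prime = (sqrt_s - e_gamma) ** 2 - (px ** 2 + py ** 2 + pz ** 2)
    return math.sqrt(max(s_prime, 0.0))

# Example: a single 40 GeV photon escaping along the beam axis at sqrt(s) = 189 GeV.
print(sqrt_s_prime(189.0, [40.0], [(0.0, 0.0, 40.0)]))   # about 143.5 GeV
```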
Bhabha scattering at high energies is dominated by t-channel photon exchange, and hence a cut on s′ is less natural. Instead, a cut is applied on the acollinearity angle, ζ, of the final-state e+ and e−. In this case, the inclusive and high-energy samples are defined by requiring ζ < 120° and ζ < 25°, respectively.
The measurements are compared to the predictions of the Standard Model as calculated using the ZFITTER [20] and TOPAZ0 [21] programs, with input parameters taken from References [22][23][24][25][26], including Δα(5)_had = 0.02804 and m_H = 150 GeV. The theoretical uncertainties on the Standard Model predictions are estimated to be below 1% [27], except for the predictions for large-angle Bhabha scattering, which have an uncertainty of 2% [28].
Initial-final state interference in s-channel processes
In the presence of interference between initial-and final-state radiative corrections, the effective centre-of-mass energy, in contrast to the acollinearity angle, is not well-defined.Moreover, for the s-channel processes, unlike for Bhabha scattering, these contributions are not included in the Monte Carlo samples used to estimate efficiencies.Their effect is expected to be largest for the high-energy µ + µ − and τ + τ − samples, affecting cross sections by up to 2% and forwardbackward asymmetries by up to 0.02.The following approach is used in the analysis of the s-channel processes.
Cross sections are first determined disregarding this effect. This allows the use of existing Monte Carlo programs without modifications. Corrections are subsequently applied using the Standard Model predictions for the interference contributions, folded with the selection efficiency, ε, as a function of the fermion-pair invariant mass, m_ff, and the scattering angle of the anti-fermion, cos θ. This leads to an additive correction (Equation 2). For the inclusive sample, initial-state radiation distorts the angular distribution, such that the Born approximation in Equation 3 is not appropriate. Instead, the forward-backward asymmetry is obtained directly from the differential cross section and extrapolated to the full solid angle using the ZFITTER program. The differential cross section is corrected analogously to Equation 2. The correction is largest for the asymmetry of the high-energy samples, ranging between 0.004 and 0.010.
Pair corrections
Besides the emission of ISR photons, also the emission of initial-state pairs can lower the effective centre-of-mass energy of the scattering process.This gives rise to a non-negligible contribution to the inclusive cross section (approximately 1.5% for all s-channels as estimated using the ZFITTER program) when radiative returns to the Z are included in the signal definition.To allow for a proper comparison between experimental measurements and theoretical predictions, these radiative corrections are included in the fermion-pair signal definition.
To calculate the effect of this signal contribution on the overall efficiency, and to estimate the background contributions leading to the same four-fermion final states, events are generated using the DIAG36 program.As this program includes only photon exchange, the events are reweighted to include the effects of Z exchange using the matrix element calculation of the FERMISV program.The selection efficiencies are obtained by combining those estimated from the separate Monte Carlo samples regarded as signal, weighted with their respective cross sections as estimated using the ZFITTER program.As these Monte Carlo programs do not yield a correct description of low-mass hadronic pairs, the efficiency for events with hadronic pairs is taken to be that for the events with lepton pairs.A 20% uncertainty is assigned to this efficiency, resulting in an uncertainty less than 0.2% on the overall efficiency.
Because of the large number of diagrams involved, this approach is less straightforward in the case of Bhabha scattering.Since the relative pair correction is estimated [29] to be significantly smaller than for the s-channel processes, its effect on the selection efficiency is neglected and no correction is applied.
Integrated luminosity
The luminosity is measured using small-angle Bhabha scattering [?]. A tight fiducial volume cut, 34 mrad < θ < 54 mrad and |90° − φ| > 11.25°, |270° − φ| > 11.25°, is imposed on the coordinates of the highest-energy cluster on one side. The highest-energy cluster on the opposite side should be contained in a looser fiducial volume with 32 mrad < θ < 65 mrad. This method reduces the theoretical uncertainty.
The experimental systematic uncertainties originate from the event selection criteria, 0.10%, and from the detector geometry, 0.05%.The Monte Carlo statistics result in an uncertainty of 0.07%, yielding a total experimental systematic uncertainty of 0.13%.In addition, a theoretical uncertainty of 0.12% [30] is assigned to the BHLUMI generator, resulting in a total uncertainty of 0.18%.
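These totals follow from combining the independent contributions in quadrature, as the short check below shows.

```python
# Independent systematic uncertainties combined in quadrature (values in percent).
from math import sqrt

experimental = sqrt(0.10 ** 2 + 0.05 ** 2 + 0.07 ** 2)   # selection, geometry, MC statistics
total = sqrt(experimental ** 2 + 0.12 ** 2)               # adding the BHLUMI theory uncertainty
print(round(experimental, 2), round(total, 2))            # 0.13 0.18
```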
Event selection
Events are selected by restricting the visible energy, E vis , to 0.4 < E vis / √ s < 2.0.The longitudinal energy imbalance must satisfy |E long |/E vis < 0.7.The reconstructed energies do not include isolated electromagnetic energy depositions with an energy greater than 10 GeV.These cuts reject most of the background from two-photon collision processes.
In order to reject background originating from lepton pair events, more than 18 calorimetric clusters with an energy exceeding 300 MeV each are requested.
The W-pair production background is reduced by applying the following cuts.Semi-leptonic W-pair decays are rejected by requiring the transverse energy imbalance to be smaller than 0.3 E vis .The background from hadronic W-pair decays is reduced by rejecting events with at least four jets each with energy greater than 15 GeV.The jets are obtained using the JADE [31] algorithm with a fixed jet resolution parameter y cut = 0.01.
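For reference, the JADE algorithm merges the pair of objects with the smallest scaled pair mass, y_ij = 2 E_i E_j (1 − cos θ_ij)/E_vis², until all remaining pairs exceed y_cut. The following minimal sketch (simple four-momentum recombination, illustrative only and not the experiment's implementation) shows the idea.

```python
# Minimal sketch of JADE jet clustering with a fixed y_cut (illustrative only).
# Each input particle is a four-vector (E, px, py, pz); energies and momenta in GeV.
import math

def jade_cluster(particles, y_cut=0.01):
    jets = [list(p) for p in particles]
    e_vis = sum(p[0] for p in jets)
    while len(jets) > 1:
        best = None
        for i in range(len(jets)):
            for j in range(i + 1, len(jets)):
                ei, ej = jets[i][0], jets[j][0]
                pi = math.sqrt(sum(c * c for c in jets[i][1:]))
                pj = math.sqrt(sum(c * c for c in jets[j][1:]))
                cos_ij = sum(a * b for a, b in zip(jets[i][1:], jets[j][1:])) / max(pi * pj, 1e-12)
                y = 2.0 * ei * ej * (1.0 - cos_ij) / e_vis ** 2
                if best is None or y < best[0]:
                    best = (y, i, j)
        y, i, j = best
        if y >= y_cut:
            break                                    # every remaining pair is resolved: stop merging
        jets[i] = [a + b for a, b in zip(jets[i], jets[j])]   # merge four-vectors
        del jets[j]
    return jets
```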
Figure 1a shows the distribution of the visible energy normalised to the centre-of-mass energy for hadronic final state events selected at 189 GeV.The observed peak structure of the signal arises from the high-energy events and from the radiative returns to the Z.
As an additional cross-check, an alternative selection is performed using an artificial neural network technique [32] instead of the cuts described above.The results obtained using the two selection methods are compatible with each other.
To reconstruct the effective centre-of-mass energy, two different methods are used. In the first method, all events are reclustered into two jets using the JADE algorithm. A single photon is assumed to be emitted along the beam axis and to result in a missing momentum vector. From the polar angles of the jets, θ1 and θ2, the photon energy is then estimated as

E_γ = √s |sin(θ1 + θ2)| / (sin θ1 + sin θ2 + |sin(θ1 + θ2)|)     (4)

The second method uses the clustered jets obtained using the JADE algorithm with a fixed cut, y_cut = 0.01. A kinematic fit is performed assuming the emission of either zero, one, or two photons along the beam axis. The hypothesis with the smallest number of photons yielding a kinematic-fit probability larger than 8.5% is used. The cross sections are estimated as the average of the results obtained using the two methods, and a systematic uncertainty on the √s′ reconstruction, equal to half their difference, is assigned. For about 10% of the events, a high-energy cluster is detected in the electromagnetic calorimeter. It is selected as described above and is assumed to be a photon. Its energy and momentum are added to the undetected ISR photons. The effective centre-of-mass energy is then calculated using Equation 1.
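A small sketch of the first method, assuming the collinear single-photon estimate written above as Equation 4 and the corresponding relation s′ = s − 2√s·E_γ for a photon along the beam axis (illustrative only):

```python
# Sketch of the jet-angle estimate of the ISR photon energy (Equation 4) and the
# resulting effective centre-of-mass energy for a single photon along the beam axis.
import math

def e_gamma_from_jets(sqrt_s, theta1, theta2):
    s12 = abs(math.sin(theta1 + theta2))
    return sqrt_s * s12 / (math.sin(theta1) + math.sin(theta2) + s12)

def sqrt_s_prime_collinear(sqrt_s, e_gamma):
    return math.sqrt(max(sqrt_s * (sqrt_s - 2.0 * e_gamma), 0.0))

# Hypothetical event: jets at 60 and 140 degrees at sqrt(s) = 189 GeV.
eg = e_gamma_from_jets(189.0, math.radians(60.0), math.radians(140.0))
print(round(eg, 1), round(sqrt_s_prime_collinear(189.0, eg), 1))
```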
Figure 2a shows the reconstructed √ s ′ distribution, based on the reconstruction using the jet angles, for hadronic final state events.
Cross section
Selection efficiencies and background contributions are listed, for the √s′ reconstruction method using the jet angles, in Table 1. The selected sample contains a background from hadronic two-photon collision processes, W-, Z- and tau-pair production and e+e− → Ze+e−(γ) events. The two-photon background is estimated by adjusting the Monte Carlo to the data in a two-photon enriched sample.
The numbers of selected events, the total cross sections for the different event samples, and the corresponding statistical and systematic uncertainties are listed in Table 2, together with our previous published measurements [2].The systematic uncertainties are dominated by the uncertainty on the √ s ′ determination and are correlated between different centre-of-mass energies.In Figure 3 the cross section measurements are shown and compared to the Standard Model predictions.
e+e− → µ+µ−(γ)
The event selection for the process e + e − → µ + µ − (γ) follows that of Reference [2].Two muons are required within the polar angular range | cos θ| < 0.9.For the data taken at 183 GeV the angular range is restricted to | cos θ| < 0.81.At least one muon must be measured in the muon spectrometer, and have a momentum greater than 35 GeV.This reduces substantially the background from e + e − → e + e − µ + µ − interactions whilst ensuring a high acceptance for events with hard ISR photons.
Background from cosmic muons is reduced using both scintillation counter time information and the distance of the muon tracks from the beam axis.The number of accepted cosmic muon events is estimated by extrapolating the corresponding sideband distributions to the signal region.Figure 1b shows the distribution of the maximum muon momentum normalised to E beam for events selected at 189 GeV.
The √s′ value for each event is determined using Equation 1, assuming the emission of a single ISR photon. In case a photon is detected in the electromagnetic calorimeter, it is required to have an energy greater than 15 GeV and an angular separation to the nearest muon of more than 10 degrees. Otherwise the photon is assumed to be emitted along the beam axis and its energy is calculated from the polar angles of the outgoing muons according to Equation 4. The distribution of the reconstructed √s′ for events selected at 189 GeV is shown in Figure 2b.
Cross section
Selection efficiencies and background contributions are listed in Table 1.The main background contributions are from the reactions e + e − → e + e − µ + µ − , e + e − → τ + τ − (γ) and from W-pair production.
Table 2 summarises the numbers of selected events, the resulting cross sections, and their statistical and systematic uncertainties for the two event samples at the various centre-of-mass energies.The main contributions to the systematic uncertainties originate from the background subtraction and from the acceptance correction.Figure 4 shows the comparison to the Standard Model prediction.
Forward-backward asymmetry
The forward-backward asymmetry is determined using events with two muons with opposite charge and an acollinearity angle smaller than 90 degrees.
For the high-energy sample, the angular distribution of the events is parametrised according to Equation 3. The asymmetry, A fb , is determined from an unbinned maximum-likelihood fit of this parametrisation to the data within the fiducial volume.The muon charge is measured as described in Reference [2].The charge confusion per event, ranging between 0.2% and 0.7%, is taken into account in the fit procedure.The asymmetries for the accepted background contributions are estimated using the same method and are corrected for.The corrections range between 0.045 and 0.059.
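As an illustration of such a fit, the sketch below maximises an unbinned likelihood for A_fb assuming the standard lowest-order angular distribution dN/dcos θ ∝ (3/8)(1 + cos²θ) + A_fb cos θ inside the fiducial volume; the charge-confusion and background corrections applied in the analysis are omitted here.

```python
# Unbinned maximum-likelihood estimate of the forward-backward asymmetry, assuming
# dN/dcos(theta) ∝ (3/8)(1 + cos^2(theta)) + A_fb * cos(theta) inside |cos(theta)| < c.
# Charge confusion and background corrections are ignored in this sketch.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_afb(cos_theta, c=0.9):
    x = np.asarray(cos_theta)

    def nll(afb):
        shape = 3.0 / 8.0 * (1.0 + x ** 2) + afb * x
        if np.any(shape <= 0.0):
            return np.inf
        norm = 3.0 / 8.0 * (2.0 * c + 2.0 * c ** 3 / 3.0)   # the odd term integrates to zero
        return -np.sum(np.log(shape / norm))

    return minimize_scalar(nll, bounds=(-0.95, 0.95), method="bounded").x

# Toy check: a symmetric sample gives an asymmetry close to zero.
rng = np.random.default_rng(0)
print(round(fit_afb(rng.uniform(-0.9, 0.9, 2000)), 3))
```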
For the inclusive event sample the differential cross section is distorted by hard ISR photons.Therefore, A fb is computed directly from the differential cross sections obtained within the fiducial volume.To obtain the asymmetry for the full solid angle an extrapolation factor is calculated using the ZFITTER program.It ranges between 1.10 for the 183 GeV data and 1.03 for the 189 GeV data.
Table 3 summarises the numbers of forward and backward events, the forward-backward asymmetry measurements, and their statistical and systematic uncertainties.The main contributions to the systematic uncertainty are the uncertainties on the backgrounds and on the momentum reconstruction.Figure 4 shows the comparison of the corrected asymmetries to the Standard Model prediction.Table 4 lists the differential cross sections at 183 GeV and 189 GeV, compared to their Standard Model predictions.The 189 GeV distributions are displayed in Figure 5.
e+e− → τ+τ−(γ)
Taus are identified as narrow, low multiplicity jets, containing at least one charged particle.Tau jets are formed by matching the energy depositions in the electromagnetic and hadron calorimeters with tracks in the central tracker and the muon spectrometer.Events containing two jets within the polar angular range | cos θ| < 0.92 are accepted.The reconstruction of √ s ′ follows the procedure described in Section 3.3 using the polar angles of the two tau jets, requiring at least 10 GeV for observed photons.
Hadronic events are removed by requiring at most 16 calorimetric clusters with an energy exceeding 100 MeV each and at most 9 tracks in the central tracker.Events containing two electrons or two muons are rejected.Electrons are identified by a cluster in the electromagnetic calorimeter with an energy greater than 2.5 GeV and an electromagnetic shower shape, a matched track, and less than 2.5 GeV deposited in the hadron calorimeter.Muons are identified by a track in the muon spectrometer and a minimum-ionising particle signature in the calorimeters.Bhabha events are further rejected by requiring the electromagnetic energy of the highest-energy jet and the other jet to be less than 0.375 √ s ′ and 0.25 √ s ′ , respectively.In addition, the acoplanarity of the two jets must be larger than 0.2 degrees.
To reject background from two-photon collision processes the most energetic jet must have an energy greater than 0.24 E beam .The distribution of this quantity is shown in Figure 1c for the data taken at 189 GeV.The energy of reconstructed muons is required to be less than 0.4 √ s ′ .To reject leptonic final states from W-pair production the acoplanarity of the two tau jets must be less than 10 degrees.Background from cosmic muons is reduced using both scintillation counter information and the distance of the muon tracks from the beam axis.Figure 2c shows the reconstructed √ s ′ distribution for the data taken at 189 GeV.
Cross section
Selection efficiencies and background contaminations are listed in Table 1.The numbers of selected events, as well as the cross sections and corresponding statistical and systematic uncertainties for the different event samples, are listed in Table 2.The systematic uncertainties originate mainly from uncertainties in the event selection, in particular on the rejection of Bhabha events.Figure 4 shows the comparison to the Standard Model prediction.
Forward-backward asymmetry
For the high-energy sample, the forward-backward asymmetry is determined using an unbinned maximum-likelihood fit of Equation 3 to events with unambiguous charge assignment. The background from other final states and from events with hard ISR photons is corrected for in the fit procedure. The fitted asymmetry is corrected for charge confusion. For the inclusive sample, the forward-backward asymmetry is determined from the differential cross section as described in Section 3.3. For both the inclusive and high-energy samples, the charge confusion per event is estimated, from the data, to be less than 0.5%. Table 3 lists the number of forward and backward events and the results of the asymmetry measurements. Figure 4 shows the comparison of the measured asymmetries to their Standard Model predictions. Table 4 lists the differential cross sections at 183 GeV and 189 GeV, compared to their Standard Model predictions. The 189 GeV distributions are displayed in Figure 5.
Event selection
Electron candidates are recognised by an energy deposition in the electromagnetic calorimeter with at least 15 associated hits in the central tracking chamber within a three degree azimuthal angular range.
Bhabha events are selected by requiring the two highest energy electron candidates to be contained in the polar angular range 44° < θ < 136°, and to have an energy greater than 0.5 E_beam and 15 GeV, respectively. Figure 1d shows the energy of the highest energy electron candidate, normalised to the beam energy, for events selected at 189 GeV.
The acollinearity angle is calculated from the directions of the two electrons.Its distribution is shown in Figure 2d for events selected at 189 GeV.
Cross section
The selection efficiencies within the fiducial volume and the background contributions are listed in Table 1.The background is dominated by tau-pair production.Table 2 lists the numbers of selected events, and the measured cross sections with their statistical and systematic uncertainties, for the various centre-of-mass energies.The systematic errors are dominated by uncertainties on the event selection.The cross sections are compared to their Standard Model prediction in Figure 6.
Forward-backward asymmetry
The forward-backward asymmetry is extracted from the differential cross section.The selection criteria for electron candidates are tightened to improve the charge determination.The electron direction is obtained from both tracks in the event.The charge confusion probability is determined [22] from the data to be (2.8 ± 0.3)% for the 130-136 GeV data, (4.3 ± 0.3)% for the 183 GeV data, and (5.1 ± 0.2)% for the 189 GeV data, and is corrected for in the determination of the differential cross section.The validity of the method has been verified using a sample of dimuon events collected at the Z pole in 1995, for which the charge is measured precisely in the muon spectrometer.
Table 3 summarises the numbers of forward and backward events and the asymmetry measurements.The systematic error on the asymmetry measurements is dominated by the uncertainty on the charge confusion.Figure 6 shows the comparison of the measured asymmetries to their Standard Model predictions.Table 5 lists the differential cross sections for the highenergy samples at 183 GeV and 189 GeV, compared to their Standard Model predictions.The 189 GeV distribution is displayed in Figure 7.
Summary and Conclusion
Based on an integrated luminosity of 243.7 pb^-1 collected at centre-of-mass energies between 130.0 GeV and 188.7 GeV, we select 25864 hadronic and 8573 lepton-pair events. The data are used to measure cross sections and leptonic forward-backward asymmetries. The measurements are performed for the inclusive event sample and for the high-energy sample. The results are in good agreement with Standard Model predictions.

Table 1: Selection efficiencies and background fractions for the inclusive and the high-energy event samples of the reactions e+e− → hadrons(γ), e+e− → µ+µ−(γ), e+e− → τ+τ−(γ) and e+e− → e+e−(γ). For Bhabha scattering the selection efficiencies are given for 44° < θ < 136°.
Table 4: Differential cross sections for the processes e+e− → µ+µ−(γ) and e+e− → τ+τ−(γ) at 182.7 GeV and 188.7 GeV in bins of cos θ, compared to their Standard Model predictions.
Figure 1: (a) The total visible energy normalised to the centre-of-mass energy, √s, for the selection of e+e− → hadrons(γ) events; (b) the highest muon momentum normalised to the beam energy for the selection of e+e− → µ+µ−(γ) events; (c) the highest tau-jet energy normalised to the beam energy for the selection of e+e− → τ+τ−(γ) events; and (d) the highest electron energy normalised to the beam energy for the selection of e+e− → e+e−(γ) events.
Figure 3: Cross sections of the process e+e− → hadrons(γ), for the inclusive (solid symbols) and the high-energy sample (open symbols). The Standard Model predictions are shown as a solid line for the inclusive sample and as a dashed line for the high-energy sample. The lower plot shows the ratio of measured and predicted cross sections.
Figure 4: Cross sections (a) and forward-backward asymmetries (b) of the processes e+e− → µ+µ−(γ) and e+e− → τ+τ−(γ) for the inclusive (solid symbols) and the high-energy sample (open symbols). The Standard Model predictions are shown as a solid line for the inclusive sample and as a dashed line for the high-energy sample.
Figure 6: Cross sections (a) and forward-backward asymmetries (b) of the process e+e− → e+e−(γ) for the inclusive (solid symbols) and the high-energy sample (open symbols). The Standard Model predictions are shown as a solid line for the inclusive sample and as a dashed line for the high-energy sample. The two electrons are required to be inside 44° < θ < 136°.
Figure 7: Differential cross section for the high-energy event sample, for the process e+e− → e+e−(γ) at √s = 189 GeV. The line indicates the Standard Model prediction.
The results are in good agreement with Standard Model predictions.
Table 2: Number of selected events, N_sel, measured cross sections, σ, statistical and systematic errors, and the Standard Model predictions, σ_SM, of the reactions e+e− → hadrons(γ), e+e− → µ+µ−(γ), e+e− → τ+τ−(γ) and e+e− → e+e−(γ), for the inclusive and the high-energy event samples. The systematic errors do not include the uncertainty on the luminosity measurement. In the case of Bhabha scattering, both leptons have to be inside 44° < θ < 136°. The results for the 161-172 GeV data have been taken from Reference [2] and corrected using ZFITTER (s-channel processes) and BHAGENE (Bhabha scattering) to correspond to the kinematic cuts described in the text.
Table 3: Number of forward events, N_f, and backward events, N_b, forward-backward asymmetries, A_fb, statistical and systematic errors, and the Standard Model predictions, A_fb^SM, of the reactions e+e− → µ+µ−(γ), e+e− → τ+τ−(γ) and e+e− → e+e−(γ), for the inclusive and the high-energy event samples. In the case of Bhabha scattering, both leptons have to be inside 44° < θ < 136°. The results for the 161-172 GeV data have been taken from Reference [2] and corrected using ZFITTER (s-channel processes) and BHAGENE (Bhabha scattering) to correspond to the kinematic cuts described in the text.
Table 5: Cross sections (in pb) for ζ < 25° for the process e+e− → e+e−(γ) at 183 GeV and 189 GeV in bins of cos θ, compared to their Standard Model predictions. Statistical and systematic uncertainties are combined.
On the salient limitations of the methods of assembly theory and their classification of molecular biosignatures
We demonstrate that the assembly pathway method underlying assembly theory (AT) is an encoding scheme widely used by popular statistical compression algorithms. We show that in all cases (synthetic or natural) AT performs similarly to other simple coding schemes and underperforms compared to system-related indexes based upon algorithmic probability that take into account statistical repetitions but also the likelihood of other computable patterns. Our results imply that the assembly index does not offer substantial improvements over existing methods, including traditional statistical ones, and imply that the separation between living and non-living compounds following these methods has been reported before.
Introduction
The distinction between life and non-life has been a matter that has long fascinated both scientists and philosophers.This question has been germane to the area of complex systems science since its inception, with the concept of complexity having long been hypothesised as being deeply connected to the life vs.non-life distinction [2,18,23], as also to matters such as emergence and self-organisation, which have exercised scientists concurrently [19].First to take up this nexus of issues was Erwin Schrödinger in his book "What is Life?", exploring the physical aspect of life and cells, followed by Claude Shannon with his concept of entropy, responding to the pressing challenge of identifying the distinctiveness of certain configurations of atoms or molecules assembled non-randomly, and quantifying the many ways in which swapping these molecules could explain a whole system.Later would come the concepts of algorithmic information, algorithmic randomness and algorithmic probability, that formalised what constituted a discretely-describable random object at the limit-a problem that had challenged mathematicians for decades if not centuries-by abstracting it away from statistics and recasting it in terms of fundamental mathematical first principles.These foundations are the underpinnings of both computable and uncomputable coding methods, and they are ultimately what explain and justify their application as a generalisation of Shannon's information theory, which is subsumed into Algorithmic Information Theory.
Building upon algorithmic information, Charles Bennett put forward a measure of sophistication to capture complex systems, in particular, life and the byproducts of living systems.Bennett's concept of logical depth [4] focuses on the lengths of the shortest computer programs that best compress data.Characterisations in terms of thermodynamics [14,24] have further enriched these measures, beyond statistical pattern recognition and number of steps, circling back to some original ideas related to what are believed to be the principles of living systems.One characteristic of measures based on symbolic computation, that goes beyond statistical pattern matching, is that these measures are either semi-or uncomputable (we will call them 'stronger' as they are a generalisation of computable statistical measures).
These stronger measures allow estimations from weaker computable versions that can easily be formulated from their uncomputable counterparts, one example being resource-bounded algorithmic complexity [7,21,31].Some of these are represented by popular coding schemes such as Lempel-Ziv-Welch (LZW) and cognates, with basic coding schemes such as run-length encoding (RLE) and Huffman codings [10] underlying many of these.
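As a concrete illustration of how such basic coding schemes count statistical repetitions, the following minimal Python sketch (our own illustration; the function name and example string are not taken from any of the cited works) computes a run-length encoding of a symbol sequence by collapsing each maximal run of identical symbols into a (symbol, count) pair. Schemes such as LZW go further by reusing previously seen substrings rather than only contiguous runs.

import itertools

def run_length_encode(s):
    """Collapse each maximal run of identical symbols into a (symbol, count) pair."""
    return [(symbol, len(list(group))) for symbol, group in itertools.groupby(s)]

print(run_length_encode("AAAABBBCCD"))   # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]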
A recently introduced approach termed "Assembly Theory" (AT will be interchangeable with MA in this paper, representing an application of AT to molecules), featuring a computable index, has been claimed to be a novel and superior approach to distinguishing living from non-living systems and gauging the complexity of molecular biosignatures.A major problem in science is that of reproducibility and lack of proper control experiments.In proposing a new complexity measure, the central claim advanced in [17] is that molecules with high molecular assembly index (MA) values "are very unlikely to form abiotically, and the probability of abiotic formation goes down as MA increases".In other words, "high MA molecules cannot form in detectable abundance through random and unconstrained processes, implying that the existence of high MA molecules depends on additional constraints imposed on the process" [17].
At the core of Assembly Theory is an elementary coding scheme that has been used in compression since the 1960s, for purposes of compression as well as for other applications.Compression incorporating these basic ideas has been widely applied in the context of living systems, including in a landmark paper published in 2005 that was not only able to characterise DNA as a biosignature but was able to reconstruct the main branches of an evolutionary phylogenetic tree from the compressibility ratio of mammalian mtDNA sequences [11].Just as with Assembly Theory and its index, the work in [11] was weak on basic control experiments, given that in the field of genetics it is widely known that similar species will have similar genomic GC content and therefore a simple Shannon entropy approach on a uniform distribution of G and C nucleotides-effectively simply counting GC content [12]-would have yielded the tree.Nevertheless, the work in [11] demonstrates that compression schemes have been central to the discussion of and applications to living organisms and their information signatures for decades.
Note that, depending on variations in application and context, the measure featured in Assembly Theory has had several names, or can be referred to in different ways: pathway assembly (PA), object assembly (OA), or molecular assembly index (MA) as in [15,16]. We choose to employ the last nomenclature in the present article; this paper's results and conclusions hold across all of their methods and measures introduced to date.
The underlying intuition is that such an assembly index (by virtue of minimising the length of the path necessary for an extrinsic agent to assemble the object) would afford "a way to rank the relative complexity of objects made up of the same building units on the basis of the pathway, exploiting the combinatorial nature of these combinations" [15].
In order to support their central claim, Marshall et al. [17] state that "MA tracks the specificity of a path through the combinatorically vast chemical space" and that, as presented in [16], it "leads to a measure of structural complexity that accounts for the structure of the object and how it could have been constructed, which is in all cases computable and unambiguous".The authors propose that molecules with high MA detected in contexts or samples generated by random processes, in which there are minimal (or no) biases in the formation of the objects, display a smaller frequency of occurrence in comparison to the frequency of occurrence of molecules in alternative configurations, where extrinsic agents or a set of biases (such as those brought into play by evolutionary processes) play a significant role.
However, we found that what the authors have called Assembly Theory [17] is a formulation that mirrors the working of previous coding algorithms, without proper attribution.
Furthermore, these results show that the claim that Assembly Theory may help not only to distinguish life from non-life but also to identify nonterrestrial life, is a major overstatement.At best what Assembly Theory amounts to is a re-purposing of existing elementary algorithms in computer science.But some of these algorithms have themselves been advanced in the context of identifying the complexity of living systems [4,25], hence even the claim to novelty of application is in question.
While the calculation of MA may be prone to false negatives (due to partial fragmentation in energy-collision analysis and the restriction to counting only valence rules in molecule synthesis, ignoring other chemical conditions), this does not pose a challenge to the central claim made in [17]. Instead, MA aims at avoiding underestimation of the number of molecules that result from random or abiotic processes. Thus, in the present article, instead of studying both positives and false negatives, we focus only on investigating the existence of false positives, which directly tackles the central claim. The limitations and drawbacks identified here extend to all applications of these methods developed in [15][16][17][20].
A first type of life-like formal idea using computation was proposed by von Neumann, featuring a universal replicator, where a function (e.g. a cellular automaton) gets as input the instruction blueprint for its own construction.This type of computation is deeply related to universal computation, which in turn implies uncomputability.What Turing and others proved was that for an arbitrary blueprint (e.g.genetic instructions for life) to be reproduced, an uncomputable universal mechanism would be required.The concept of modularity of structure and recursive reconstruction from elementary building blocks has also been a feature long associated with life.In [9], for example, we proved that modularity can be built up from computation alone, and can therefore be characterised in a recursive fashion.However, the complexity of living systems also immediately suggests that simple measures, such as Huffman schemes or Assembly Theory, are unable to characterise the complexity of life.Modularity also goes hand in hand with generative functions, particularly, of the recursive type.In other words, modularity, computability, and fundamental features of life have been richly intertwined and explored in tandem in the last century.
MA classification exhibits lower performance than existing algorithms
Here we first compare the performance of 'Molecular Assembly' (MA) with other measures under the four mass spectroscopy (MS) categories. The t-test and the Kolmogorov-Smirnov test have been used for this purpose. Unlike the t-test statistic, the Kolmogorov-Smirnov test provides a non-parametric goodness-of-fit test that does not assume the data come from a Gaussian (normal) distribution.
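Both tests can be reproduced with standard routines. The following sketch is our own illustration; the two score arrays are synthetic placeholders standing in for the values of a given complexity measure in two of the molecular categories.

from scipy import stats
import numpy as np

rng = np.random.default_rng(0)
scores_group_a = rng.normal(10, 2, 50)   # placeholder scores for one category
scores_group_b = rng.normal(12, 3, 52)   # placeholder scores for another category

# Welch's unpaired t-test (unequal variances)
t_stat, t_p = stats.ttest_ind(scores_group_a, scores_group_b, equal_var=False)

# Two-sample Kolmogorov-Smirnov test (non-parametric)
ks_stat, ks_p = stats.ks_2samp(scores_group_a, scores_group_b)

print(t_stat, t_p, ks_stat, ks_p)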
For the unpaired (two-samples/independent measures) t-test with Welch's correction, at a degree of freedom (df) of 100, a critical t-value of 3.390 is expected for a two-tail P-value of 0.001 (i.e., 99.9% confidence).The t-value closest to 3.39 was found for the 1D-BDM and 2D-BDM [30,31], with a t-value of 6.410 and 6.561, respectively (P < 0.0001), both within the critical region of statistical significance.All complexity measures obtained a nonparametric Kolmogorov-Smirnov test value of P < 0.0001; the Kolmogorov-Smirnov distance D was smallest for the 1D-BDM and 2D-BDM, with both returning a value of 0.707.MA and Shannon entropy had a similar statistical significance in classifying the mass spectroscopy (MS) data into their four distinct categories, with t-values of 15.96 and 20.96, respectively at df = 100.The Kolmogorov-Smirnov distances were 0.828 and 1, respectively.Lastly, the LZW compression was found to be non-significant in the classification (P = 0.8466).
Through these statistical assessments, the 1D-BDM and 2D-BDM at a binary conversion threshold of 3 were found to be robust discriminants of molecular complexity in classifying living vs. non-living molecules.The result is shown in Fig. 2.
Unlike more sophisticated compression algorithms such as LZW, RLE and Huffman coding are, despite their statistical limitations, among the simplest coding algorithms; they were introduced in the 1960s and incorporated into compression schemes. They are known not to be optimal for statistical or algorithmic compression, but they are optimal at doing what they were intended to do, that is, counting statistical copies in the form of minimum codelengths. Figure 5 explains the Huffman coding scheme in more detail and shows how it compares to MA, which is ill-defined when considered in light of the authors' original objective ('counting copies') [17]. RLE and Huffman implement better what the authors meant to recreate.
In Figures 1 and 2, we test the measures as the original authors did, in preparation for the incorporation of mass spectroscopy data.Both 1D-RLE and 1D-Huffman coding schemes show a strong statistical correlation and linear correspondence with MA (Figure 1).The one-dimensional RLE and Huffman code compression lengths showed the strongest Pearson correlations with MA at R-values of 0.9001 and 0.896, respectively.The two-dimensional distance matrices of the mass spectroscopy (MS) data were binary converted at a threshold of 3 and subjected to the compression algorithms.The 2D-RLE and 2D-Huffman code compression lengths obtained Spearman correlation values of 0.7967 and 0.7537, respectively with MA (the Pearson scores were comparable).The gzip compression showed a weaker correlation with MA, at a Pearson score of 0.4761 and a Spearman correlation of 0.804.
A strong Pearson correlation with an R-value of 0.8823 was observed between 1D-BDM and MA for the 99 molecules in the MS data set (Fig. 2).The LZW compression shared a close Pearson correlation score of 0.8738 with MA.All correlation measures obtained a statistically significant one-tailed p-value (P < 0.0001).
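The correlation figures reported above can be reproduced with standard routines; in the sketch below (our own), the two arrays are placeholders standing in for the MA values and the compression lengths of the same set of molecules.

from scipy import stats

ma_values = [10, 14, 18, 25, 31, 40]             # placeholder MA values
compression_lengths = [52, 60, 71, 88, 97, 120]  # placeholder 1D-RLE lengths

r_pearson, p_pearson = stats.pearsonr(ma_values, compression_lengths)       # linear correlation
rho_spearman, p_spearman = stats.spearmanr(ma_values, compression_lengths)  # rank correlation
print(r_pearson, rho_spearman)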
Figure 2: Classification of complexity measures by mass spectroscopy (MS) profiles (log-scale).Both 1D and 2D BDM better distinguish living from non-living molecules in the MS dataset than MA, as shown by a clearer variability in the complexity measures between the molecular subgroups.MA does not display any particular advantage when compared against proper control experiments, and performs similarly to the simplest of the statistical algorithms.
These findings suggest that the methods behind the so-called 'Assembly Theory', on which 'Molecular Assembly' (MA) is based, can easily be replaced by one of the first and simplest compression schemes, 1D-RLE, in the classification of mass spectroscopy (MS) complexity. The so-called Molecular Assembly indices did not show any significant advantage when compared with other measures that were introduced several decades ago, when computer compression algorithms were designed based upon the same modular statistical principle of repetition and modular counting re-introduced by MA. Nor were the MA indices able to show any particular advantage over indices that are non-computable but capable of being approximated from above and based on resource-bounded variants of algorithmic complexity (such as BDM [30,31]), which the authors of MA disqualify a priori [17], without any evidence or control experiments, on account of their semi-computable nature (where 'semi' means they can be approximated using various methods, as is the case, for example, with protein folding). The comparison of measures across the four categories of MS molecules is shown in Table 2 with respect to increasing molecular weight (MW), to better visualise the trends across living and non-living biosignatures. Pearson correlation tests between the various complexity and compression measures and the molecular weight (MW) were performed with an alpha value of 0.01 (99 percent confidence interval), for which the one-tailed p-values were significant (P < 0.0001) for all five measures compared in Table 2. One-tailed rather than two-tailed P-value tests were performed since our previous analyses inferred a unidirectional linear relationship in the trend patterns. As shown in Table 2, 1D-BDM had the highest Pearson correlation with MW (R = 0.9058), followed by LZW compression (R = 0.9028). MA has a correlation score of 0.8055.
Comparison of correlation with molecular weight
The correlation analysis suggests a stronger positive linear relationship between MW and measures from algorithmic information dynamics, such as BDM and LZW, in contrast to that between MW and MA.As such, the complexity measures we employ are better predictors of increasing molecular complexity in the MS signatures classification.
Comparison of correlation with MS2 spectra
In the previous results, we demonstrated that complexity measures from algorithmic information dynamics outperform MA indices in the classification of the molecular signatures shown in the Figure 2B data of the original paper, thereby debunking MA theory. Herein, we further extend the validity of these results by applying the measures to the 114 molecules shown in the Figure 3 MS2 standard-curves data of the original paper, and thereby outline the higher statistical power of other measures on the MA chemical space. The MA chemical space, as discussed in the supplementary information of the original paper, is constructed from these 114 molecules, which are classified as small molecules (comprising the four previous categories of molecules from the 99 molecules) or as biological peptides. As shown in Figure 3, other measures discriminate the 114 molecules of these two categories better than MA does. The Pearson correlation analysis (two-tailed P-values), as shown in Table 2, revealed that the correlation between 1D-BDM (performed on the InChI codes of the molecules) and the category (i.e., small molecules vs. peptides) was R = 0.828, while those for 1D-RLE and 1D-Huffman with the categories were R = 0.704 and 0.713, respectively. Lastly, the Pearson correlation between MA and the categories is R = 0.711. Not only did MA perform the poorest in correlation with the categories compared to other measures, but the nearly identical correlation values for 1D-Huffman and MA further support the findings and arguments made herein. Furthermore, the MA theory findings suggested that MA predicts living
vs. non-living molecules using a cherry-picked subset of biological extracts, abiotic samples, and inorganic ('dead') materials, shown in Figure 4 of the original paper. We repeated the experiment using the binarised MS2 spectra peak matrices provided in the source data of the original paper. Only 18 of the extracts and molecular MS2 spectra were obtained, ignoring the blinded samples shown in their Figure 4. Our reproduced findings on their Figure 4 are shown in our Figure 4 data. By including the 114 molecules from Figure 3 together with the 18 molecules of their Figure 4, we performed correlation analysis on all 132 signatures, with 5 categories: small molecules, peptides, abiotic, dead (inorganic, such as coal and quartz), and biological extracts (which include yeast, E. coli, etc.). The Pearson correlation was strongest between 1D-BDM and the category (R = 0.951), followed by 1D-RLE and 1D-Huffman, which have near-identical Pearson correlations of R = 0.843 and R = 0.842, respectively. MA has the poorest correlation with the categories (R = 0.448). All Pearson scores were statistically significant (P < 0.0001). Therefore, our findings collectively show that, when the mass spectrometry signatures of Figures 2B, 3 and 4 of the original paper are considered together, our complexity measures strongly outperform the MA index as a discriminant of living vs. non-living systems.
In addition, it should be noted that according to the original paper's findings, beer has the highest MA score.This further questions the validity of MA theory, as it greatly limits the complexity of living vs. non-living molecular signatures.There is also a significant level of variance in the MA scores of these biological mixtures and extracts, as indicated on Figure 4 of the original paper.Further, given that the MA or complexity of the biological extracts shown in their Figure 4 are mixtures derived from the small molecules and peptides in the MA chemical space constructed from their Figure 3 data, and by virtue of other coding indexes outperforming their chemical MA space classification, we can conclude that MA theory is sub-optimal and a limited subset of compression measures provided within the algorithmic complexity framework.
Comparison of computational optimality
The assembly method derived from the 'Assembly Theory' proposed by the original authors [17] consists roughly in finding a pattern-matching generative grammar behind a string by traversing and counting the number of copies needed to generate its modular redundancies, decomposing it into the statistically smallest collection of components that reproduce it without loss of information by finding repetitions that reproduce the object from its compressed form.For purposes of illustration, let us take the example of ABRACADABRA, which the original authors have also used [17].For molecular assembly (MA) to succeed it needs to have a discriminator and classifier able to characterise each repetition of A and N as the same, where N is another character or some sub-unit of the structure with the same frequency as A (e.g., a twoletter unit containing A, such as AB or RA).In the ABRACADABRA example, MA deconstructs the sequence into unique blocks of five possible characters by adding a new character in subsequent steps, such that the minimal number of steps, considering only the frequency of the largest repeated block size (ABRA), is obtained.The repeated binary or tertiary recursive structures (i.e., blocks of 2 or 3 letters) within the sequence, such as AB, RA, or BRA, are ignored in MA's minimal path search.The proposed MA falls into the category of entropy encoding measures and is indistinguishable from an implementation motivated by algorithmic complexity using methods such as LZW and cognates, except for perhaps meaningless variations.In Section 2.4 we have shown the marked similarity between Huffman coding and MA results.We found that the similarity depended on the fact that compression algorithms create unique blocks in generating the structure/system.However, the Huffman coding provides the most optimal way of counting copies-by finding the shortest number of steps (what the authors call 'assembly pathways' [17]).While the Huffman coding algorithm's purpose is compression efficiency, it effectively does what AT was meant to do but efficiently by optimisation of the tree length.Together with other popular statistical approaches, it has been universally used for data compression and computable estimations which are based on the more general principles of algorithmic complexity [13] and logical depth [21,25] and of which AT is a very special narrow and weak case.
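To make the comparison concrete, the following sketch (our own illustration, not the authors' code) builds an optimal Huffman code for ABRACADABRA and reports the per-symbol code lengths, i.e., the depths of the leaves in the coding tree.

import heapq
from collections import Counter

def huffman_code_lengths(word):
    """Return {symbol: optimal Huffman code length} for the symbols of `word`."""
    freq = Counter(word)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): 1}
    # Heap items: (total frequency, unique tie-breaker, {symbol: current depth}).
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)       # two least frequent subtrees
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}   # merged subtree is one level deeper
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

lengths = huffman_code_lengths("ABRACADABRA")
total_bits = sum(lengths[c] for c in "ABRACADABRA")   # compressed length in bits
print(lengths, total_bits)

With this tie-breaking, the sketch assigns a 1-bit code to A and 3-bit codes to B, R, C and D (23 bits in total), i.e., a coding tree of four levels counting the root, in line with the 4-level Huffman tree discussed in Figure 5.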
Figure 5 shows an illustration of the standard operation of Huffman coding in a typical example, compared to the principle advanced by the Assembly Theory authors [17].Proposed in the 50s, the Huffman coding exploits block redundancy by parsing objects, counting block recurrence in a nested fashion [10].
As shown in Figure 5 featuring the ABRACADABRA example, to the left (1A), we see the reconstruction of the sequence from a root node by the method proposed by Assembly Theory, in general and in this particular molecular application, when represented as a tree search diagram following binary branching rules.A bifurcation to the right denoted by 1 indicates a new assembly step, whereas a bifurcation to the left indicated by 0 from a node represents a fixed structure (block).The MA algorithm requires 7 assembly steps to derive the sequence of interest.However, as shown to the left in Fig 1B, the Huffman coding tree optimises the sequence reconstruction by principles of recursivity in its search compression, as evidenced by the nested bifurcations.Given that the Huffman tree is more compact (fewer assembly steps) than MA, we demonstrate that the Huffman tree is a more robust compression algorithm than MA when it comes to characterising the molecular complexity of complex structures such as biomolecular signatures.
The superior performance of the Huffman coding at 'counting copies', the original objective of 'assembly theory' [17], can be partially explained by its treatment of binary bifurcations, which are precursors to tree-like recursive structures. MA lacks bifurcations in the assembly search and instead considers a combinatorial search space with a linear sequence progression. Hence, it shirks the quantification of emergent hierarchical or nested structures (i.e., modularity optimisation) and of intermediate structures within the sequence decomposition/compression. In contrast to MA, the recursiveness observed in complex molecules and biosignatures is detected by RLE and Huffman coding, which do so in the most optimal way by providing the shortest tree (assembly pathway) needed.

Figure 5: ABRACADABRA tree diagrams for Assembly Theory (A) and dynamic Huffman coding (B), both computable measures trivial to calculate. Huffman's is an optimal compression method able to characterise every statistical redundancy, including modularity. The (molecular) assembly index (to the left) is a suboptimal approximation of Huffman's coding (to the right) or of a Shannon-Fano algorithm, as introduced in the 1960s. In this example, Huffman's coding collapses the compression tree into a 4-level tree, while MA's is a 7-level tree. In both cases, the resulting tree characterises the same word and is able to reconstruct it in full, without any loss of information, by exploiting redundancy and nestedness.
We suggest that the reason behind this is that modularity, 'nestedness', or recursion are inherent to the binary tree search framework of the Huffman coding algorithm that implements 'counting copies'.Thus, in this analysis MA was found to produce an expanded and suboptimal version of the Huffman coding tree for the same purpose, searching for the minimal path length (steps) to obtain a structure or sequence while considering only the frequency of the largest block size (e.g.ABRA) in the optimisation of the search.Even if modified, MA would only perform as well as a Shannon-Fano or Huffman coding.On the other hand, the Huffman coding shows the emergence of all unique blocks, including recursive sub-structures (and their respective frequencies), in the shortest number of steps as a tree diagram with its shortest description, which is what assembly theory and molecular assembly seemed intended to originally capture and what the authors designate as 'counting the number of copies' [17].
Hence, Huffman, though a very simple compression algorithm, is an optimal coding scheme instantiating what assembly theory intended but failed to implement, namely, to count the minimum of expected steps [6], capturing all nested copies in an object.
A correlation plot between 'Molecular Assembly' (MA) and various coding measures is provided in Figure 4 incorporating the spectroscopy data from the original molecular assembly paper and completing the demonstration of the under-performance or equivalence of MA compared to the simplest coding algorithms (RLE and Huffman) and an estimation of a measure of algorithmic complexity (that the authors have claimed before is impossible to compute or use but also produces similar or better results than MA) [17].This disproves any claim of novelty or over-performance as regards MA as compared to long-standing, very basic, and well-known coding schemes.This completes the control experiments missing in the original paper claiming novelty at 'counting copies' as a measure of living systems.
Mischaracterisations
To understand the mathematical limitations underpinning Assembly Theory, first note that the pathway assemblages are characterised by functions of the form f(· · · f(f(w_1, w_2), w_3), . . . , w_k), where (w_1, . . ., w_k, . . .) denotes the object x in the assembly space (Γ, φ) that results from the combination of other objects w_1, w_2, . . ., w_k, etc., also in Γ, and the function f : V(Γ) × V(Γ) → V(Γ) gives the result of combining an object z with w_k. Being limited to joining operations (and this limitation becomes even more dramatic in the non-stochastic generative processes that we will discuss below), Assembly Theory cannot deal with any variation of x or f beyond successive simple constructions. In the general case, most computable objects would be missed by statistical methods (like entropy and cognates such as Assembly Theory). Since probability distribution uniformity does not guarantee randomness [3,5,26], most objects, both in theory and in practice, cannot be recognised or characterised by weak computable measures, especially by those that are largely based on entropy measures, such as statistical compression algorithms or Assembly Theory.
As mentioned in Section 1, such a mischaracterisation has its roots in the reason any particular statistical test may fail to capture a mathematical formalisation of randomness, an inadequacy which prompted the positing of algorithmic randomness [5,8].For every computable statistical test (e.g., obeying the law of large numbers or displaying Borel normality) for which there is a computably enumerable number of sequences that satisfy it, there are arbitrarily large initial segments of sequences that can be computed by a program, although these initial segments would be deemed random by statistical tests.
On the contrary, algorithmic randomness requires the sequence to be incompressible (and, as a consequence, uncomputable) across the board, or to pass any feasible statistical test.More formally, any sufficiently long initial segment of an algorithmically random infinite sequence is incompressible (except by a fixed constant) or, equivalently, the sequence does not belong to the infinite intersection of any Martin-Löf test [5,8].As a unidimensional example in the context of sequences, algorithmic complexity theorists very soon realised that an object such as 123456789101112 . . .could be very misleading in terms of complexity.Note that this sequence in fact defines the Champernowne constant C 10 = 0.123456789101112, a complexity-deceiving phenomenon from the Borel normal numbers [26] that is generated by one of the most modular forms of a function type, recursion and iteration of a successor-type function f (x 0 , x i ) = x i + x 0 = x i+1 for x 0 = 1.The ZK graph [26] which is constructed using the Champernowne constant as the degree sequence, was shown to be a near-'maximal entropy graph with low algorithmic complexity [26].The reader is invited to note how such a mathematical concept motivated the construction of deceiving molecules in [22].Indeed, as we move beyond the realm of pure stochastic processes, complexity distortions become even more problematic.As demonstrated in [22, Theorem 2.4 and Corollary 2.5], there are (sufficiently large) deceiving molecules the complexities of whose respective generative processes arbitrarily diverge from the assembly index that the assembly pathway method assigns to them.By a generative model [1] we mean here a model that can be implemented or emulated by the execution of a Turing machine (with one or multiple tapes or any other equivalent model) so that it generates the pathway assembly and its objects.Therefore, generally speaking, this proposed assembly index fails to capture the minimality that is necessary for a complexity measure that may be claimed to be unambiguous and observer-independent.
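The Champernowne example is easy to make concrete. The short sketch below (our own illustration) generates the digit sequence by iterating a successor function and then estimates its digit-level Shannon entropy, which approaches the maximal value log2(10) ≈ 3.32 bits per digit: a complexity-deceiving object that is statistically 'random-looking' yet algorithmically trivial.

from collections import Counter
from itertools import count, islice
from math import log2

def champernowne_digits(n):
    """First n digits of 0.123456789101112..., obtained by concatenating successive integers."""
    digits = (d for i in count(1) for d in str(i))
    return "".join(islice(digits, n))

s = champernowne_digits(10000)
freq = Counter(s)
entropy = -sum((c / len(s)) * log2(c / len(s)) for c in freq.values())
print(s[:20], round(entropy, 3))   # entropy per digit close to log2(10) ≈ 3.322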
Additionally, MA in general fails to avoid false positives in the specific sense that it may not be able to distinguish a "complex" object that is in fact the result of randomly generated generative processes.Under the same assumptions as in [16,17], we construct in [22,Theorem 2.4] a deceiving molecule that has a much larger MA value in comparison to the minimal information sufficient for a randomly generated generative process to singlehandedly construct this molecule.Whatever arbitrarily chosen method is used to calculate the statistical significance level, the MA of this molecule is large enough to make the expected frequency of occurrence (estimated via the arbitrarily chosen Assembly Theory) diverge from the actual probability (which derives from the random generation of the computable processes).In this case, Assembly Theory would consider such a molecule "biotic", resulting from extrinsic factors that increase biases toward certain pathways or that constrain the range of possible joining operations, although its sole underlying generative process in fact results from fair-coin-toss random events.This proven existence of false positives due to such a deceiving phenomenon is corroborated by our findings in Sections 2.4 and 2.5, which show that MA displays a behaviour that is both structurally and empirically similar to traditional statistical compression methods.Indeed, the latter methods are already known to present distorted values [26], performing worse than more recent algorithmic-based methods [28].Thus, they are prone to overestimating complexity, and consequently to presenting false positives for "high"complexity objects.
The key rationale behind this result is the computable nature of MA, so that given the set of biases and the joining operations allowed by the model, objects with much higher MA can be constructed by much simpler (and, therefore, more probable) computable generative processes.Thus, in this context, MA (or any computable 'assembly' measure of this basic statistical type) will underestimate the frequency of occurrence of objects with high MA that in fact were constructed by much simpler randomly generated processes.This means that MA would misidentify molecules as byproducts (or constituents) of livings systems that resulted from evolutionary processes, while in fact these molecules might have been byproducts of single-handed computable systems (natural or artificial) that were randomly generated by a fair coin toss, and as such are not the result of an evolutionary process of optimisation over time.
Nevertheless, note that it is true that there are computable (lossless) encodings of a source, such as Huffman coding, that are proven to be optimal on average, but only if one knows a priori that the underlying processes generating the objects are purely stochastic (in particular, when one knows beforehand that the conditions of the source coding theorem are satisfied [6]).In this case, one can show that the minimum expected size of the encoded object converges to its expected algorithmic complexity.However, pure stochasticity is too strong an assumption, or does not realistically represent the generative processes of molecules.This is because, especially in the context of complex systems like living organisms, organic molecules may be the byproduct of intricate combinations or intertwinements of both deterministic/computable and stochastic processes that govern the behaviour of the entire organism [23,27].Moreover, as shown by [22,Corollary 2.5], the deceiving phenomena can be equally bad or even worse in case the molecules are byproducts of complex systems that are somehow capable of universal computation.
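For completeness, the optimality statement referred to here is, in its standard form, the noiseless source coding bound: for an optimal prefix-free code (e.g., a Huffman code) over a source X with known distribution, the expected codeword length E[ℓ] satisfies

H(X) ≤ E[ℓ] < H(X) + 1,

and, for computable distributions, the expected prefix algorithmic complexity coincides with H(X) up to an additive constant that depends on the distribution. Both statements presuppose that the source really is stochastic and that its distribution is known, which is precisely the assumption questioned here.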
When processes that are not purely stochastic are also possible generative processes, there is no such thing as a generally optimal complexity measure that cannot be improved upon, since computable complexity measures are dependent on the observer (or the chosen formal theory) [1].Without the necessary conditions being satisfied by the underlying stochastic process, one cannot generally guarantee such a convergence between the expected size of the encoded object and the expected algorithmic complexity that is assured by the source coding theorem.For example, in the case of advanced civilisations that are capable of artificially constructing living beings by computable processes, simplistic complexity measures such as MA can be intentionally misled with respect to what actually should be measured.
Conclusion
We have shown that the method at the heart of the so-called Assembly Theory (AT), as advanced in [17] and several other papers from the same group, is a suboptimal, weaker version of the Shannon-Fano and Huffman encoding algorithms, which form the basis upon which most popular statistical lossless compression algorithms work and which rest on the very principle of 'counting repetitions' that AT intended to implement. Shannon-Fano-type and Huffman encoding algorithms are not sophisticated compression algorithms but very basic coding schemes introduced at the very beginning of information theory; they do not incorporate the many advances made in recent decades in coding and compression theory and are thus regarded as very basic 'counting algorithms'.
The concepts and ideas underpinning Assembly Theory, as well as the challenges it faces, are very much part and parcel of the decades-long history of research in complexity theory.For example, Bennett faced the same sorts of problems that the authors believe they are facing for the first time [17], such as the differences between taking the shortest or average paths.The authors rehash and reinvent concepts and measures, not properly citing essential work.We have shown, for example, that the characterisation of simpler molecules using mass spectrometry signatures is not a challenge for other computable and statistically weak indexes and that as soon as these (including MA) are confronted with more complicated cases of non-linear modularity, they fail.We have shown that the best performance of molecular assembly does not outdistance other measures of a statistical nature.
Our theoretical and empirical results also show that molecular assembly (MA), and its generalisation in Assembly Theory, is easily prone to false positives and fails to capture the notion of high-level complexity (non-trivial statistical repetitions) necessary for distinguishing a serendipitous extrinsic agent (e.g. a chemical reaction) that constructs, or generates, the molecule of interest from a simple or randomly generated configuration.As empirically demonstrated, other indexes outperform MA as a discriminant of the biosignature categories, both, by InChI and by mass spectra (MS2 peak matrices), thereby dismissing MA, as the only experimentally valid measure of molecular complexity, rendering it irrelevant.The list of MA values for all mass spectral signatures is made available in [17] (supplementary information).In all cases, other indexes outperform MA both using only InChI strings or mass spectral matrices (taking MA values from their paper).In fact, it had been already reported before that some degree of discrimination between organic and inorganic molecules/compounds was possible using InChI codes [32].
Lacking the capability of detecting essential features of complex structure formation that go beyond a linear and combinatorial sequence space (optimised for only the largest repeated block sizes), Assembly Theory and its simplistic (mathematical and computational) methods may return misleading values that would classify a low complexity molecule as being extrinsically constructed by a much more complex agent, thus failing to characterise extraterrestrial life, as the authors have widely claimed [17].In fact, this extrinsic agent may be of a much simpler nature (e.g. a naturally occurring phenomenon).
Thus, the claim that Assembly Theory can quantify natural selection and emergence lacks any substance and if it were true then all other weak indexes explored would do too.As we have clearly shown, it is easy to mislead Assembly Theory with a simple recursive function that takes a module, iterates over a number of steps, and keeps adding a new module every number of steps to iterate over a new block.As matters stand, the bold claims regarding the capabilities of this Assembly Theory to characterise life, and even extraterrestrial life, are misleading or hugely exaggerated, attracting undeserved media attention, to the detriment of new and past research.
For example, more careful and deeper arguments regarding simplicity, recursivity, and the emergence of modularity in life have been advanced and are better grounded in a theoretical and methodological framework advanced in [9], where it was shown that exploiting first principles of computability and complexity theories, modular properties in living systems may be explained.
Living systems are complex systems consisting of multiscale, multi-nested processes that are unlikely to be reducible to simplistic and intrinsic statistical properties such as those suggested by AT and MA theory.We cannot conceive a measure that only looks at the internal structure of an agent isolated from its environment and how it interacts with its external medium to determine its (non)living nature.
Methods
Various complexity measures were used to classify living vs. non-living molecules from the mass spectrometry (MS) data in a four-category scheme: natural compounds, metabolites, pharmaceuticals and industrial compounds, where the natural compounds include the amino acids.The results were subjected to statistical analyses such as the Kolmogorov-Smirnov test, one sample t-tests, and Pearson correlation analysis using GraphPad Prism v. 8.4.3.
The mass spectrometry (MS) data were further analysed using various complexity measures, including the 1D-string and 2D-matrix Block Decomposition Method (BDM) [27,29], Shannon entropy, and compression algorithms, namely Lempel-Ziv-Welch (LZW), Run-Length Encoding (RLE), Huffman coding, and gzip. All data were first binarised using an online text-to-binary converter with ASCII/UTF-8 character encoding. The InChI strings of the 99 molecules from Figure 2B (MW vs. MS data) and of the 114 molecules from Figure 3 (MS data standard curves) of the original paper were analysed using the OACC (Online Algorithmic Complexity Calculator) app in R, which computed the 1D-BDM (block size of 2, alphabet size of 2, block overlap of zero) and Shannon entropy scores. The LZW compression lengths were computed with an online LZW calculator using UTF-8 encoding for the 1D strings. Likewise, RLE and Huffman coding compression lengths were obtained using online calculators as additional lossless compression measures to assess the MS biosignatures. The RLE calculator was set to character-then-count mode, while the Huffman coding calculator output was set to compression ratio. For the biological extracts analysis of Figure 4, we used the mass spectra peak matrices (MS2 peaks vs. number of peaks) for the above-described analysis, after binarisation above the threshold.
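The 1D pipeline just described can be reproduced offline. The following sketch is our own illustration, with an illustrative InChI string rather than one taken from the dataset: it binarises a string via UTF-8, computes the Shannon entropy of the binarised sequence, and reports a gzip-compressed length.

import gzip
from collections import Counter
from math import log2

def to_bits(s):
    """Binarise a string via UTF-8 encoding into a sequence of '0'/'1' characters."""
    return "".join(f"{byte:08b}" for byte in s.encode("utf-8"))

def shannon_entropy(seq):
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())

inchi = "InChI=1S/C6H6/c1-2-4-6-5-3-1/h1-6H"       # illustrative input (benzene)
bits = to_bits(inchi)
print(shannon_entropy(bits))                        # entropy of the binarised string
print(len(gzip.compress(inchi.encode("utf-8"))))    # gzip-compressed length in bytes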
To perform the 2D-BDM on the MS signatures (molecules), the structural distance matrix was extracted from the 2D molecular structure SDF files for each molecule using the PubChem database. Binary conversion was performed on the matrices in R at five different conversion thresholds (i.e., −1, 0, 1, 3 and 5). The binarised molecular distance matrices were processed by the PyBDM code (see [22]) to obtain the 2D-BDM scores for each molecule. Distance matrices at a binary conversion threshold of 3 were found to be optimal in discriminant analysis of MS signatures into life vs. non-life categories. The matrices at a conversion threshold of 3 were used to compute the 2D-Huffman code and 2D-RLE compression lengths.
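The 2D step can be sketched analogously. In the sketch below (our own), the matrix is a random placeholder for a molecular distance matrix, the comparison direction of the thresholding is our assumption, and pybdm is used as in Algorithm 1 of the Appendix.

import numpy as np
from pybdm import BDM

rng = np.random.default_rng(0)
D = rng.uniform(0, 6, size=(8, 8))     # placeholder distance matrix for one molecule

threshold = 3                          # binary conversion threshold used in the text
M = (D > threshold).astype(int)        # binarised matrix (thresholding convention assumed here)

bdm2d = BDM(ndim=2)                    # two-dimensional Block Decomposition Method
print(bdm2d.bdm(M))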
A Appendix
In recent papers, a method and measure have been proposed claiming to be capable of identifying and distinguishing molecules related to living systems versus non-living ones, among other capabilities.In the main article, we demonstrated that the assembly pathway method is a suboptimal restricted version of Huffman's (Shannon-Fano type) encoding so that it falls into the category of a purely (weak) entropic measure for all purposes.This supplementary material contains more information about motivations, algorithms, code, methods, and theorems with respect to the article under the same title.
Having identified a lack of control experiments and a limited analysis offered, we compared other measures of statistical and algorithmic nature that perform similarly, if not better, than the proposed assembly one at identifying molecular signatures without making recourse to a new theory.
Previous work claimed that the computable nature and tree-like structure of Assembly Theory were an advantage with respect to classifying the complexity of biosignatures. This is, however, one of its main weaknesses with respect to both grasping the complexity of the object and distinguishing it from a stochastically random ensemble. We demonstrated that the assembly pathway method is a suboptimal, restricted version of long-used compression algorithms and that the "assembly index" performs similarly to, if not worse than, other popular statistical compression algorithms.
Simple modular instructions can outperform the pathway assembly index because it falls short of capturing the subtleties of trivial modularity. In addition, there are deceiving molecules whose low complexities arbitrarily diverge from the "random-like appearance" that the assembly pathway method assigns to them with arbitrarily high statistical significance. Our theoretical and empirical results imply that the pathway assembly index is not an optimal complexity measure in general and can return false positives. We have also suggested how the previous empirical methods can be applied to improved complexity measures that can better take advantage of the computational resources available.
The group behind "Assembly Theory" ignores and neglects decades of previous work, such as the literature on resource-bounded complexity, self-assembly, modularity and self-organisation, a full review of which is beyond the scope of this work. The challenges "Assembly Theory" faces are the ones that half a century of negative results in complexity theory has confronted and (partially) solved by dealing with (semi-)uncomputable measures, after finding that computable measures, which fall into the class of trivial statistical ones, are of limited use and ill-defined, and can be not only highly misleading but also a regression for the field.
PyBDM Code for CTM and BDM
The Coding Theorem Method (CTM) and the Block Decomposition Method (BDM) are resource-bounded computable methods [15,18] that attempt to approximate semi-computable measures. These measures generalise statistical measures and are more powerful than the methods proposed in "Assembly Theory", as they combine global calculations of classical entropy with local estimations of algorithmic information content.
Algorithm 1: Python implementation of the 2D Block Decomposition Method (PyBDM)

import numpy as np
import pandas as pd
from pybdm import BDM

# Load the binarised (0/1) distance matrix from a CSV file
X = pd.read_csv(r'file directory', dtype=int)

# Initialise a two-dimensional BDM estimator and evaluate it on the matrix
bdm = BDM(ndim=2)
Z = X.to_numpy()
bdm.bdm(Z)

B Deceiving molecules (or objects in an assembly space) with high assembly indices

The main idea behind the following theoretical results is to construct a randomly generated program that receives a formal theory (which contains all the computable procedures and statistical criteria in assembly theory) as input. It then searches for a molecule (or object in an assembly space) with MA sufficiently high to make the pathway probability of spontaneous formation sufficiently lower than the deceiving program's own algorithmic probability, so that the divergence between these two probability distributions becomes statistically significant according to the chosen statistical method and significance level. One challenge in achieving such a result is to account for the cases in which only a subclass of possible computable processes is allowed to perform the assembly rules in order to construct molecules (e.g., those allowed by the currently known laws of physics in the case of molecules), and therefore we shall employ a variation of the traditional algorithmic complexity and algorithmic probability studied in AIT. In this case, not every type of computable function may represent an effective or feasible process that constructs a molecule. Thus, in some cases the range of generative processes that can give rise to (or construct) a molecule may not comprise all possible computable functions. For this reason, we will employ a suboptimal form of the algorithmic complexity that is defined on non-universal programming languages (i.e., subrecursive classes). Nevertheless, in the ideal case in which the whole algorithm space indeed constitutes the set of all possible generative processes for constructing the assembly space (e.g., when biological systems achieve the capability of effecting universal computation in the real world [16]), we show in Corollary B.5 that the deceiving phenomenon holds in the same way (or can even be worse).
A deceiving phenomenon akin to the one employed in Theorem B.4 can be found in [3] based upon the principles in [17], where sufficiently large datasets were constructed so that they deceive statistical machine learning methods into being able to find an optimal solution that in any event is considered global by the learning method of interest, although this optimal solution is in fact a simpler local optimum from which the more complex actual global optimum is unpredictable and diverges.
This phenomenon is also related to the optimality of the algorithmic complexity as an information content measure that takes into account the entire discrete space of computable measures [6,8], or the maximality of the algorithmic probability as a probability semimeasure on the infinite discrete space of computably constructible objects, as demonstrated by the algorithmic coding theorem [5,8,11].
However, unlike in these previous cases, our proof is based on finding a deceiver algorithm that constructs an object with sufficiently high value of assembly index such that its expected frequency of occurrence is much lower than the algorithmic probability of the deceiver itself, and in this way passing the test of any statistical significance level the arbitrarily chosen formal theory may propose.
In order to achieve our results, we base our theorems on mathematical conditions that are consistent with the assumptions and results in [13,14].The first assumption that we specify with the purpose of studying a worstcase scenario is that the assembly space should be large enough so as to include those molecules (or objects) with sufficiently large MA (along with its associated sufficiently low pathway probability of spontaneous formation) relative to the algorithmic complexity of the deceiving program.For the sake of simplicity, we assume that the nested family S of all possible finite assembly spaces from the same basis (i.e., the root vertex that represents the set of all basic building blocks) is infinite computably enumerable.However, an alternative proof can be achieved just with the former-and more general-assumption that the assembly space may be finite but only needs to be sufficiently large in comparison to the deceiving program.Indeed, our assumption is in consonance with the authors' motivation (and/or assumption) that "biochemical systems appear to be able to generate almost infinite complexity because they have information decoding and encoding processes that drive networks of complex reactions to impose the numerous, highly specific constraints needed to ensure reliable synthesis" [14].Closely related to the first assumption, we also assume that there always are molecules with arbitrarily low path probabilities, which follows from the notion that, as infinitesimal as it might be, there is always a chance of randomly combining elements from an unlikely (but possible) sequence of events so as to give rise to a certain complex molecule.
Thirdly, in accordance with the arguments in [12][13][14] that the computability and feasibility of their methods is an actual advantage in comparison with other complexity measures, here we likewise assume that the following are computable procedures:

• deciding whether or not a finite assembly space (or subspace) is well formed according to the joining operation rules that are allowed to happen;
• calculating the MA of a finite molecule (i.e., a finite object) in a well-formed assembly space (or subspace);
• calculating the chosen approximation of MA (e.g., the split-branch version) of a finite molecule in a well-formed assembly space (or subspace);
• calculating an upper bound for the pathway probability of spontaneous formation of a molecule in the denumerable nested family of possible finite assembly spaces;
• calculating the significance level for a frequency of occurrence of a molecule in a sample, so that this empirical probability distribution (i.e., the type of the sample) diverges from the pathway probability distribution of spontaneous formation of the molecule.

In this manner, one can now demonstrate the following theorems. Besides the notation from [13] for assembly theory, we also employ the usual notation for Turing machines and algorithmic complexity.
As in [13, Definition 11] and [13, Definition 15], respectively, let (Γ, φ) denote either an assembly space or an assembly subspace. From [13, Definition 19], we have that c_Γ(x) denotes the assembly index of the object x in the assembly space Γ.
Note that assembly spaces are finite. So, from our assumptions, we need to define a pathway assembly that can deal with arbitrarily large objects. To this end, let S = (Γ, Φ, F) be an infinite assembly space, where every assembly space Γ ∈ Γ is finite, Φ is the set of the corresponding edge-labeling maps φ_Γ of each Γ, and F = (f_1, . . ., f_n, . . .) is the infinite sequence of embeddings [10] (in which each embedding is also an assembly map as in [13, Definition 17]) that ends up generating S. That is, each f_i : {Γ_i} ⊆ Γ → {Γ_{i+1}} ⊆ Γ is a particular type of assembly map that embeds a single assembly subspace into a larger assembly subspace, so that the resulting sequence of nested assembly subspaces defines a total order on these subspaces. Let γ = z . . . y denote an arbitrary path from z ∈ B_S to some y ∈ V(S) in S, where B_S is the basis (i.e., the finite set of basic building blocks) of S and V(S) is the set of vertices of S. Let γ_x denote a path from some z ∈ B_S to the object x ∈ V(S).
Let Γ * x denote a minimum rooted assembly subspace of Γ from whose augmented cardinality the assembly index c Γ (x) is calculated, and whose longest rooted path γ x ends in the arbitrary object x ∈ V (Γ), as in [13, Definition 19].
As usual, let U be a universal Turing machine on a universal programming language L. Let U(x) denote the output of the universal Turing machine U when x ∈ L is given as input in its tape. Let ⟨· , ·⟩ denote an arbitrary recursive bijective pairing function [8,11] so that the bit string ⟨x, y⟩ encodes the pair (x, y), where x, y ∈ N. Note that this notation can be recursively extended to ⟨· , . . . , ·⟩ in order to represent the encoding of n-tuples.
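As a concrete illustration of such a pairing function, the sketch below uses the classical Cantor pairing over the natural numbers; the text works with bit-string encodings, so this integer-valued variant is only an assumed stand-in for illustration.

```python
def cantor_pair(x: int, y: int) -> int:
    """Cantor pairing: a recursive bijection from N x N to N."""
    return (x + y) * (x + y + 1) // 2 + y

def encode_tuple(*xs: int) -> int:
    """Extend the pairing to n-tuples by nesting, mimicking <x1, ..., xn>."""
    code = xs[0]
    for x in xs[1:]:
        code = cantor_pair(code, x)
    return code

print(cantor_pair(3, 5))       # 41
print(encode_tuple(1, 2, 3))   # cantor_pair(cantor_pair(1, 2), 3)
```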
We have that the (prefix) algorithmic complexity, denoted by K(x), is the length of the shortest prefix-free (or self-delimiting) program x* ∈ L that outputs the encoded object x in a universal prefix Turing machine U, i.e., U(x*) = x and the length |x*| = K(x) of program x* is minimum. In addition, the algorithmic coding theorem [5,6,8,11] guarantees that this complexity equals, up to an additive constant, the negative logarithm of the corresponding universal a priori probability. Here, L Γ ⊆ L is a (non-)universal programming language such that every generative process of an assembly space is bijectively computed (or emulated) by U(x) with some x ∈ L Γ as input. In other words, for every generative process that can assemble objects into building another object, there is a program x ∈ L Γ that computes (or emulates) this process. In the case of infinite assembly spaces, one analogously defines the language L S ⊆ L. In the special case in which the generative processes of the assembly space S are capable of universal computation, one has that L S = L holds. Additionally, for every x ∈ L Γ , there is a generative process which is computed (or emulated) by program x ∈ L Γ . Also, for the sake of simplicity, let K Γ denote the sub-algorithmic complexity K f Γ . That is, K Γ (x) gives the length of the shortest program that can compute or emulate a generative process of the object x in the assembly space Γ. In the case of infinite assembly spaces, one analogously defines the sub-algorithmic complexity K S and the sub-universal a priori probability upon the language L S .
Lemma B.1. Let S be infinite computably enumerable. Let F be an arbitrary formal theory that contains assembly theory, including all the decidable procedures of the chosen method for calculating (or approximating) the MA of an object for a nested subspace of S, and the program that decides whether or not the criteria for building the assembly spaces are met. Let k ∈ N be an arbitrarily large natural number. Then, there are a program p y , an assembly space Γ ∈ S, and an object y ∈ V (Γ) such that c Γ (y) ≥ |p y | + k, where the function c Γ : Γ ⊂ S → N gives the MA of the object y in the assembly space Γ (or S) and U (p y ) = y.
Proof. Let p be a bit string that represents an algorithm running on a prefix universal Turing machine U that receives F and k as inputs. Then, it calculates |p| + |F| + O(log 2 (k)) + k and enumerates S while calculating c Γ (x) of the object (or vertex) x ∈ V (Γ) ⊂ V (S) at each step of this enumeration. Finally, the algorithm returns the first object y ∈ V (S) for which c Γ (y) ≥ |p| + |F| + O(log 2 (k)) + k holds. In order to demonstrate that p always halts, just note that S is infinite computably enumerable. Also, for any value of c Γ (z) for some z ∈ V (Γ) ⊂ V (S), there is only a finite number of possible paths starting on any object in B S and ending on z in c Γ (z) steps, where B S is the basis (i.e., the finite set of basic building blocks [14, Definition 12]) of S. This implies that there are only finitely many objects whose assembly index stays below any given bound, so the enumeration must eventually reach an object satisfying the required inequality and p halts.
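To make the computability assumptions above concrete, the toy sketch below computes an assembly index by exhaustive search for strings, where the joining operation is concatenation and every previously built object may be reused at no extra cost; it only illustrates the kind of procedure assumed to be computable, not the molecular method of [13,14].

```python
from itertools import count

def assembly_index(target: str) -> int:
    """Minimal number of joining operations needed to build `target` from its
    individual characters, when joining means concatenation and every object
    assembled so far may be reused."""
    basis = frozenset(target)
    # In a shortest pathway every intermediate may be assumed to be a
    # substring of the target, so the search is pruned to substrings only.
    substrings = {target[i:j] for i in range(len(target))
                  for j in range(i + 1, len(target) + 1)}

    def reachable(pool: frozenset, steps_left: int) -> bool:
        if target in pool:
            return True
        if steps_left == 0:
            return False
        new_objects = ({a + b for a in pool for b in pool} & substrings) - pool
        return any(reachable(pool | {obj}, steps_left - 1) for obj in new_objects)

    # Iterative deepening: the first depth that succeeds is the assembly index.
    return next(k for k in count(0) if reachable(basis, k))

print(assembly_index("BANANA"))  # 4, e.g. AN -> ANA -> ANANA -> BANANA
```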
Figure 1 :
Figure 1: Correlation plot between 'Molecular Assembly' (MA) and other coding algorithms.The strongest positive correlation was identified between MA and 1D-RLE coding (R= 0.9), which is one of the most basic coding schemes and among the closest to the original definition of MA.Other compression algorithms, including the Huffman coding (R = 0.896), also show a strong positive correlation with MA.As seen, the compression values of both 1D-RLE and 1D-Huffman coding show overlapping and nearly identical medians (horizontal line at centre) and ranges on the whisker plot.This analysis reveals the similarity in behaviour of MA and popular statistical lossless compression algorithms that are based on the same counting principles.
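For reference, 1D-RLE is the simplest "copy counting" scheme mentioned above; the sketch below uses the number of (symbol, run-length) pairs as a crude size proxy, which is an assumption for illustration rather than the exact encoding used in the correlation analysis.

```python
from itertools import groupby

def rle(seq: str) -> list:
    """1D run-length encoding: collapse each run of identical symbols
    into a (symbol, run_length) pair."""
    return [(symbol, len(list(run))) for symbol, run in groupby(seq)]

def rle_size(seq: str) -> int:
    """Crude complexity proxy: the number of (symbol, count) pairs needed."""
    return len(rle(seq))

print(rle("AAABBC"))       # [('A', 3), ('B', 2), ('C', 1)]
print(rle_size("AAABBC"))  # 3 -- repeated copies compress well
print(rle_size("ABABAB"))  # 6 -- no runs to exploit
```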
Figure 3 :
Figure 3: Analysis of the 114 molecules from the MS2 spectra standard curves derived from Figure 3 of the original paper by Marshall et al., and other popular indexes. Discriminant/classification analysis was performed on the two categories of molecular signatures: 94 small molecules vs. 18 peptides. The strongest Pearson correlation was identified between 1D-BDM and the category of molecules (R = 0.828; P < 0.0001). The complete correlation analysis of the 114 molecules classification is provided in Table 2.
Figure 4 :
Figure 4: Analysis of living vs. non-living mass spectra using complexity measures: The strongest positive correlation was identified between MA and 1D-RLE coding (R = 0.9), which is one of the most basic coding schemes and among the most similar to the intended definition of MA, as being capable of 'counting copies', in 132 molecules (114 plus 18 living extracts). Other coding algorithms, including the Huffman coding (R = 0.896), also show a strong positive correlation with MA. As seen, the compression values of both 1D-RLE and 1D-Huffman coding show overlapping and nearly identical medians (horizontal line at centre) and ranges on the whisker plot. The analysis further confirms our previous findings on the similarity in performance between MA and basic compression measures (that basically only 'count copies') in classifying living vs. non-living mass spectra signatures.
Table 1 :
Table of Pearson correlation values of MA and other indices across the four categories of mass spectroscopy (MS) signatures. LZW and BDM are given in bits, meaning the length of the compressed description of the object, including the number of steps. Both LZW and BDM generate better statistics than MA without any adaptations or modifications.
Table 2 :
Table of Pearson correlation values of MA and complexity indices across the two categories (small molecules and peptides) of mass spectroscopy (MS) signatures seen in Figure 3 of the original paper. BDM, RLE, and Huffman are given in log-normalized bits. As shown, BDM generates better statistics than MA without any adaptations or modifications, while Huffman shows near identical correlation performance as MA, thereby supporting our findings.
Table 3 :
Table of Pearson Correlation values of MA and complexity measures across all 132 molecules, including the biological extracts from Figure 4 of the original paper.BDM, RLE, and Huffman are given in log-normalized bits.As shown, BDM and the compression algorithms generate better statistics than MA.
|
2022-10-04T06:42:08.599Z
|
2022-09-30T00:00:00.000
|
{
"year": 2024,
"sha1": "53b68b5b497ef544194762f61af48d431d17ed03",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2210.00901",
"oa_status": "GREEN",
"pdf_src": "ArXiv",
"pdf_hash": "53b68b5b497ef544194762f61af48d431d17ed03",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
122889285
|
pes2o/s2orc
|
v3-fos-license
|
Instability of a Square Sheet under Symmetric Biaxial Loading
An early experiment found that a square rubber sheet under symmetric biaxial loading may not remain square. This curious result has been one of the most instructive examples in finite elasticity. Here thermodynamic considerations are used to analyze this instability.
Introduction
An early experiment found that a square rubber sheet under symmetric biaxial loading may not remain square. This curious result has been one of the most instructive examples in finite elasticity. The instability of a square rubber sheet under symmetric biaxial loading was first reported by Treloar in 1948 and since then it has been discussed and analyzed quite often in the literature (Kearsley (1986), Chen (1987), Ericksen (1991), Müller (1996)).
Although the subject is a mechanical problem in nature, unlike the previous works, we shall rely on thermodynamic considerations to establish a stability criterion with proper boundary conditions of a biaxially loaded square sheet for the analysis of instability.
We consider an incompressible isotropic elastic body of Mooney-Rivlin materials, whose free (strain) energy function ψ is given by

ρψ = α(I − 3) + β(II − 3),     (1)

where I and II are the first and the second invariants of the Cauchy-Green strain tensor, and ρ is the mass density. Both α and β are material constants, which according to experimental results for rubber satisfy the following inequalities:

α > 0,   β ≥ 0.     (2)

If β = 0, the material is called Neo-Hookean. We shall consider biaxial loading of a rubber sheet lying in the x-y plane, given by a time-dependent homogeneous deformation which takes a material point at X = (X, Y, Z) to a point at x = (x, y, z) with

x = λ1(t) X,   y = λ2(t) Y,   z = λ3(t) Z,   λ1 λ2 λ3 = 1,     (3)

where the last relation follows from incompressibility. For this deformation, the invariants of the Cauchy-Green strain tensors are given by

I = λ1² + λ2² + λ3²,   II = λ1²λ2² + λ2²λ3² + λ3²λ1².     (4)
Thermodynamic Consideration
We shall first establish a criterion for thermodynamic stability. For a body in a region V at a uniform constant temperature θ and free of external supplies, we have the energy equation, and the entropy inequality, where T is the Cauchy stress tensor, ε is the internal energy and η is the entropy density. By eliminating the heat flux q between (5) and (6), we obtain where ψ = ε − θη is the free energy density. Let the region occupied by the body in the reference state be denoted by V κ ; then the above condition can be written in the reference state as where T κ is the Piola-Kirchhoff stress tensor. Let the region V κ occupied by the square sheet in the reference state be given by 0 ≤ X ≤ 1, 0 ≤ Y ≤ 1, and 0 ≤ Z ≤ D. The sheet is uniformly loaded on the lateral surfaces by the forces per unit area F 1 and F 2 in the X and Y directions respectively and is stress-free on the top and the bottom surfaces, i.e., where e x and e y are the unit base vectors of the coordinate system. Moreover, from (3) we have the corresponding velocity field. Therefore, by the boundary conditions (8) and (9), it follows that the surface working over ∂V κ reduces to contributions from A 1 and A 2 , the lateral surfaces of the region at X = 1 and Y = 1 respectively. Now, considering the deformation process under prescribed biaxial loading, and assuming the process is quasi-static (with negligible acceleration), we have from (7) an inequality which, upon integration (the process being homogeneous), gives the relation (10). Therefore, if we define the availability A(t) by (11), the relation (10) becomes the statement (12) that A(t) does not increase in time. We call A(t) the availability function of the square sheet.
Stability Criterion
We call a deformed state under a prescribed biaxial loading, characterized by the stretches (λ 1 , λ 2 ), a stable equilibrium state if any small perturbation from this state will eventually return to this state as time tends to infinity. Suppose that such a perturbation is represented by a process (λ 1 (t), λ 2 (t)). Since the availability A(t) is a decreasing function of time by the condition (12), it must have a local minimum at the equilibrium state ( λ1 , λ2 ). Therefore, by regarding A as a function of (λ 1 , λ 2 ), this criterion is equivalent to the following conditions: the first derivatives of A vanish, (13), and the Hessian matrix of A is positive semi-definite, or equivalently (14), where E denotes the evaluation at the stable equilibrium state ( λ1 , λ2 ). By the use of the free energy given in (1) and (4), the expression (11) leads to the equilibrium conditions (15) from (13), where the equations are evaluated at the equilibrium state and the overhead bars are suppressed for simplicity. In the case of symmetric loading, F 1 = F 2 , from (15) we obtain the condition (16), where h = β/α is a non-negative material constant from the empirical inequalities (2). This immediately gives the symmetric solution, λ 1 = λ 2 . Since λ 1 , λ 2 , and h are positive quantities, no other solution exists if hλ 1 λ 2 < 1, which rules out the possibility of an asymmetric solution for Neo-Hookean materials (h = 0). The asymmetric solution may exist and can be found from the equation (17). For such a solution, λ 1 and λ 2 are different in general, and the square sheet becomes rectangular after stretching.
Furthermore, the conditions (14) lead to two inequalities: the first, (18), is identically satisfied, while the second is (19). Let the left-hand side of the relation (19) be denoted by f (λ 1 , λ 2 ); then the condition (20), f (λ 1 , λ 2 ) ≥ 0, is the condition for an equilibrium state (λ 1 , λ 2 ) to be stable.
Conclusion
We have plotted the function f (λ, λ) against λ for the symmetric solution for h = 0.1. It shows that for λ ≤ λ B = 3.1685, the function f (λ, λ) is non-negative and therefore, according to the condition (20), the symmetric solution is stable. However, for λ > λ B the function f (λ, λ) becomes negative and hence the square sheet is no longer stable under symmetric loading.
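The loss of stability along the symmetric branch can be checked numerically. The sketch below evaluates the Hessian of the Mooney-Rivlin energy (per unit α; the dead loads are linear in the stretches and so drop out of the second derivatives) by finite differences and locates the stretch at which its determinant changes sign. It is an independent re-derivation under the incompressible-invariant assumptions stated above, not the paper's exact expression for f.

```python
import numpy as np
from scipy.optimize import brentq

def energy(l1, l2, h):
    """Mooney-Rivlin free energy per unit alpha for the incompressible sheet,
    with l3 = 1/(l1*l2) and h = beta/alpha."""
    l3 = 1.0 / (l1 * l2)
    I = l1**2 + l2**2 + l3**2
    II = (l1 * l2)**2 + (l2 * l3)**2 + (l3 * l1)**2
    return (I - 3.0) + h * (II - 3.0)

def hessian_det(l1, l2, h, eps=1e-4):
    """Determinant of the Hessian of the energy with respect to (l1, l2)."""
    d11 = (energy(l1 + eps, l2, h) - 2 * energy(l1, l2, h) + energy(l1 - eps, l2, h)) / eps**2
    d22 = (energy(l1, l2 + eps, h) - 2 * energy(l1, l2, h) + energy(l1, l2 - eps, h)) / eps**2
    d12 = (energy(l1 + eps, l2 + eps, h) - energy(l1 + eps, l2 - eps, h)
           - energy(l1 - eps, l2 + eps, h) + energy(l1 - eps, l2 - eps, h)) / (4 * eps**2)
    return d11 * d22 - d12**2

h = 0.1
lam_B = brentq(lambda lam: hessian_det(lam, lam, h), 2.0, 4.0)
print(round(lam_B, 4))  # ~3.1685: stability of the symmetric branch is lost here
```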
For the asymmetric solution λ 1 ≠ λ 2 under symmetric loading, from the condition (17) one can solve for λ 2 in terms of λ 1 so that λ 2 = g(λ 1 ), and hence f (λ 1 , g(λ 1 )) becomes a function of λ 1 only. Doing this numerically, we can easily verify the condition (20) by plotting the function f (λ 1 , λ 2 ) against λ 1 , from which we conclude that f (λ 1 , λ 2 ) is non-negative and hence the asymmetric solution is always stable.
|
2018-12-29T09:21:49.732Z
|
2000-01-01T00:00:00.000
|
{
"year": 2000,
"sha1": "9a52e4c06647d7182930cfd47c6db736b2a31cf2",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1590/s0100-73862000000400004",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9a52e4c06647d7182930cfd47c6db736b2a31cf2",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
222386688
|
pes2o/s2orc
|
v3-fos-license
|
Performance of the Kato-Katz method and real time polymerase chain reaction for the diagnosis of soil-transmitted helminthiasis in the framework of a randomised controlled trial: treatment efficacy and day-to-day variation
Background Accurate, scalable and sensitive diagnostic tools are crucial in determining prevalence of soil-transmitted helminths (STH), assessing infection intensities and monitoring treatment efficacy. However, assessments on treatment efficacy comparing traditional microscopic to newly emerging molecular approaches such as quantitative Polymerase Chain Reaction (qPCR) are scarce and hampered partly by lack of an established diagnostic gold standard. Methods We compared the performance of the copromicroscopic Kato-Katz method to qPCR in the framework of a randomized controlled trial on Pemba Island, Tanzania, evaluating treatment efficacy based on cure rates of albendazole monotherapy versus ivermectin-albendazole against Trichuris trichiura and concomitant STH infections. Day-to-day variability of both diagnostic methods was assessed to elucidate reproducibility of test results by analysing two stool samples before and two stool samples after treatment of 160 T. trichiura Kato-Katz positive participants, partially co-infected with Ascaris lumbricoides and hookworm, per treatment arm (n = 320). As negative controls, two faecal samples of 180 Kato-Katz helminth negative participants were analysed. Results Fair to moderate correlation between microscopic egg count and DNA copy number for the different STH species was observed at baseline and follow-up. Results indicated higher sensitivity of qPCR for all three STH species across all time points; however, we found lower test result reproducibility compared to Kato-Katz. When assessed with two samples from consecutive days by qPCR, cure rates were significantly lower for T. trichiura (23.2 vs 46.8%), A. lumbricoides (75.3 vs 100%) and hookworm (52.4 vs 78.3%) in the ivermectin-albendazole treatment arm, when compared to Kato-Katz. Conclusions qPCR diagnosis showed lower reproducibility of test results compared to Kato-Katz, hence multiple samples per participant should be analysed to achieve a reliable diagnosis of STH infection. Our study confirms that cure rates are overestimated using Kato-Katz alone. Our findings emphasize that standardized and accurate molecular diagnostic tools are urgently needed for future monitoring within STH control and/or elimination programmes.
Background
With an estimated 1.5 billion infections, the soil-transmitted helminths (STHs), namely Ascaris lumbricoides, Trichuris trichiura and the hookworms Necator americanus and Ancylostoma duodenale, are of enormous public health importance in subtropical and tropical regions, particularly amongst the most marginalized populations [1]. Diseases accompanying these infections can cause considerable burden manifested as malnutrition [2,3], impairment in physical and cognitive development in children [4], reduction in work performance in adulthood [5] and adverse pregnancy outcomes [3,6]. Preventive chemotherapy, the periodic large-scale administration of anthelminthic medicines to at-risk populations without prior diagnosis is the cornerstone of helminth control recommended by the World Health Organization (WHO). It is considered simple and cost-effective in its implementation and to have a strong impact on morbidity by decreasing the worm burden [7]. Accurate, scalable and sensitive diagnostic tools are crucial to assess and monitor treatment efficacy, prevalence and intensity of infection to guide future interventions, including the early detection of possible resistance development [8][9][10][11][12][13]. Cost-effective, sensitive techniques are paramount especially in areas of low endemicity, where a robust surveillance system is needed to approach and monitor elimination [13].
The microscopic Kato-Katz technique is a relatively simple and low-cost method recommended by the WHO for the detection of STH and other helminth eggs in faecal samples [14][15][16]. Consequently, it is widely used in randomised controlled trials (RCTs), epidemiological surveys and surveillance studies to determine the impact of STH interventions. Yet, the technique has considerable shortcomings. There is substantial variation in the readings, resulting from uneven distribution of eggs within a single stool sample (within sample variation), day-to-day fluctuations of egg excretion (between sample variations) and ultimately results depend on the readers' skills and experience [17][18][19][20]. Most importantly, the Kato-Katz method may particularly miss low-intensity infections leading to underestimation of the actual prevalence, but in the case of efficacy trials artificially inflate cure rates (CRs) from undetected residual low-egg count infections post-treatment [21]. Moreover, expertise in microscopy is increasingly rare [22,23].
Over the past few decades, molecular diagnostic methods have been developed for the use in human parasitology in order to increase sensitivity and specificity of the diagnosis of intestinal helminths. qPCR-based assays for the detection of helminth DNA or ribosomal RNA on faecal samples are the most widely used molecular methods [11,[23][24][25]. In recent years, further improvements of the DNA isolation step were made, and multiplex approaches have been developed to detect different parasite targets in a single procedure [17,26]. Higher specificity and sensitivity of molecular diagnostics are generally observed in studies comparing the Kato-Katz thick smear stool examination to molecular methods (primarily qPCR), with rare exceptions [11,20,[27][28][29]. The semiquantitative output of PCR also reflects the amount of parasite DNA present, which could be of further interest as parasite burden rather than absence or presence of a STH infection is a key determinant of morbidity [17,30]. Moreover, nucleic acid amplification may improve the detection in infections with low parasitic burden and has the ability to differentiate between morphologically identical species [31].
Evaluations on drug efficacy using molecular approaches are scarce, even though monitoring drug efficacy is of utmost importance for making treatment recommendations for novel therapies and in the light of possible upcoming anthelminthic resistance [32,33]. Given the higher sensitivity and specificity of qPCR, the few available studies showed that treatment efficacy based on CRs is lower using qPCR detection compared to the microscopic Kato-Katz method. It is worth highlighting that STHs do not release eggs at a constant rate [34][35][36], and we therefore hypothesize that collecting multiple faecal samples might increase the sensitivity of qPCR.
The aim of the present study was to compare the performance of the microscopic Kato-Katz method and the molecular qPCR method for the diagnosis of soil-transmitted helminthiasis and its impact on treatment efficacy and day-to-day variation analysing two stool samples before and after treatment respectively. Stool samples were collected within the framework of a phase III, parallel group, double blind RCT assessing the safety and efficacy of the current standard treatment (albendazole) versus combination therapy (ivermectin-albendazole).
Trial design
Trial details are summarized in the published trial protocol [37] and in the trial registration (clinicaltrials.gov, reference: NCT03527732, date assigned: 17 May 2018). Participants were invited for clinical examination and treatment if found positive for T. trichiura infection in at least two slides of quadruple Kato-Katz thick smears with an infection intensity of at least 100 eggs per gram (EPG) of stool. The samples analysed in this work were collected at baseline and 14-21 days post-treatment between September 2018 and December 2018 in one of the three study settings, on Pemba Island, United Republic of Tanzania.
Laboratory procedures
Two fresh morning stool samples were obtained from each participant within a maximum of 5 days using a door-to-door approach. Collected stool samples were kept in a cool box containing ice packs while being transported to the laboratory. Samples were examined with quadruplicate Kato-Katz microscopy within 24 h after collection for the detection of STH ova by experienced laboratory technicians following the WHO standard procedures [15]. An independent quality control of the Kato-Katz readings for T. trichiura and A. lumbricoides was conducted for 10% of the slides.
Stool samples of participants fulfilling eligibility criteria (minimal egg count for T. trichiura ≥ 100 EPG, 2 or more out of 4 Kato-Katz slides positive) and all identified STH egg negative participants (negative controls without any co-infection) were further processed. In total, 160 randomly selected T. trichiura Kato-Katz positive participants with complete aliquot pairs per treatment arm (n = 320) and 180 identified Kato-Katz helminth negative participants with two baseline aliquots were analysed. An aliquot of stool (~ 1 g) was mixed with 80% ethanol and preserved at 4 °C and shipped at room temperature to the Swiss Tropical and Public Health Institute (Swiss TPH) in Basel, Switzerland for subsequent qPCR analyses.
DNA extraction was performed using the QIAamp DNA Mini kit (Qiagen; Hilden, Germany) with slight modifications from the standard protocol validated and described by Kaisar et al. [17]. A multiplex real-time qPCR was used for simultaneous detection of A. lumbricoides, T. trichiura, N. americanus, A. duodenale and Strongyloides stercoralis. However, the latter parasite was not expected in these samples [38] but was placed together with the hookworm species in the same color channel, in case further specification would be of interest in a second round. Amplification consisted of 2 min at 50 °C, 10 min at 95 °C followed by 45 cycles of 15 s at 95 °C and 1 min at 58 °C. Testing was performed using CFX Maestro ™ (Bio-Rad Laboratories, Inc, Hercules, CA, USA). qPCR plate layouts were generated with a random and balanced distribution; however, all samples of one participant were placed within one plate to reduce between-plate variability, and twelve worm-negative controls were placed in between. Four negative controls containing double-distilled water were randomly placed on each plate to ensure detection of confounding factors. For subsequent standardization of each plate, nine positive controls with rising plasmid concentrations (10^1, 10^3 and 10^5 plasmids/µl) containing an insert with the sequence of the STH qPCR product were included in each amplification run. Standard curves were generated by plotting cycle threshold (Ct) values against the logarithm of starting DNA quantities.
The DNA amplification results of a serial 10-fold dilution series of the plasmids from each specimen were compared in separate reactions. Each dilution series was tested both with and without the other target DNAs to assess the assay's ability to detect mixed infections. The details of all primers and detection probes (Eurofin Genomics, Ebersberg, Germany) and the concentrations of the qPCR using TaqMan GeneExpression MasterMix (ThermoFischer, Switzerland) are presented in the supplementary data (Additional file 1: Tables S1, Additional file 2: Table S2). Extraction of DNA, preparation of the master mix and handling of qPCR products were all performed in different rooms to prevent contamination.
Data preparation
All qPCR assays with an observed copy number above zero were considered positive. All qPCR assays for which no amplification curves were obtained, were considered negative (equalling zero copy numbers). Kato-Katz results were calculated as mean egg counts of the two slides of each time point assessment (baseline day 1, baseline day 2, follow-up day 1 and follow-up day 2) and samples considered positive if at least 0.5 eggs per sample were identified. Data of the amplification curves were cleaned and standardised according to the standard curves with CFX Maestro ™ Software and then uploaded to ELIMU-MDx, an open-source platform for storage, management and analysis of diagnostic qPCR data [39]. Subsequent statistical analyses were conducted using Stata IC15 (StataCorp., College Station, TX).
The Ct value is defined as the number of qPCR cycles needed for the detection of fluorescence signal of the amplified products to pass the fixed threshold value. Accordingly, exceeding that threshold can be interpreted as the earliest qPCR cycle at which point a sample's amplification product is statistically different from the background fluorescence [40]. Consequently, higher quantities of helminth DNA are inversely proportional and thus, result in lower Ct values and vice versa [23]. Amplification curves not following a sigmoidal shape were considered as negative results interpreting these signals as unspecific background noise. Samples below Ct value of 15 were excluded for that species, as the range of standards tested and detected was above Ct 15. Data of the amplification curves were then translated into copies/µl DNA by inserting the average slopes and y-intercepts for each quantified target of the standard curves into a linear equation. Thus, the cycle cut-off points vary for each quencher, depending on the calibration curves obtained. This procedure was done to avoid choosing an arbitrary Ct cutoff which is known to not be ideal, by either being too low (eliminating valid results) or being too high (increasing false-positive results) [40].
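The Ct-to-copies translation described above is just the inversion of a linear standard curve; the sketch below illustrates it with hypothetical Ct readings for the 10^1, 10^3 and 10^5 plasmids/µl dilution series (the numbers are placeholders, not measured values).

```python
import numpy as np

def fit_standard_curve(log10_copies, ct_values):
    """Fit Ct = slope * log10(copies) + intercept from a plasmid dilution series."""
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    return slope, intercept

def ct_to_copies(ct, slope, intercept):
    """Invert the standard curve to translate a sample Ct into copies/ul."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical dilution series matching the 10^1, 10^3 and 10^5 plasmids/ul standards
log10_copies = np.array([1.0, 3.0, 5.0])
ct_values = np.array([34.1, 27.5, 20.8])  # illustrative Ct readings only

slope, intercept = fit_standard_curve(log10_copies, ct_values)
print(round(ct_to_copies(30.0, slope, intercept), 1))  # copies/ul for a sample at Ct = 30
```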
Statistical analysis
Based on available summarised efficacy measures from a recent review [41] and the published literature, the CR of albendazole against T. trichiura was assumed to be 30% compared to 50% in the ivermectin-albendazole treatment regimen according to Kato-Katz. Moreover, the correlation between the two diagnostic test results was assumed to be 0.6. A sample size of 320 Kato-Katz T. trichiura positives (160/treatment arm) was chosen to detect a 10% difference in CRs against T. trichiura between Kato Katz and qPCR with a power of 80% assuming a two-sided type 1 error of 5%. An additional subsample of 320 Kato-Katz negatives (1:1 ratio to the positives) was aimed for to determine the sensitivity of qPCR versus Kato-Katz. Since STH infections are staggeringly prevalent on Pemba Island, we only found 180 helminth negative individuals within the screening phase.
Correlation between microscopic egg count and DNA copy number
Correlation between copy numbers/µl DNA according to qPCR and egg count numbers derived by the Kato-Katz thick smear method was assessed as a basis for sensitivity and specificity estimates. Spearman's rank correlation coefficients r S were calculated for each species and each time point among the samples which were positive according to both diagnostic methods. The degree of agreement was categorised as "poor" (r S < 0.2), "fair" (0.2 ≤ r S < 0.4), "moderate" (0.4 ≤ r S < 0.6), "good" (0.6 ≤ r S < 0.8) and "very good" (r S ≥ 0.8) agreement [42].
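A minimal sketch of this step is shown below, using SciPy's Spearman correlation together with the agreement categories just defined; the paired values are illustrative placeholders, not trial data.

```python
from scipy.stats import spearmanr

def agreement_category(r_s: float) -> str:
    """Map a Spearman coefficient onto the agreement categories used in the text."""
    if r_s < 0.2:
        return "poor"
    if r_s < 0.4:
        return "fair"
    if r_s < 0.6:
        return "moderate"
    if r_s < 0.8:
        return "good"
    return "very good"

# Illustrative paired measurements (egg counts vs DNA copies), not trial data
egg_counts = [120, 340, 80, 560, 210, 95, 410]
dna_copies = [1500, 5200, 900, 6100, 4300, 700, 3900]

r_s, p_value = spearmanr(egg_counts, dna_copies)
print(f"r_s = {r_s:.2f} ({agreement_category(r_s)}), P = {p_value:.3f}")
```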
Diagnostic method variability between samples
To assess the agreement of test results between baseline day 1 and day 2 and follow-up day 1 and day 2, for qPCR and Kato-Katz, Spearman's rank correlation of copy numbers and egg counts, respectively, was performed among all samples which were found positive according to both techniques. An alternative assessment was based on Cohen's Kappa, comparing positivity of qPCR and Kato-Katz between baseline day 1 and day 2 and between follow-up day 1 and 2, including negative and positive test results. The κ-statistics were categorised in the same way as the rank correlation coefficients r S .
Overall sensitivity of Kato-Katz and qPCR
The sensitivity was determined assuming a 100% sensitivity and specificity of each diagnostic method, as disclosed by the morphology of the eggs or by the species-specific qPCR assays. Sensitivities of qPCR relative to Kato-Katz and vice versa were calculated for baseline and follow-up separately and for both time points combined. A qPCR test at baseline or follow-up was considered positive if at least one of the two samples taken on the respective consecutive days provided a positive result. The 95% confidence intervals for sensitivities across both time points were computed using a logistic regression model with robust standard errors adjusting for longitudinal correlations of test results within individuals.
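Ignoring the confidence-interval machinery, the relative sensitivity defined here is a simple proportion; the sketch below computes it for two hypothetical sets of positivity calls (True = positive), with each method treated as the reference in turn.

```python
def relative_sensitivity(method_a, reference):
    """Sensitivity of method A relative to a reference method: among samples
    positive by the reference, the fraction that method A also calls positive."""
    called = [a for a, ref in zip(method_a, reference) if ref]
    return sum(called) / len(called)

# Illustrative positivity calls per sample (True = positive); not trial data
kato_katz = [True, True, False, True, False, False, True, False]
qpcr = [True, True, True, True, False, True, True, False]

print(relative_sensitivity(qpcr, kato_katz))             # qPCR relative to Kato-Katz -> 1.0
print(round(relative_sensitivity(kato_katz, qpcr), 2))   # Kato-Katz relative to qPCR -> 0.67
```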
Cure rates according to Kato-Katz and qPCR
CRs were calculated as the proportion of participants negative for infection (EPG or transformed DNA copy equalling zero) in both follow-up stool samples among those who were positive at baseline in any sample. Moreover, CRs assessed by qPCR were also calculated considering only the first follow-up sample to assess if test result reliability affects CRs. Logistic regression models were used to compare CRs between different treatment arms. Comparisons of CRs between the qPCR and the Kato-Katz method also required the use of robust standard errors adjusting for correlations of outcomes within subjects. Statistical significance of observed differences or associations was defined as a two-sided P-value smaller than 0.05.
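As a back-of-the-envelope illustration of these two outcome measures, the sketch below computes a cure rate and an unadjusted odds ratio from hypothetical 2x2 counts; the trial itself estimated ORs with logistic regression and robust standard errors.

```python
def cure_rate(n_cured: int, n_baseline_positive: int) -> float:
    """CR: proportion negative in both follow-up samples among baseline positives."""
    return n_cured / n_baseline_positive

def odds_ratio(cured_a: int, n_a: int, cured_b: int, n_b: int) -> float:
    """Unadjusted OR for being cured under treatment A versus treatment B,
    taken straight from the 2x2 table of cured vs not-cured counts."""
    return (cured_a / (n_a - cured_a)) / (cured_b / (n_b - cured_b))

# Illustrative counts only, not the trial's results
print(round(cure_rate(75, 160), 3))            # CR in a hypothetical arm A
print(round(odds_ratio(75, 160, 37, 160), 2))  # OR, arm A vs arm B
```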
Results
Two stool samples of 320 T. trichiura positive participants, partially co-infected with A. lumbricoides and hookworm at baseline and two stool samples 14-21 days post-treatment were processed by both, Kato-Katz and qPCR method. As negative controls, two faecal samples of 180 individuals negative for STH eggs as assessed by Kato-Katz were analysed (Fig. 1).
Overall positivity agreement according to Kato-Katz and qPCR for all four examination time points pooled
In total, 1020 samples were positive for T. trichiura according to Kato-Katz and 1134 were positive for T. trichiura according to qPCR. There were 394 samples with discordant results, 254 where only the qPCR result was positive and 140 where only the Kato-Katz result was positive. Results for A. lumbricoides and hookworm showed even more pronounced differences, with most discordant tests being positive for qPCR and negative for Kato-Katz (Table 1). Further correlation results are provided in Table 2. Both correlations had P-values > 0.2, so the possibility that they are chance findings cannot be ruled out.
Diagnostic method variability between samples
To assess the variability within one diagnostic method, the agreements between baseline day 1 and day 2 as well as between follow-up day 1 and day 2 were calculated using Spearman's rank correlation, including all positive test results according to both techniques. qPCR showed moderate agreement for T. trichiura (r S = 0.51) and good agreement for A. lumbricoides (r S = 0.62) and hookworm (r S = 0.64) at baseline. At follow-up, moderate agreement for T. trichiura (r S = 0. 45) and hookworm (r S = 0.49) and good agreement for A. lumbricoides (r S = 0.71) was found. Kato-Katz results showed moderate agreement for T. trichiura (r S = 0.49) and good agreement for A. lumbricoides (r S = 0.65) and hookworm (r S = 0.70) between the two baseline samples. Moderate agreement between the follow-up samples was shown for T. trichiura (r S = 0.60) and poor agreement for hookworm (r S = − 0.55) was observed. Agreement could not be assessed for A. lumbricoides at follow-up as no samples were positive by both methods (Table 3).
Additionally, Cohen's Kappa was used to assess reproducibility including positive and negative test results between baseline day 1 and day 2 and between follow-up day 1 and day 2. qPCR test results between baseline and between follow-up samples showed moderate agreement for T. trichiura (κ = 0.57, 0.42) and hookworm (κ = 0.58, 0.52). For A. lumbricoides, qPCR showed good agreement between baseline (κ = 0.63), but only poor agreement (κ = 0.1) between follow-up samples. Kato-Katz showed very good agreement for T. trichiura (κ = 0.92) and A. lumbricoides (κ = 0.93) and good agreement for hookworm (κ = 0.72) between baseline samples. The agreement between follow-up samples was moderate for T. trichiura (κ = 0.56) and hookworm (κ = 0.56). However, there was poor agreement (κ = − 0.004) for A. lumbricoides, possibly because only a few positive samples were found by the Kato-Katz technique (Table 4).
Overall, qPCR results showed greater variability between both baseline and follow-up samples compared to Kato-Katz, especially when looking only at the positivity and negativity of samples and using Cohen's kappa coefficient.
Overall sensitivity of Kato-Katz and qPCR
Across all comparisons between the two methods, the sensitivity of qPCR in detecting positive samples according to Kato-Katz was higher than the respective sensitivity of Kato-Katz in detecting positive samples according to qPCR (Table 5). When pooling baseline and follow-up results, the pooled sensitivity of qPCR relative to Kato-Katz was 93.7% for T. trichiura, 84.4% for A. lumbricoides and 88.4% for hookworm, while the sensitivity of Kato-Katz relative to qPCR was 79.5%, 30.4% and 35.9% for T. trichiura, A. lumbricoides and hookworm, respectively. Interestingly, the sensitivity of Kato-Katz relative to qPCR significantly dropped from 38.5% (31.6-45.8) at baseline to 3.4% (0.4-11.9) at follow-up in the case of A. lumbricoides.
Cure rates according to Kato-Katz and qPCR
Cure rates for T. trichiura were significantly lower with albendazole monotherapy than with combination therapy (ivermectin-albendazole) with both diagnostic methods, while CRs were comparable for A. lumbricoides and hookworm. CRs of the combination therapy according to Kato-Katz were slightly higher. Odds ratios (ORs) were calculated for the combination therapy (ivermectin-albendazole) compared to albendazole. The odds of being cured were significantly higher under combination therapy as compared to monotherapy in T. trichiura positives, irrespective of the diagnostic approach and the number of qPCR follow-up stool samples. CRs and ORs according to Kato-Katz and qPCR are listed in Table 6.
Discussion
We evaluated the diagnostic performance of qPCR compared to standard Kato-Katz microscopy for the diagnosis of STHs and the resulting treatment efficacies by both methods. This study was done within the framework of a phase III, parallel group, double blind RCT assessing the efficacy and safety of the current standard treatment (albendazole) versus a combination therapy of ivermectin-albendazole. This was the first study to analyse two stool samples before and two stool samples after treatment of each participant in order to assess between-sample variability and hence the reproducibility of test results of both diagnostic methods. Moreover, we present data on the efficacy of the most promising therapy to date for treating STH infections, ivermectin-albendazole, based on molecular diagnosis.
An interesting finding from the comparison of these two diagnostic methods is that an additional 41.2% of microscopy-negative samples were found T. trichiura-positive when assessed by qPCR. We hypothesise that the lower DNA loads found in Kato-Katz-negative samples reflect higher detection rates by qPCR due to a higher sensitivity rather than a lower specificity of the qPCR assays, as remaining DNA of already dead worms or eggs can still act as template DNA during qPCR [43].
We found that qPCR results indicate higher sensitivity for all species across all examination days compared to Kato-Katz, which substantiates previous findings [17,27,30,34,[44][45][46]. Interestingly, follow-up Kato-Katz results differ significantly compared to the baseline results in the case of A. lumbricoides, indicating a time-dependent difference. The very low number of Kato-Katz positive test results post-treatment show the difficulty of detecting low A. lumbricoides worm burden by use of Kato-Katz, while qPCR was able to detect a considerable number of treatment failures.
Of note, the eligibility criteria (minimal T. trichiura egg count ≥ 100 EPG, 2 or more out of 4 Kato-Katz slides positive) for trial inclusion did not consider low T. trichiura infection intensities. Interestingly we observed that qPCR positivity of Kato-Katz-negative samples (EPG = 0) at baseline was lower compared to follow-up, implying higher sensitivity for qPCR when assessing low infection intensities. However, interpretation requires caution, as it is unclear how long residual DNA persists after parasite clearance leading to false-positive qPCR results [18,47].
Our results highlight that the combination therapy (ivermectin-albendazole) shows a significantly better efficacy compared to the monotherapy for T. trichiura with both diagnostic methods, while CRs were comparable for A. lumbricoides and hookworm. However, the observed low to moderate CRs for ivermectin-albendazole with qPCR (23.2% for T. trichiura, 75.3% for A. lumbricoides and 52.4% for hookworm) are far from benchmark target product profiles for anthelminthic drug candidates and combinations and highlight the need to develop novel efficacious treatments. A particularly striking difference in CRs of the combination chemotherapy (and for A. lumbricoides after albendazole treatment) between Kato-Katz and qPCR was observed when two qPCR samples were considered post-treatment. These results also stress the need to analyse two qPCR samples post-treatment in clinical trials to elucidate the true efficacy of treatments. Although hypothesised, we only observed a fair to moderate agreement between microscopic egg count and DNA copy number, which is in agreement with findings from Barda et al. [33]. Our results do not corroborate the observation of Mejia et al. [18], who found a significant, good correlation (r = 0.7) between egg counts measured by the coprological Kato-Katz method and the DNA quantified by qPCR for A. lumbricoides and T. trichiura. The reason for this discrepancy is not entirely clear, but one partial explanation could be that only a few A. lumbricoides- or hookworm-positive samples were found post-treatment according to Kato-Katz in our study and that infection intensities were relatively low in these participants. As there is no strong correlation between egg counts and DNA copy number, finding a real gold standard for practical use remains a profound challenge, hampering the comparison of STH diagnostic tools.
Table 3 Spearman's rank correlations of copy numbers (qPCR) and egg counts (Kato-Katz), respectively, between baseline day 1 and day 2 and between follow-up day 1 and day 2 (restricted to positive test results)
As egg excretion is highly variable over time, there is considerable variation in EPG of faecal samples collected on consecutive days [34][35][36]. It is well known that Kato-Katz shows improved sensitivity when performed on several samples on different days [48,49]. We observed that Kato-Katz showed very good agreement for T. trichiura and A. lumbricoides and good agreement for hookworm at baseline between day 1 and day 2, whereas qPCR only showed good agreement between day 1 and day 2 for A. lumbricoides, but not for T. trichiura and hookworm, indicating lower test result reproducibility of the qPCR method. This apparently high correlation of Kato-Katz test results might be explained by the laboratory technicians' skills, as the same well-trained microscopists were reading the stool samples every day, in addition to the rather high T. trichiura parasite load (EPG ≥ 100 as inclusion criterion). The reason for the surprisingly low qPCR test reproducibility is not entirely clear; however, Pilotte et al. [50] found that common qPCR assays make use of suboptimal target sequences limiting detection and species-specificity. Another explanation could be that, in contrast to bacteria and viruses, isolation of parasite DNA out of faecal samples is a challenging process, as the wall of helminth eggs is difficult to lyse and thus several additional steps are needed to achieve release of nucleic acids [12,51,52]. Moreover, we based our analyses on DNA copies/µl as the qPCR parameter for infection intensity, while consensus has not been reached on the optimal qPCR parameter with regard to reliability and reproducibility assessment.
Table 6 Comparison of efficacy in terms of cure rates (CRs) and odds ratios (ORs) for being cured between treatment arms (albendazole vs ivermectin-albendazole), by diagnostic approach (Kato-Katz on samples of day 1 and 2 vs qPCR on the first day sample only and qPCR on samples of day 1 and 2). a P-values of the odds ratio (OR) for being cured between albendazole monotherapy (ALB) and ivermectin-albendazole (IVM-ALB) derived from logistic regression models. Note: CRs in bold highlight significant differences (P < 0.05) between CRs assessed by Kato-Katz and two qPCR samples in the respective treatment arm (i.e. ALB or IVM-ALB)
We are aware that a number of limitations might have influenced the results obtained. The sample input volumes of the Kato-Katz assays are considerably larger than those of the qPCR assays, which might substantially increase sensitivity for Kato-Katz given the stochastic distribution of eggs in stool samples. On the other hand, increasing the stool volume for DNA extraction would not be possible, as faecal specimens contain various substances acting in a qPCR inhibitory manner [53]. Another limitation is that, in contrast to Kato-Katz, qPCR samples were not analysed in duplicates. Further research needs to be performed to elucidate if and for how long residual DNA may persist after parasite clearance as this might lead to false-positive qPCR results post-treatment [18,29,47]. Furthermore, we fully agree with Levecke et al. [54], that an agreement on an absolute universal unit for qPCR is needed to establish best comparison parameters for these two diagnostic methods. Lastly, it is important to note, that the standardisation and adherence to one approved protocol would help to achieve more readily comparable results between different research laboratories.
Conclusions
The sensitive and scalable nature of qPCR makes its usage in large-scale diagnosis of intestinal helminths appealing over the rather operator-dependent microscopic method. DNA samples can be stored for further use, such as genetic characterisation and molecular typing, which might be of interest in surveillance studies to detect sporadic and focal infections or to monitor disease recrudescence. The evidence from this study implies statistically lower CRs (the primary outcome of this trial) for the combination therapy (ivermectin-albendazole) for all three species when assessed with two qPCR samples compared to Kato-Katz. Thus, it underlines the importance of the need for standardised and accurate molecular diagnostic tools, which are applicable in peripheral field settings, for future monitoring within STH control and/or elimination programmes and for developing novel efficacious treatments. This study has revealed for the first time, that qPCR test results show greater between day variability for baseline as well as post-treatment samples compared to Kato-Katz calling for a multi-sample analysis approach in order to improve qPCR-based diagnosis. This is of particular importance for studies aiming at assessing accurate disease prevalence as well as treatment efficacy. However, it needs to be carefully evaluated if the obtained higher sensitivity comes at the cost of the lower test reproducibility and how important this finding is in the context of preventive chemotherapy and the surveillance of low prevalence settings. Daily stool sample analyses to monitor dynamics of DNA copy numbers over a longer period post-treatment using the qPCR method might be one way forward to answer this question.
|
2020-10-16T05:04:00.829Z
|
2020-10-15T00:00:00.000
|
{
"year": 2020,
"sha1": "02acd46321b8e1ddb5227bb0a5747c0a839f4560",
"oa_license": "CCBY",
"oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/s13071-020-04401-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d2ee53147bb07d8959d4d974282c68a55f939f7d",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
118891801
|
pes2o/s2orc
|
v3-fos-license
|
High-contrast Aharonov-Bohm oscillations in the acoustoelectric transport regime
Phase-coherent acoustoelectric transport is reported. Aharonov-Bohm oscillations in the acoustoelectric current with visibility exceeding 100% were observed in mesoscopic GaAs rings as a function of an external magnetic field at cryogenic temperatures. A theoretical analysis of the acoustoelectric transport in ballistic devices is proposed to model experimental observations. Our findings highlight a close analogy between acoustoelectric transport and thermoelectric properties in ballistic devices.
The pioneering work by Aharonov and Bohm had far-reaching fundamental implications even beyond its objective of highlighting the significance of potential with respect to force in quantum mechanics. Aharonov-Bohm (AB) interference [1] was observed in several mesoscopic systems [2][3][4][5] and exploited or proposed as a tool to investigate electron coherent dynamics at the nanoscale [6][7][8]. Unfortunately its implementation in practical devices is hindered by the rather low contrast of the conductance oscillations achievable in solid-state devices. Even at sub-K temperatures in the linear transport regime, contrast is typically limited to a few percent of the background signal owing to the sizable fraction of electrons that propagate incoherently.
In this Letter we investigate electronic phase coherence in mesoscopic AB rings in the acoustoelectric-transport regime, i.e. when electrons propagate owing to piezoelectric interactions between acoustic waves traveling on the device surface and conduction-band electrons. Our data and analysis highlight the interplay between coherence and acoustoelectric transport.
Very-high-contrast AB oscillations will be shown in the acoustoelectric current, even though the characteristic coherence length in this transport regime turns out to be comparable to that of the linear-regime case. The theoretical analysis presented here to describe the experimental investigation demonstrates that, at cryogenic temperatures and in the energy range of interest here, the acoustoelectric current shows the same behavior as a current generated by a thermal gradient across the device. Our results establish a close link between acoustoelectric and thermoelectric transport properties and open the way to the study of thermoelectric effects in nanodevices with the much simpler approach characteristic of acoustoelectric experiments.
AB rings were fabricated starting from a twodimensional electron gas (2DEG) confined 90 nm below the surface of a modulation-doped GaAs/Al 0.3 Ga 0.7 As heterostructure. At a temperature T = 4.2 K the unpatterned 2DEG density and mobility were 2.1 × 10 11 cm −2 and 1.7 × 10 6 cm 2 /Vs, respectively. The ring geometry was defined by shallow plasma etching. The same processing step yielded a set of lateral gates (labeled G 1 through G 6 ) that provide control over the electron density in each part of the device. A scanning electron microscopy (SEM) image in artificial colors of one of our devices is shown in Fig. 1. Standard Ni/AuGe/Ni/Au (5 nm/180 nm/5 nm/100 nm) n-type Ohmic contacts (not shown in the figure) were fabricated to allow electrical access to the 2DEG. Aluminum interdigitated transducers (IDT) were evaporated to generate surface acoustic waves (SAWs). Devices with ring radii of 500 nm (device R500) and 750 nm (device R750) and IDTs tuned at resonance frequencies of 1.5 and 3 GHz (corresponding to SAW wavelengths of 2 and 1 µm) were fabricated and studied in a 3 He cryostat at a base temperature of 350 mK.
The coherent-transport properties of the devices were first assessed in the linear regime. The injection contact (lead L 1 ) was biased with an ac excitation signal (V ex ) at 15.7 Hz with an amplitude of 30 µV to avoid heating effects. The output current from lead L 2 was detected by means of a current preamplifier in series with a lock-in amplifier. A blocking capacitor between the excitation source and L 1 was employed to remove any unwanted dc component of the bias. The conductance of one of the devices at 1.7 K as a function of the voltage applied to gates G 2 and G 5 (data not shown) displayed plateaus demonstrating ballistic transport across the ring, which in turn made it possible to determine the number of one-dimensional subbands available for transport in each arm of the ring. Data presented in the following were collected in the regime where only one propagating subband is available in each arm. This choice maximizes the AB-oscillation contrast.
The zero-bias conductance of device R500 measured at 400 mK as a function of the magnetic field is reported in Fig. 2a. It displays oscillations with a period of ∼ 5.1 mT as determined by Fourier transforming the data (see Fig. 2b), and a contrast (defined as the peak-to-peak amplitude divided by the background value) of ∼ 1 %. The observed period corresponds to h/e AB oscillations for a ring with an effective radius of 507 nm, in good agreement with the sample geometry. The higher-order harmonics appearing in the Fourier spectrum shown in Fig. 2b can be exploited to estimate the electronic coherence length [13]. The amplitude of each peak was evaluated with a Lorentzian-fit procedure. The center of each peak provides an estimate of the length of the corresponding closed electronic path. The exponential fit of the peak amplitude as a function of the electronic-path length yields a coherence length of λ c ∼ 2.2 µm, a value consistent with what is expected for a high-mobility 2DEG at 400 mK.
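The link between the oscillation period and the ring size is a one-line calculation: one flux quantum h/e threading the effective ring area corresponds to one oscillation period. A minimal sketch:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
E = 1.602176634e-19  # elementary charge (C)

def ab_ring_radius(period_tesla: float) -> float:
    """Effective ring radius from the h/e Aharonov-Bohm period:
    one flux quantum h/e threads the enclosed area per period."""
    enclosed_area = (H / E) / period_tesla
    return np.sqrt(enclosed_area / np.pi)

print(ab_ring_radius(5.1e-3) * 1e9)  # ~508 nm for the 5.1 mT period quoted above
```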
In order to investigate the acoustoelectric regime SAWs were generated by exciting the IDT incorporated in each device. Experiments were performed in the pulsed regime to avoid spurious effects due to cross-talk and SAW reflections off sample edges [14,15]. In the present case, the effects of cross talk and reflections were found to be minimized by using a pulse period of 2500 ns and pulse width of 300 ns. Acoustoelectric current across R500 measured at 1.7 K is shown as a function of the excitation frequency f in the inset of Fig. 3. The peak at f = 2.985 GHz corresponds to SAW generation at IDT resonance.
The acoustoelectric current was measured as a function of the magnetic field at T = 400 mK. Results obtained on R500 and R750 are presented in Figs. 3a, 3b and 3c. Both devices exhibit pronounced oscillations. The periods of the observed AB oscillations correspond to ring radii of 505 ± 10 nm and 710 ± 10 nm for R500 and R750 respectively. These values match within experimental error the measured lithographic radii of the devices (500 nm and 715 nm respectively) and confirm that the origin of the observed oscillations is the magnetic AB effect. To the best of our knowledge, this is the first demonstration of the coherence of SAW-driven electronic transport in AB rings. By applying the same procedure adopted to analyze the linear-transport regime, the data yield a coherence-length value of ∼ 2.4 µm, consistent with the linear regime. Note that although the experimental acoustoelectric coherence length is comparable to the linear-transport coherence length, the contrast of the AB oscillations in the two transport regimes is very different, being of the order of 1% in the latter case and exceeding 100% in the presence of SAWs.
Fig. 3 (inset): acoustoelectric current at T = 1.7 K, generated by a 1-µm-periodicity IDT across R500, as a function of the RF excitation frequency. The excitation signal was modulated with pulses with a width of 300 ns and a period of 2500 ns.
This peculiar phenomenology can be clarified and successfully modeled by taking into account the nature of acoustoelectric transport in ballistic devices. Following Ref. [16], SAWs can be schematized as a monochromatic phonon flux characterized by wavevector q = 2π/λ SAW and energy ℏqv SAW . Piezoelectric interactions between electrons confined in a one-dimensional channel and SAWs generate the acoustoelectric current due to momentum transfer to electrons, which can be introduced in the system Hamiltonian as a first-order perturbation term. The solution of the Boltzmann equation demonstrates that the electron-phonon interaction is more efficient for electrons propagating in the same direction as the acoustic wave (set positive for clarity) and that a particularly strong electron-phonon coupling occurs when the SAW velocity matches the Fermi velocity in one of the subbands of the 1D channel (v SAW = v F = ℏk F /m *). In the case of GaAs devices this condition occurs close to a subband pinch-off, leading to "giant" acoustoelectric current peaks [17]. Experimental data [17] also show that even off-resonance acoustoelectric current is detectable, even if lower in intensity owing to the weaker coupling. This "off-resonance" regime is the one relevant for the present analysis since data were collected by biasing gates in order to operate the device at the first conductance plateau. In this situation the Fermi energy (E F ) can be estimated as E F ∼ 3 meV based on the measured charge density, corresponding to a Fermi velocity v F = √(2E F /m *) ∼ 10^5 m/s, i.e. two orders of magnitude higher than the SAW velocity (v SAW ≃ 3000 m/s) in GaAs. In this regime the electron Fermi wavevector k F ≃ √(2m * E F )/ℏ ∼ 10^8 m −1 is one order of magnitude higher than the SAW wavevector q = 2π/λ SAW ∼ 10^7 m −1 , therefore not allowing phonon-induced backscattering of electrons. Moreover, electron-phonon interactions are, to a first approximation, restricted to electrons within E F − ∆E and E F + ∆E, where ∆E = max(∆E ph , k B T ). Here ∆E ph is the energy gained by an electron after absorbing a phonon, i.e. ℏqv SAW . In our case ∆E ph ≃ 10 µeV, corresponding to a temperature of ≃ 0.1 K.
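These order-of-magnitude estimates are easy to reproduce; the short sketch below recomputes them from the quoted E_F ∼ 3 meV, the GaAs effective mass, and a 1 µm SAW wavelength, all taken as assumed inputs for illustration.

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
KB = 1.380649e-23           # J / K
ME = 9.1093837015e-31       # kg
E_CHARGE = 1.602176634e-19  # C
M_EFF = 0.067 * ME          # GaAs effective mass

E_F = 3e-3 * E_CHARGE             # Fermi energy, ~3 meV (estimate quoted above)
v_F = np.sqrt(2 * E_F / M_EFF)    # Fermi velocity
k_F = M_EFF * v_F / HBAR          # Fermi wavevector
q = 2 * np.pi / 1e-6              # SAW wavevector for a 1 um wavelength
V_SAW = 3000.0                    # SAW velocity in GaAs, m/s
dE_ph = HBAR * q * V_SAW          # energy gained by absorbing one SAW phonon

print(f"v_F ~ {v_F:.1e} m/s")                     # ~1e5 m/s, far above v_SAW
print(f"k_F ~ {k_F:.1e} 1/m, q ~ {q:.1e} 1/m")    # k_F roughly ten times q
print(f"dE_ph ~ {dE_ph / E_CHARGE * 1e6:.0f} ueV ~ {dE_ph / KB:.2f} K")
```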
Under these conditions, the Pauli exclusion principle makes SAW-phonon absorption more efficient for the electrons propagating in the same direction as the SAW [18]. This leads to a larger perturbation of the Fermi distribution of these electrons (f_>) with respect to the distribution of the electrons propagating in the opposite direction (f_<; for simplicity we shall neglect the latter perturbation): f_> = f_0 + ∆f, f_< ≃ f_0, where f_0 is the equilibrium Fermi function and ∆f represents the change in f_0 induced by the SAW. Here, ∆f is non-negligible only in the range E_F − ∆E < E < E_F + ∆E and assumes positive (negative) values at energies higher (lower) than E_F. Conservation of electron number implies that ∆f has zero average (∫_0^{+∞} ∆f dE = 0). The resulting acoustoelectric current generated across a constriction can then be written as

I_ac ∝ ∫_0^{+∞} ∆f(E) T(E) dE,    (1)

where energies are measured from the bottom of the first 1D subband and T(E) is the electron transmission probability across the constriction, accounting for interference effects.
The resulting integral, defined over an energy window of order ∆E, suppresses the Fourier components of T(E) that oscillate in energy on scales much smaller than ∆E. Neglecting their contribution to the integral and keeping only the slowly varying components of T(E), the acoustoelectric current can be written as

I_ac ∝ ∫_0^{+∞} ∆f(E) T(E) dE,    (2)

where T(E) now includes only the low-frequency Fourier components of the transmission probability.
This equation can be further simplified by substituting T(E) with its first-order approximation around E_F, since, by definition, T(E) varies slowly in the integration domain.
The acoustoelectric current can therefore assume negative values, depending on the slope of T(E), consistently with the acoustoelectric counterflow shown in Fig. 3. It is instructive to compare this result to the expression for the conductance given by the Landauer-Büttiker formalism at low temperatures, G = (2e²/h) T(E_F). The reason for the high visibility of the acoustoelectric AB oscillations is immediately apparent. In fact, owing to the derivative with respect to E, the slowly varying, non-coherent contribution is not present in the acoustoelectric current, so that ∂T/∂E(E_F) shows AB oscillations with a visibility higher than that of T(E_F), even if both regimes are governed by the same coherence length.
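The visibility argument can be illustrated with a toy transmission function consisting of a large smooth background plus a small flux-periodic interference term; the numbers below are purely illustrative and are not fitted to the measured traces.

```python
import numpy as np

# Toy transmission through the ring: smooth background plus a weak AB modulation
# whose phase winds rapidly with energy (interference term).
T0, t_ab, dB = 0.9, 0.01, 5e-3        # background, AB amplitude, AB period (tesla)
B = np.linspace(0.0, 0.05, 2001)
ab_phase = 2 * np.pi * B / dB

def visibility(signal):
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

# Linear conductance ~ T(E_F): the AB oscillation rides on the large background.
G = T0 + t_ab * np.cos(ab_phase)

# Acoustoelectric current ~ dT/dE(E_F): the smooth background varies slowly with
# energy, so its derivative is negligible next to that of the interference term,
# whose phase winds quickly with E. Keep only the oscillating part (arbitrary units).
I_ac = -t_ab * np.sin(ab_phase)

print(f"conductance AB visibility : {100 * visibility(G):.1f} %")
print("acoustoelectric signal    : oscillates around zero and changes sign,")
print("                            i.e. its contrast exceeds 100 %")
```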
In conclusion, we should like to remark that the present model not only clarifies the peculiar behavior of acoustoelectric coherent transport, but also establishes an unexpected link between the acoustoelectric and thermoelectric transport regimes. This relationship is clear if we compare the acoustoelectric current given by Eq. 1 with the thermoelectric coefficient B ∝ ∫_0^{+∞} T(E) (−∂f/∂E)(E − E_F) dE. Since (−∂f/∂E)(E − E_F) is functionally analogous to the perturbation ∆f defined in an energy range ∆E = k_BT around E_F, in the weak-coupling regime described by Eq. 2 the acoustoelectric current is proportional to the thermoelectric coefficient evaluated at temperature T = ∆E/k_B. This finding suggests that the study of thermoelectric effects in nanoscopic circuits can be successfully carried out with the much simpler experimental approach characteristic of acoustoelectric investigations.

* e.strambini@utwente.nl; Now at University of Twente, Twente, The Netherlands
|
2013-09-05T07:01:22.000Z
|
2013-09-05T00:00:00.000
|
{
"year": 2013,
"sha1": "78035150f0ad221c8a4ef531cbddd2710ba2d401",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "78035150f0ad221c8a4ef531cbddd2710ba2d401",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
73697144
|
pes2o/s2orc
|
v3-fos-license
|
Statistical modeling of interannual shoreline change driven by North Atlantic climate variability spanning 2000–2014 in the Bay of Biscay
Modeling studies addressing daily to interannual coastal evolution typically relate shoreline change with waves, currents and sediment transport through complex processes and feedbacks. For wave-dominated environments, the main driver (waves) is controlled by the regional atmospheric circulation. Here a simple weather regime-driven shoreline model is developed for a 15-year shoreline dataset (2000–2014) collected at Truc Vert beach, Bay of Biscay, SW France. In all, 16 weather regimes (four per season) are considered. The centroids and occurrences are computed using the ERA-40 and ERA-Interim reanalyses, applying k-means and EOF methods to the anomalies of the 500-hPa geopotential height over the North Atlantic Basin. The weather regime-driven shoreline model explains 70% of the observed interannual shoreline variability. The application of a proven wave-driven equilibrium shoreline model to the same period shows that both models have similar skills at the interannual scale. Relation between the weather regimes and the wave climate in the Bay of Biscay is investigated and the primary weather regimes impacting shoreline change are identified. For instance, the winter zonal regime characterized by a strengthening of the pressure gradient between the Iceland low and the Azores high is associated with high-energy wave conditions and is found to drive an increase in the shoreline erosion rate. The study demonstrates the predictability of interannual shoreline change from a limited number of weather regimes, which opens new perspectives for shoreline change modeling and encourages long-term shoreline monitoring programs.
Introduction
Sandy coasts are complex environments that are under increasing threat posed by anthropogenic pressures and climate change. Shoreline change is governed by myriad nonlinear physical processes interacting through complex feedbacks covering a wide range of spatial and temporal scales (Stive et al. 2002), challenging model developments. Although several complex process-based morphodynamic models have been developed in recent decades, simulations at large temporal scales, i.e., years, are still hardly reliable. Shoreline evolution on timescales from hours (cf. storms) to years has recently been simulated with fair skill using wave-driven empirical equilibrium-based models (e.g., Davidson and Turner 2009;Yates et al. 2009;Davidson et al. 2013;Castelle et al. 2014;Splinter et al. 2014a). These models can also reproduce the interannual shoreline variability that sometimes exceeds the seasonal variability (e.g., Castelle et al. 2014). However, model skills strongly depend on the availability and quality of wave data. The characteristics of waves reaching the coast depend strongly on remote surface atmospheric circulation (e.g., Bacon and Carter 1993;Young 1999;Woolf et al. 2002;Le Cozannet et al. 2011;Charles et al. 2012a;Martínez-Asensio et al. 2016). Because waves are the primary driver of shoreline change along most coastlines, interannual shoreline variability is expected to be related to interannual large-scale atmospheric dynamics. Therefore, directly using atmospheric conditions as inputs in shoreline models appears as an appealing approach. This reduced-complexity strategy may also implicitly account for other drivers such as mean water level fluctuations (Ruggiero et al. 2001;Serafin and Ruggiero 2014).
Using a simple approach, Kuriyama et al. (2012) revealed that about 45% of the interannual shoreline variability measured at a NW Pacific Ocean beach can be attributed to large-scale climate fluctuations described through a combination of teleconnection pattern indices. Barnard et al. (2015) recently gave new evidence that large-scale atmospheric circulation patterns control unusual, local storm-driven shoreline change around the Pacific Basin, with enhanced erosion along the NW American coast and the SE Australian coast caused by extreme El Niño and La Niña, respectively. Studies focusing on NE Atlantic sandy coasts and climate variability have already highlighted the existence of a relationship between the North Atlantic Oscillation teleconnection (NAO) and beach sand bar states (e.g., Masselink et al. 2014) or alongshore sediment transport (e.g., Silva et al. 2012; Idier et al. 2013). However, none of these studies addresses the potential link between the large-scale atmospheric circulation and shoreline variability. In addition, these studies used teleconnection pattern indices to characterize the large-scale atmospheric circulation, as they are freely available online and easy to use. However, it is also possible to describe the large-scale atmospheric circulation and its variability by so-called weather regimes.
Weather regimes are recurrent and persistent atmospheric circulation patterns. They are usually identified by cluster analysis (Michelangeli et al. 1995) applied to daily fields of mean sea-level pressure or geopotential height (at a given pressure level) taken over an area of interest. Using this approach, the North Atlantic synoptic circulation can be accurately characterized, as atmospheric data located over the oceanic basin only are used for the weather regime computation (Cassou et al. 2004; Barrier et al. 2013, 2014). In this paper, a simple weather regime-driven shoreline model is implemented to investigate shoreline interannual variability at Truc Vert beach, Bay of Biscay, SW France. A set of 16 seasonal weather regimes (four per season) is computed for the North Atlantic Basin and the shoreline model is tested against a shoreline dataset covering a 15-year period from 2000 to 2014. The relation between weather regimes, waves and shoreline evolution, as well as the model skills, are discussed.
Physical setting
Truc Vert is a meso-macrotidal double-barred open beach backed by high and wide coastal dunes (Fig. 1a, b). The sediment consists of fine to medium sand with a mean grain size of about 0.35-0.40 mm. Truc Vert is exposed to high-energy, seasonally modulated waves generated over the North Atlantic Ocean with a mean significant wave height H s of 1.7 m, a mean peak wave period of 10.3 s and a dominant WNW direction (Castelle et al. 2015). Summer is characterized by the dominance of NW short waves whereas longer and larger waves coming from the WNW prevail in winter. H s can episodically exceed 8 m during severe winter storms with a peak wave period often larger than 15 s (Castelle et al. 2015). This is illustrated in Fig. 2e based on a time series of 3-hourly H s offshore of Truc Vert beach with the superimposed 90-day moving average over the period 2000-2014 using the wave data described in Castelle et al. (2014).
The North Atlantic atmospheric circulation is characterized by eastward-tracking extra-tropical low-pressure systems over the North Atlantic Ocean, which is regularly interrupted by the broadening of high-pressure systems. In winter, the North Atlantic atmospheric variability is found to be accurately characterized on a daily basis through the so-called weather regime paradigm (see, for example, Barrier et al. 2014). Four well-defined circulation patterns inherent to the atmospheric dynamics over the North Atlantic Ocean are usually identified (Vautard 1990; Cassou et al. 2004). The zonal regime (ZO) is characterized by a strengthening of the pressure gradient between the Iceland low and the Azores high. The Greenland anticyclone (GA) exhibits an opposite structure with a gradient lowering. ZO and GA correspond to the positive and negative phases of the North Atlantic Oscillation (NAO+ and NAO-), respectively (Cassou et al. 2011). The blocking regime (BL) refers to a situation in which a persistent anticyclone is located over northern Europe and Scandinavia. The Atlantic ridge (AR) is associated with a broadening of the Azores high, and is very close to the negative phase of the East Atlantic pattern (EA; Barnston and Livezey 1987; Cassou et al. 2011). Intensification of the latitudinal pressure gradient over the North Atlantic Ocean typically results in stronger westerly winds promoting energetic waves that propagate toward Truc Vert, whereas persistence of anticyclonic conditions over the North Atlantic Basin results in smaller waves. However, the overall relations between these basin-scale weather regimes and the local wave and shoreline dynamics at Truc Vert have not yet been investigated. This is addressed in this study.
Shoreline data
Two topographic datasets were gathered to produce a shoreline dataset covering the 15-year study period from 2000 to 2014. From March 2000 to March 2005, single beach profiles were collected through various means (e.g., theodolite, DGPS). From April 2005 to December 2014, with a 1-year gap in 2008, topographic surveys were performed every 2-4 weeks at Truc Vert at spring low tide using a centimeter-accuracy Trimble 5700 DGPS. The alongshore coverage increased over the years from about 350-750 m in early 2009 to about 1,500 m from October 2012 onward. The topographic surveys were averaged alongshore to derive a mean beach profile (for more details, see Castelle et al. 2014 and Splinter et al. 2014a). Figure 1c shows the superimposed mean profiles surveyed from 2005 to 2013, where the elevation is given with respect to the local mean sea level, highlighting the large vertical and cross-shore variability.
The shoreline proxy explaining the largest amount of the total beach volume variability was chosen. The total beach volume was computed by integrating all positive elevations above the local mean sea level up to the backbeach, where the topographic elevation remains approximately constant over time, with the total volume at the start of the survey period set to 0. The vertical distribution of the correlation coefficient between the shoreline proxy and the beach volume is reported in Fig. 1d for the 2005-2013 period. The best correlations are obtained for elevations ranging from 1 to 2 m above mean sea level, which approximately corresponds to the mean high water level for neap and spring tides, respectively (Fig. 1c, d). The mean high water level shoreline proxy (1.5 m above the local mean sea level) is used here, which agrees with previous shoreline modeling studies at Truc Vert beach (Castelle et al. 2014; Splinter et al. 2014a). Figure 2f shows the time series of shoreline position at Truc Vert combining the whole dataset. Error bars indicate the alongshore standard deviation of the 1.5-m iso-contour, which is a measure of beach three-dimensionality, with a mean error bar length of 7.9 m. The large variability in error bars starting in 2005 reflects the strong beach three-dimensional variability throughout the years. The shoreline positions prior to April 2005 are also included, despite their low accuracy blurring the seasonal variability. Nonetheless, this supplementary dataset further highlights a striking interannual shoreline signal within the entire 2000-2014 period. The seasonal cycles are generally characterized by a succession of accretional and erosional periods centered on the summer and winter, respectively. Spring and fall are both transition periods that can be either accretional or erosional, although a slight mean accretion trend is found for both seasons. The cross-shore amplitude of the interannual variability is 30 to 40 m, which is similar to the amplitude of the seasonal cycle. The whole dataset appears to capture two full cycles of interannual variability.
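A minimal sketch of how such a proxy can be selected is given below. It assumes the alongshore-averaged profiles are stored as a (surveys × cross-shore points) elevation array on a common, uniform cross-shore grid; the function and variable names are illustrative, not taken from the study.

```python
import numpy as np

def beach_volume(profile, x, z_ref=0.0):
    """Volume per metre alongshore above a reference level (assumes a uniform x grid)."""
    dx = x[1] - x[0]
    return np.sum(np.clip(profile - z_ref, 0.0, None)) * dx

def contour_position(profile, x, z_contour):
    """Cross-shore position of the last crossing of a given elevation contour."""
    above = profile >= z_contour
    idx = np.where(above[:-1] != above[1:])[0]
    return x[idx[-1]] if idx.size else np.nan

def best_shoreline_proxy(profiles, x, candidate_levels):
    """Pick the elevation contour whose position correlates best with beach volume."""
    volumes = np.array([beach_volume(p, x) for p in profiles])
    scores = {}
    for z in candidate_levels:
        pos = np.array([contour_position(p, x, z) for p in profiles])
        ok = ~np.isnan(pos)
        scores[z] = np.corrcoef(pos[ok], volumes[ok])[0, 1]
    return max(scores, key=scores.get), scores
```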
Weather regime computation
Assessment of the North Atlantic climate variability is based on two global atmospheric reanalyses produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). The ERA-40 reanalysis (Uppala et al. 2005) is used to compute the weather regime centroids and their daily occurrence for the 1958-2001 period. As the ERA-40 reanalysis does not cover the 2000-2014 study period, the ongoing ERA-Interim reanalysis (Dee et al. 2011), which started in 1979, is used to compute the daily occurrence of the weather regimes for this study period, assuming the quasi-stationarity of the weather regimes (Michelangeli et al. 1995; Cassou et al. 2004).

[Fig. 1 caption: a Truc Vert beach location (green square) in the Bay of Biscay (gray box), and buoy and wave model grid point used to produce the wave hindcast (blue and yellow dots, respectively). b Aerial view of Truc Vert beach. c Alongshore-averaged beach profiles surveyed at Truc Vert beach from April 2005 to April 2013 (gray curves) and time-averaged mean profile (black curve). d Vertical distribution of the correlation coefficient between shoreline position and total beach volume. HAT, MHWS, MHWN and MSL indicate highest astronomical tide, mean high water spring, mean high water neap and mean sea level, respectively.]
Because of a strong seasonal modulation of the North Atlantic atmospheric circulation, and in turn of the wave energy, weather regimes are computed by meteorological season: winter (December, January and February), spring (March, April and May), summer (June, July and August) and fall (September, October and November). The weather regime centroids are computed as in Sanchez-Gomez et al. (2009), that is, by using the anomaly maps of the 500-hPa geopotential height (Z500) over the North Atlantic Basin (90°W-30°E, 20-80°N) for the 1958-2001 period. The classification into weather regimes is performed by applying the k-means partition algorithm to the data after reducing the number of degrees of freedom using an EOF decomposition (Michelangeli et al. 1995). Only the first 15 principal components are retained, which explain about 90% of the total variance. For each season the optimal Z500 anomaly field partitioning is carried out with four clusters. Once the centroids are defined, for both reanalyses the weather regime occurrence is computed on a daily basis, such that each day is associated with one of the 16 weather regimes. The similarity criterion is based on the Euclidean distance between the daily Z500 anomaly map and the weather regime centroids. Finally, yearly seasonal occurrence is computed to provide the time series of seasonal weather regime occurrence. For each season of each year, the cumulative occurrence of the four seasonal weather regimes always equals 100% of the time, meaning that the time series are interdependent within each season.
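A minimal sketch of this classification pipeline is given below, assuming the daily Z500 anomaly maps are already assembled as a (days × grid-points) array; scikit-learn's PCA and KMeans are used here in place of the original EOF/k-means implementation, and the synthetic input stands in for the reanalysis data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def compute_weather_regimes(z500_anom, n_pcs=15, n_regimes=4, seed=0):
    """Cluster daily Z500 anomaly maps (n_days x n_gridpoints) into weather regimes."""
    pca = PCA(n_components=n_pcs)
    pcs = pca.fit_transform(z500_anom)          # reduce degrees of freedom (~90% variance)
    km = KMeans(n_clusters=n_regimes, n_init=50, random_state=seed).fit(pcs)
    centroids = pca.inverse_transform(km.cluster_centers_)   # regime centroids in map space
    return centroids, pca

def assign_days(z500_anom, centroids):
    """Assign each day to the closest regime centroid (Euclidean distance)."""
    d = np.linalg.norm(z500_anom[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def seasonal_occurrence(labels, n_regimes=4):
    """Fraction of days spent in each regime over one season (sums to 100%)."""
    counts = np.bincount(labels, minlength=n_regimes)
    return 100.0 * counts / counts.sum()

# Synthetic stand-in for one season of reanalysis anomalies (e.g. 44 winters x 90 days)
rng = np.random.default_rng(0)
z500_train = rng.normal(size=(3960, 2500))
centroids, _ = compute_weather_regimes(z500_train)
labels = assign_days(z500_train[:90], centroids)     # one winter season
print(seasonal_occurrence(labels))
```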
Weather regime impact on wave climate
At Truc Vert and along many other wave-dominated sandy beaches, waves are the primary driver of shoreline change. To support the development of a shoreline evolution model driven by weather regimes, the relation between the weather regimes and the local wave climate is explored on seasonal timescales. This relation is assessed by computing correlation maps between time series of seasonal weather regime occurrence derived from the ERA-40 reanalysis and seasonal wave parameter anomalies computed over the Bay of Biscay (bordered by the N Spanish and W French coasts, see Fig. 1) using the BoBWA-10kH wave hindcast (Charles et al. 2012b). The wave parameters used are H s , the mean wave period and the peak wave direction. Because waves are essentially wind-generated, to provide insights into possible relations between weather regimes and waves, the seasonal surface wind modification over the North Atlantic Ocean is assessed by computing and analyzing the 10-m seasonal mean wind maps and the 10-m mean wind composites corresponding to the 16 weather regimes.
Statistical weather regime-driven shoreline model
To investigate the predictability of the interannual shoreline variability, a simple model linking seasonal shoreline change and weather regime occurrence is developed. A limited number of weather regimes is used in comparison with existing studies downscaling wave climatology from atmospheric data (e.g., Camus et al. 2014; Laugel et al. 2014). Here, using a small number of well-defined circulation patterns is a necessary requirement both to ease interpretation and to achieve a robust statistical model setup, as a higher number of circulation patterns would require a longer shoreline dataset.

[Fig. 2f caption: Measured shoreline position using the mean high water level proxy at z = 1.5 m. Gray dots: shoreline positions calculated using single beach profiles; colored dots: shoreline positions calculated by averaging the position of the mean high water level iso-contour over an alongshore distance of approx. 350 m (purple), 750 m (red) and 1,500 m (blue). Error bars: associated cross-shore standard deviation. Gray area: 90-day moving average ± mean cross-shore standard deviation.]

The model considers the shoreline position as an auto-regressive process (i.e., it depends linearly on its own previous values) and assumes that, for a given season, the rate of shoreline change is controlled by a linear combination of the individual weather regime occurrences. With these assumptions, the simulated shoreline position x_mod is calculated on the first day of each season as

x_mod(t + Δt) = x_mod(t) + u_mod Δt,

where t is the time, Δt is the seasonal time step and equals the duration of that season in days (between t and t + Δt), and u_mod is the weather regime-based estimate of the shoreline change rate during that season. u_mod is obtained from the following season-dependent equation:

u_mod = a_season + Σ_i b_i,season WR_i,

where WR_i is the occurrence value of the ith weather regime during a given season, b_i,season is the coefficient associated with WR_i and is season-dependent, and a_season is a season-dependent constant. Values of a_season and b_i,season are calibrated for each season by performing a multiple linear regression between the seasonal weather regime occurrence and the corresponding time series of measured seasonal shoreline change rate over the period spanning April 2005 to December 2014 (hereafter called the calibration period). For each season, only three of the four time series of seasonal weather regime occurrence are used, as these time series are interdependent (see the Weather regime computation subsection). Indeed, the sum of the occurrences of the four weather regimes is 100%, such that the occurrence of the fourth regime can be deduced from the occurrence of the three others. Using the four time series would therefore add redundant data to the multiple linear regressions. A preliminary analysis indicates that changing which three weather regimes are retained causes no change to the model output. Both the measured and simulated shoreline time series are detrended with a linear fit to remove the long-term trend, as this study aims at investigating the interannual variability of shoreline change. The model skill is assessed in terms of the root-mean-square error (RMSE) and coefficient of determination (R²). Since the topographic surveys were performed at irregular intervals and depending on tide range, there is no measurement concurrent with the model output. To assess model skill, each simulated shoreline position is therefore associated with the average of the measured shoreline positions within ±15 days of the simulated position.
As a last processing step, the measured and simulated shoreline position time series are linearly interpolated and further low-pass filtered with a 2-year cutoff frequency to focus on the interannual shoreline variability.
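A minimal sketch of this calibration and simulation chain (seasonal regression, season-by-season integration of the shoreline position, and a 2-year low-pass filter) is given below. It assumes the seasonal regime-occurrence and measured change-rate arrays are already assembled, and uses numpy/scipy rather than whatever software the authors employed.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def calibrate_season(occurrence, change_rate):
    """Multiple linear regression of the seasonal shoreline change rate (m/day)
    on the occurrence (%) of three of the four seasonal weather regimes."""
    X = np.column_stack([np.ones(len(occurrence)), occurrence])   # [1, WR1, WR2, WR3]
    coef, *_ = np.linalg.lstsq(X, change_rate, rcond=None)
    return coef                                                   # [a_season, b1, b2, b3]

def simulate_shoreline(x0, seasons, coefs, occ_by_season, season_lengths):
    """Integrate x(t + dt) = x(t) + u*dt season by season."""
    x = [x0]
    for season, occ, dt in zip(seasons, occ_by_season, season_lengths):
        a, *b = coefs[season]
        u = a + np.dot(b, occ)           # weather regime-based change rate (m/day)
        x.append(x[-1] + u * dt)
    return np.array(x)

def lowpass_interannual(series, dt_days=1.0, cutoff_years=2.0):
    """2-year low-pass filter applied to an (interpolated) daily shoreline series."""
    fs = 1.0 / dt_days
    fc = 1.0 / (cutoff_years * 365.25)
    b, a = butter(4, fc / (fs / 2.0))
    return filtfilt(b, a, series)

# usage idea: coefs = {s: calibrate_season(occ[s], rate[s]) for s in seasons}
```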
Weather regimes and wave climate
The four computed winter centroids (Fig. 3a, e, i, m) are very similar in pattern with those described in the literature (Vautard 1990;Cassou et al. 2004) and introduced in the Physical setting section. For the other seasons, the centroids are characterized by similar anomaly patterns, although some significant shifts of the Z500 anomaly position and magnitude are detected, especially during fall (Fig. 3). For clarity, each centroid is denoted by the name of the most similar winter centroid.
The most significant correlation maps are obtained for the winter and summer seasons (Figs. 4 and 5, respectively). For both seasons, the seasonal wave characteristics off the SW French Atlantic coast appear to be strongly related with the weather regimes. High occurrence of winter and summer ZO is associated with an increase in H s and mean wave period (Figs. 4a, b and 5a, b). During winter and summer, high occurrence of GA is associated with an anticlockwise rotation of the peak wave direction (Figs. 4f and 5f). In addition, high occurrence of winter GA decreases the mean wave period (Fig. 4e) whereas high occurrence of summer GA leads to an increase in H s (Fig. 5a). High occurrence of winter and summer BL is associated with a slight decrease in H s (Figs. 4g and 5g). Finally, high occurrence of winter and summer AR drives a clockwise rotation of the peak wave direction (Figs. 4l and 5l) along with a decrease in H s , which is more pronounced in summer (Figs. 4j and 5j). Figure 6 reveals that the weather regimes appear to strongly modulate the surface wind patterns over the North Atlantic Basin. While some weather regimes are associated with a strong reinforcement of the mean surface circulation at various locations (e.g., ZO), others lead to a decrease in wind magnitude and drive significant change in the mean surface wind direction (e.g., AR).
The time series of seasonal weather regime occurrence from winter 2000 to fall 2014 and derived from the ERA-Interim reanalysis are shown in Fig. 2a-d. The North Atlantic Ocean atmospheric circulation displays a strong variability on both seasonal and interannual timescales.
Model results
The simulated shoreline position in Fig. 7a indicates that the seasonal cycles over the calibration period are well captured by the model, although the cross-shore excursion is slightly underestimated. Over this period, the RMSE and R 2 calculated between the measured and simulated shoreline positions are 8.6 m and 0.61, respectively. Prior to April 2005, the seasonal cycles are still reproduced, with the low-accuracy data over this period preventing relevant model skill assessment. Figure 7b shows the interannual signal contained in both time series, highlighting that the model reproduces the interannual variability from April 2005 to December 2014 with excellent skill (RMSE and R 2 of 5.0 m and 0.93, respectively). From 2000 to the end of 2004, the overall slow accretion trend in the measurements is well captured by the model, although there is a substantial shift between the two signals. Therefore, the interannual variability is also well reproduced over the entire period, with a RMSE of 5.9 m and R 2 of 0.70.
Discussion
Weather regimes, wave climate and shoreline change

A detailed inspection of Fig. 6 shows how the weather regimes can affect the wave climate in the Bay of Biscay. The surface wind modifications induced by the weather regimes have a profound impact on wave generation in the North Atlantic Ocean and, in turn, on the waves reaching the Bay of Biscay. Figure 6a-d indicates that during all seasons ZO occurrence is characterized by above-average surface winds blowing from W to E, such that ZO occurrence should allow energetic swells to develop and propagate toward W European coasts, which agrees with the results found in Fig. 4a, b and Fig. 5a, b. On the contrary, Fig. 6m-p shows that AR occurrence is characterized by a strong reduction or even a disappearance of the W wind component over the central part of the North Atlantic Ocean and by an increase in winds blowing from the N-NW over the Bay of Biscay. These combined effects should limit swell occurrence and favor the formation of NW seas in the Bay of Biscay, explaining the wave patterns depicted in Fig. 4j-l and Fig. 5j-l. Figure 6e, g reveals that the maximal zonal surface circulation is shifted southward for winter and summer GA occurrence, giving a plausible reason for the anticlockwise rotation of wave direction observed in Fig. 4f and Fig. 5f. The interpretation of the wind composites associated with BL occurrence (Fig. 6i-l) is more difficult. However, the slight decrease in H s observed in winter and summer (Figs. 4g and 5g) could be related to the smaller distance over which westerly surface winds blow in the middle of the North Atlantic Ocean.
[Fig. 3 caption: North Atlantic weather regime centroids for winter (December, January, February), spring (March, April, May), summer (June, July, August) and fall (September, October, November). The centroids are computed from the anomaly maps of the 500-hPa geopotential height over the North Atlantic Basin obtained from the ERA-40 reanalysis. ZO, GA, BL and AR: zonal, Greenland anticyclone, blocking and Atlantic ridge regimes, respectively.]

To estimate weather-regime impact on shoreline change on seasonal timescales, correlation coefficients (R) between the time series of the seasonal weather regime occurrence and seasonal shoreline change rate are calculated for the 2005-2014 calibration period. A positive value indicates that the corresponding weather regime reduces the seasonal erosion trend or amplifies the seasonal accretion trend. Most of the obtained correlation values are not statistically significant. However, four seasonal weather regimes have significant correlation with p-values ranging from 0.05 to 0.15: in winter, ZO high occurrence increases erosion rate (R=-0.56); in spring, AR high occurrence is associated with a decreased accretion rate (R=-0.60); in summer, GA and BL high occurrences lead to an increase (R=0.61) and a decrease in accretion rate (R=-0.67), respectively. It has been proven on many wave-dominated coasts that the shoreline change rate is proportional to the incident wave energy and to the disequilibrium between this energy and the equilibrium energy for which the coast is stable (Davidson and Turner 2009; Yates et al. 2009). Thus, the statistically significant relationships identified here may be related to weather regime-driven modulation of incoming wave energy. Change in water level induced by weather regime-driven variations of sea-level pressure (Barrier et al. 2013) and/or onshore winds (Ullmann and Moron 2008; Ullmann and Monbaliu 2010) may also impact shoreline change, as storm wave events coinciding with higher water levels result in higher rates of erosion (Ruggiero et al. 2001; Serafin and Ruggiero 2014). However, this was not verified here as it is beyond the scope of this study.
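A minimal sketch of how such a seasonal correlation and its p-value can be computed is shown below; the input numbers are purely illustrative and are not the study's data.

```python
import numpy as np
from scipy import stats

def regime_vs_shoreline_corr(occurrence, change_rate):
    """Pearson correlation between seasonal regime occurrence (%) and the
    seasonal shoreline change rate (m/day), with its p-value."""
    return stats.pearsonr(occurrence, change_rate)

# Illustrative numbers only (10 winters): high ZO occurrence vs winter change rate
rng = np.random.default_rng(1)
zo_occurrence = rng.uniform(10, 60, size=10)
change_rate = -0.05 * zo_occurrence + rng.normal(0, 0.8, size=10)
r, p = regime_vs_shoreline_corr(zo_occurrence, change_rate)
print(f"R = {r:.2f}, p = {p:.2f}")   # negative R: more ZO days -> stronger erosion
```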
According to the results of the correlation maps (Fig. 4), in winter, only high occurrence of ZO is associated with larger and longer waves offshore of Truc Vert, which is expected to increase beach erosion. During summer, high occurrence of GA is associated with larger waves and an anticlockwise rotation of the peak wave direction (Fig. 5d, f), allowing the wave incidence to be closer to shore-normal. Onshore wave-driven sediment transport requires a minimal amount of incident wave energy to move the eroded sand back onto the beach. It is hypothesized that, for the summer GA weather regime, slightly above-average H s favors beach recovery at Truc Vert. The results also reveal that, during summer, high occurrence of BL is associated with smaller waves (Fig. 5g), and BL occurrence appears to be anti-correlated with GA occurrence in summer (Fig. 2c). By reducing the occurrence of GA and by causing lower-energy conditions, BL high occurrence is expected to slow down beach recovery at Truc Vert. Figure 6n shows that spring AR is characterized by nearly no wind circulation over the central part of the North Atlantic Ocean and light winds blowing from the N over the Bay of Biscay, resulting in very low energy conditions. High occurrence of AR in spring is therefore associated with low incoming wave energy at Truc Vert, which limits post-winter beach recovery.
Shoreline change also strongly depends on antecedent wave conditions (Wright and Short 1984; Davidson et al. 2013; Splinter et al. 2014a) and storm event chronology (Splinter et al. 2014b). Therefore, investigating the individual contribution of the weather regimes to shoreline change on seasonal timescales could be improved by accounting for a "memory" effect. However, the present shoreline dataset spans too short a duration to perform such an analysis.
Weather regime-driven shoreline model
To test the ability of the model to simulate the relationship between the seasonal weather regime occurrence and the interannual shoreline variability, randomization tests are performed. Model input data were replaced by a dataset of 16 random signals following a uniform law with mean and standard deviation similar to the time series of weather regime occurrence. Over 1,000 simulations were performed on random inputs, and the results indicate that the probability to increase model skill is less than 1%. This confirms that the model is not over-calibrated and that there is a physical relation between the combination and succession of the seasonal weather regime occurrence and the interannual shoreline dynamics. According to the above results, large-scale atmospheric fluctuations over the North Atlantic Ocean, described here through the weather regime paradigm, can explain up to 70% of the interannual shoreline variability measured at Truc Vert beach between 2000 and 2014. However, the model underestimates both maxima for erosion and accretion because it solves shoreline change on a seasonal timescale. This is a major limitation compared to equilibrium shoreline models (e.g., Yates et al. 2009) based on time steps of the order of hours. Nonetheless, it is important to note that these models also tend to underestimate maxima for both erosion and accretion (e.g., Splinter et al. 2014a), which can be attributed to the omission of other factors such as tides and sandbar welding to the shore. To compare the model developed here (hereafter referred to as the RO16 model) with existing wave-driven shoreline evolution models, the model of Yates et al. (2009) is used (hereafter referred to as the YA09 model). The setup of the YA09 model is performed following the method in Castelle et al. (2014) using the calibration and simulation periods addressed herein. Results from the YA09 model are superimposed on those from the RO16 model in Fig. 7a. Consistently with the methodology described in the Materials and methods section, the new simulated shoreline position is detrended, linearly interpolated and low-pass filtered with a 2-year cutoff frequency to extract the interannual variability (Fig. 7b). Over the entire study period the interannual variability is well reproduced by the YA09 model, with a RMSE of 6.0 m and R 2 of 0.69. The YA09 model does not perform better during the calibration period, as the RMSE increases to 6.4 m while the R 2 barely changes (0.70).
The RO16 model is more skillful than the YA09 model during the calibration period presumably because of the large number of input variables and best-fit coefficients that ensure an optimized fitting with field data. Another asset of the RO16 model is that it does not need wave hindcast data, which can require much effort (e.g., model setup, computation, validation) for simulations particularly along rugged coastlines. Equilibrium shoreline models such as the YA09 model were developed to explicitly account for beach memory, which has been known for decades to be critical to short-term beach response to a given storm (Wright and Short 1984). Here, the RO16 model is successful in simulating the interannual shoreline change based on the occurrence of weather regimes without using prior weather regime conditions. This suggests that storm chronology and memory effects are much less important for interannual shoreline change than for short-term beach response.
This new modeling approach should be applicable to other North Atlantic wave-dominated beaches for which the local wave climate is modulated by large-scale atmospheric circulation adequately described by the North Atlantic weather regimes. This would require further calibration, as the best-fit coefficients are site-specific. For instance, Martínez-Asensio et al. (2016) demonstrated that while NW European Atlantic coasts experience above-average wave energy conditions during high winter NAO+, the S European Atlantic coasts undergo the opposite (and vice versa for high winter NAO-). This in turn drives opposite shoreline responses, as for instance during the winter of 2009/2010. This winter was associated with very high occurrence of the GA regime (Figs. 2a and 3e) with limited erosion at Truc Vert beach (Fig. 2f), while strong erosion was measured at Levante Beach (SW Spain; Rangel-Buitrago and Anfuso 2011). At the other end of the spectrum, beaches with similar wave exposure may exhibit similar links with weather regimes. Comparative data presented in Masselink et al. (2016a, 2016b) reveal that shoreline change patterns on interannual timescales are very similar at Perranporth beach (SW England) and Truc Vert beach. Future work should involve application of this statistical model to other coasts exposed to waves generated over the North Atlantic Ocean.
Conclusions
This paper introduces the development of a new weather regime-driven shoreline model that explains more than 70% of the shoreline interannual variability observed at a high-energy sandy beach in SW France. This implies that interannual shoreline variability on open sandy coasts can be inextricably linked to natural climatic variability over oceanic basins. Findings from this study are limited to a 15-year shoreline time series at a given site, suggesting the need for continued or new long-term shoreline monitoring programs in contrasting hydrodynamics and geological settings to further test and improve a new generation of weather regime-driven shoreline models.
|
2018-12-27T03:45:14.046Z
|
2016-08-09T00:00:00.000
|
{
"year": 2016,
"sha1": "dfc8d167e5a5a4ebe19abcd2ba927083bfe44ddc",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00367-016-0460-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0ed826c848b9cce9287e6da5da802562b916aad6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
}
|
235303116
|
pes2o/s2orc
|
v3-fos-license
|
Improving the Pharmacological Properties of Ciclopirox for Its Use in Congenital Erythropoietic Porphyria
Congenital erythropoietic porphyria (CEP), also known as Günther's disease, results from a deficient activity of the fourth enzyme of the heme pathway, uroporphyrinogen III synthase (UROIIIS). Ciclopirox (CPX) is an off-label drug, topically prescribed as an antifungal. It has recently been shown that it also acts as a pharmacological chaperone in CEP, presenting a specific activity toward deleterious mutations in UROIIIS. Although CPX is active at subtoxic concentrations, acute gastrointestinal (GI) toxicity was found, due to precipitation of the active compound in the stomach and its subsequent accumulation in the intestine. To increase its systemic availability, we carried out pharmacokinetic (PK) and pharmacodynamic (PD) studies using alternative formulations for CPX. This strategy effectively suppressed GI toxicity in WT mice and in a mouse model of the CEP disease (UROIIIS P248Q/P248Q). In terms of activity, phosphorylation of CPX yielded good results in CEP cellular models but showed limited activity when administered to the CEP mouse model. These results highlight the need for a proper formulation of pharmacological chaperones used in the treatment of rare diseases.
Introduction
Shortcomings in pharmacological properties constitute one of the main sources of failure in clinical trials for drugs that have proven active in preclinical studies [1]. To be safe, a drug must be completely eliminated from the body, ideally shortly after its window of activity [2]. To that end, the drug catabolism (pharmacokinetics, PK) has to be fine-tuned with the biochemical and physiological effect of the drug (pharmacodynamics, PD). Improved PK can be achieved by modification of the active principle, yet this may come at the expense of the drug's PD. Alternatively, a complex formulation may help PK optimization without jeopardizing the active principle's efficacy, but this strategy may be of limited applicability. In this study, we have explored all these strategies on ciclopirox, a topical antifungal that has been repositioned as a potential drug for the treatment of congenital erythropoietic porphyria (CEP), acting as a pharmacological chaperone [3].
Porphyrias, inborn errors of heme biosynthesis, are metabolic disorders, each resulting from the deficiency of a specific enzyme in the heme biosynthetic pathway [4,5]. This group of diseases includes CEP (ICD-10 #E80.0; MIM#263700), also known as Günther's disease [6][7][8]. CEP is autosomal recessive and results from a markedly deficient activity of the uroporphyrinogen III synthase (UROIIIS; EC 4.2.1.75) that leads to the accumulation of type I porphyrins, specifically uroporphyrin I (URO I) and coproporphyrin I (COPRO I) [9]. The accumulation of these porphyrins leads to the specific symptoms of this disease, such as hemolysis, severe anemia, splenomegaly, and disfiguring phototoxic cutaneous lesions [4].
CEP is a mutilating and one of the most severe porphyrias, and it currently has no curative treatment other than bone marrow transplant, an approach that is not devoid of specific risks including infections derived from immunosuppression, toxicity problems derived from chemotherapy, transplant rejection, and, eventually, premature death of the patient [7]. Palliative care includes avoidance of sun exposure, skin care, and avoiding mechanical trauma [10].
We recently demonstrated the medical plausibility of ciclopirox (CPX) for the treatment of CEP, acting as a pharmacological chaperone targeting uroporphyrinogen III synthase [3]. Pharmacological chaperones function by directly binding a folded or partially folded protein to stabilize it and allow completion of the folding process to yield a functional protein [11][12][13]. In turn, CPX is a topical treatment of cutaneous fungal infections and is believed to act as a fungicidal agent by chelating polyvalent metal cations such as Fe 3+ and Al 3+ , resulting in the inhibition of peroxide degradation [14]. CPX binding to UROIIIS stabilizes its structure and reduces its unfolding and degradation with time [3]. Therefore, CPX restores the protein levels of UROIIIS and its activity. The effect of CPX on the activity of UROIIIS was evidenced in cell-based and murine models of CEP [15]. CPX caused a significant decrease in the levels of the toxic porphyrins, particularly URO I and COPRO I, in liver, red blood cells, and urine. Furthermore, it reduced splenomegaly, an indirect measure of reduction in circulating porphyrins [3]. Altogether and considering that CEP is an ultra-rare disease, CPX was granted an orphan drug designation for the treatment of CEP by the FDA (DRU-2018-6297, May 2018) and the European Medicines Agency (EMA/OD/186/17, January 2018).
The PK of CPX was initially described during its development as an antifungal agent [16]. Several studies were performed with oral administration, for instance, at a dose of 1 mg ciclopirox-14C-olamine/kg to rats, or at doses between 10 and 15 mg ciclopirox-14C-olamine/kg to dogs. In such experiments, the preparation of CPX-olamine was either encapsulated as a crystallizate in hard gelatin capsules (for dogs) or dissolved in polyethylene glycol 400 (for rats) [17]. The results indicate that the compound is quickly eliminated in urine (3-6 h). Despite this fast turnover, our own experience and the literature [16] indicate that CPX administered orally to mice results in gastrointestinal toxicity (GI toxicity). Figure 1A shows the effect of CPX accumulation in the gut, with macroscopic bowel inflammation. These results are also consistent with a study in which ciclopirox-olamine administered to rats for 4 weeks at doses up to 300 mg/kg produced gastric irritation and chronic gastritis [18]. This acute toxicity evidences the need for further development of the drug. Herein, we have first developed an NMR-based method for the monitoring of CPX and related compounds in animal models. This method allowed investigation of the activity and catabolism of a CPX prodrug, intended to circumvent the toxicity problems [19]. The prodrug is able to solve the observed GI toxicity without altering the activity of the active principle, but at the expense of the pharmacokinetic profile.
Compounds and Cell Lines
The compound ciclopirox (CPX, 6-cyclohexyl-1-hydroxy-4-methyl-2(1H)-pyridinone) was purchased from Santa Cruz Biotechnology (sc-204688), and phosphorylated ciclopirox (CPXpom, O-phosphoryl-methylene-6-cyclohexyl-1-hydroxy-4-methyl-2(1H)-pyridinone) was synthesized by Charnwood Molecular Ltd. The PD of CPX was studied in HEK CRISPR-Cas UROIIIS-C73R and UROIIIS-P248Q cell lines [20]. Briefly, 60 µM doses of CPX and phosphorylated CPX were administered for 30 min and for 1, 2, and 4 h of exposure. The cells were then counted and harvested. Samples were treated with the same protocol for porphyrin extraction.

Mouse Experiments

For the PK experiments and for each bolus administration method, 10 WT ICR (CD-1®) outbred mice (Envigo) were used. Serum was collected at 1, 2, 4, 6 (7), 12, and 24 h after administration. At 48 (36) h, the mice were sacrificed. Triplicates were collected for each data point. Urine samples were collected at the same intervals. In the PD experiments, the concentration of porphyrins was measured in circulating blood. Blood samples were extracted from the submandibular vein every week, and mice were weighed before each extraction. Mice were treated with oral CPX by gavage to evaluate CPX dose efficiency. In that experiment, CPX was administered every 24 h for 6 consecutive weeks, and the mice were weighed every week. All work performed with animals was approved by the competent authority (Diputación Foral de Bizkaia) following European and Spanish directives. The CIC bioGUNE Animal Facility is accredited by AAALAC Intl.
Sample Preparation for NMR Spectroscopy
An aliquot of 200 µL of murine serum was placed in a 1.5 mL Eppendorf tube. Afterwards, 1.3 mL of MeOH:H2O (2:1 ratio) was added. The mixture was gently shaken until homogeneous. Samples were centrifuged for 30 min at 20,000× g. The supernatant was transferred to a new tube and dried in a SpeedVac; the residue was resuspended in 480 µL of DMSO-d6 containing 1.66 µM DSS (sodium trimethylsilylpropanesulfonate) as an internal reference and placed into a 5 mm NMR tube. Urine samples were collected in an Eppendorf tube and immediately frozen until further use. Samples were thawed on ice and diluted in D2O to a final volume of 450 µL. Then, 1 mM sodium azide was added for sample conservation, 100 µM DSS was added as a reference, and the sample was placed into a 5 mm NMR tube.
NMR Spectroscopy
NMR data were collected on an 800-MHz Bruker Avance III spectrometer equipped with a cryoprobe and on a 600-MHz Bruker Avance III US2 spectrometer. For each sample, a 1D 1H noesygppr1d spectrum was collected (Bruker; FID size 69228, ds 16, ns 1024, d1 3 s, experiment time 55 min). Data analysis was done using the TopSpin 3.5 software (Bruker BioSpin GmbH). Free induction decays were multiplied by an exponential function equivalent to 0.3 Hz line broadening before applying the Fourier transform. All transformed spectra were corrected for phase and baseline distortions and referenced to the DSS singlet at 0 ppm. Chemical shifts for CPXpom, CPXhm, CPX, and CPXglu were predicted in silico [21] and assigned by spiking. Metabolite quantification was carried out by integrating peaks against the added internal reference compound. In case of signal overlap, peak deconvolution (LDCON command) was performed to assign the corresponding peak areas. The final CPX serum/urine concentration was obtained by accounting for each peak's spin system and for the sample dilution performed during sample preparation.
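A minimal sketch of the underlying quantification arithmetic (normalize the per-proton analyte integral to the per-proton integral of the internal reference, then correct for sample dilution) is given below; the proton counts and dilution handling are illustrative assumptions, not the authors' exact processing.

```python
def conc_from_nmr(area_analyte, area_ref, nH_analyte, nH_ref, conc_ref_uM, dilution):
    """Analyte concentration from 1D 1H NMR peak integrals, normalized to an
    internal reference (e.g. DSS) and corrected for protons per peak and dilution."""
    per_proton_analyte = area_analyte / nH_analyte
    per_proton_ref = area_ref / nH_ref
    return per_proton_analyte / per_proton_ref * conc_ref_uM * dilution

# Illustrative numbers only: one CPX doublet (1H) against the DSS singlet (9H),
# 1.66 uM DSS, and the ~2.4x dilution from 200 uL serum into 480 uL DMSO-d6.
print(conc_from_nmr(area_analyte=0.35, area_ref=1.0, nH_analyte=1, nH_ref=9,
                    conc_ref_uM=1.66, dilution=480 / 200))
```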
Porphyrin Extraction
Murine blood samples were obtained from the submandibular vein and collected in ethylenediaminetetraacetic acid (EDTA) tubes; samples were aliquoted and stored at −80 °C. For porphyrin extraction, 300 µL of 6 M hydrochloric acid was added to the cell samples and 200 µL to the blood samples; samples were then sonicated for 3 cycles of 25 s each and incubated at 37 °C for 30 min at 450 rpm. The samples were then centrifuged for 10 min at 10,000× g. The pellet was discarded, and the supernatant was transferred to a centrifugal filter tube (cellulose acetate membrane, 0.22 µm pore size) and centrifuged for 10 min at 4000× g. The samples were then analyzed by HPLC.
HPLC Analysis
Porphyrins from cell lines were separated by HPLC on an ODS Hypersil C18 column (5 µm, 3 mm × 200 mm; Thermo Scientific, Waltham, MA, USA) using an HPLC chromatograph (Shimadzu, Long Beach, CA, USA). Porphyrins were separated with a 60 min gradient elution and a two-component mobile phase consisting of ammonium acetate (1 M, pH 5.16, solvent A) and 100% acetonitrile (solvent B) at a flow rate of 1 mL/min. All analyses were performed at 20 °C, and porphyrins were detected by fluorescence with an excitation wavelength of 405 nm and an emission wavelength of 610 nm.
Pharmacokinetic Analysis
Non-compartmental serum and urine pharmacokinetic parameters for CPXpom and CPX were determined using the SimBiology module in MATLAB. Values from IV administration of the drugs were fitted to an exponential function, C(t) = a·e^(b·t), assuming the maximum concentration at t = 0. For oral administration, values were fitted to a rational polynomial, C(t) = (a1·t + a2)/(t² + b1·t + b2), assuming a value of 0 at t = 0.
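A minimal sketch of these two fits, using scipy's curve_fit in place of the SimBiology workflow and purely illustrative concentration values, is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

def iv_model(t, a, b):
    """Mono-exponential decay for IV administration (maximum at t = 0)."""
    return a * np.exp(b * t)                     # b is expected to be negative

def oral_model(t, a1, a2, b1, b2):
    """Rational-polynomial profile for oral administration (near 0 at t = 0)."""
    return (a1 * t + a2) / (t**2 + b1 * t + b2)

t = np.array([1, 2, 4, 6, 12, 24], dtype=float)          # h after administration
c_iv = np.array([42.0, 30.0, 15.0, 8.0, 1.2, 0.1])       # illustrative serum conc.
c_oral = np.array([6.0, 9.0, 8.0, 5.5, 1.5, 0.2])

p_iv, _ = curve_fit(iv_model, t, c_iv, p0=(60.0, -0.3))
p_oral, _ = curve_fit(oral_model, t, c_oral,
                      p0=(20.0, 0.1, 1.0, 5.0), bounds=(0, np.inf))
print("IV fit   (a, b)          :", np.round(p_iv, 3))
print("oral fit (a1, a2, b1, b2):", np.round(p_oral, 3))
```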
CPX Characterization in Biofluids (Serum and Urine) by NMR Spectroscopy
We first explored the use of NMR for the identification/quantification of CPX and its derivatives in biofluids, in the context of PK studies. NMR spectroscopy is well suited for the xenobiotic characterization of urine and serum as it is quantifiable, reproducible, non-selective, and non-destructive [22,23].
In the liver and other tissues [24], xenobiotic compounds are metabolized by direct compound modifications (phase I biotransformations: oxidation, reduction, hydrolysis, etc.) and by conjugation reactions (phase II biotransformations). For CPX, UDP-glucuronosyltransferase transfers the glucuronic acid component of uridine diphosphate glucuronic acid to CPX to produce the glucuronidated derivative (CPXglu), which is much more soluble and is quickly excreted in the urine [25]. This occurs mostly during the first pass through the liver, and most of the CPX circulating in serum corresponds to CPXglu. Figure 2 shows the assignment of the signals in the mouse serum spectrum, where two doublets at 6.23 and 6.03 ppm are characteristic of CPX (6.5 and 6.41 ppm in urine). These signals belong to the cyclohexyl moiety of the compound and, therefore, they remain unperturbed upon derivatization (i.e., they account for the total circulating CPX, CPXtot). The intensity of the signal can easily be converted into an absolute concentration by normalization with respect to a reference compound (see Materials and Methods). In turn, the doublet at 4.86 ppm was assigned to the glucuronic moiety of CPXglu and can be used to directly quantify this species (5.18 ppm in urine). Of note, only the right half of the doublet can be used, as the left half overlaps with other signals from the serum matrix. The free amount of CPX (CPXfree) is then estimated by the difference CPXfree = CPXtot − CPXglu. It is important to mention that the determination of CPXfree is not very accurate (around 50% error) because it relies on the difference between two concentrations that are each an order of magnitude larger than the result. In any case, our results demonstrate that NMR spectrum analysis can provide a quantitative analysis of CPX catabolism (serum, Table S1) and excretion (urine, Table S2).
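The large uncertainty quoted for CPXfree follows from simple error propagation on the small difference between two large numbers; a sketch with illustrative values (the ~5% per-peak quantification error is an assumption) is given below.

```python
import math

def cpx_free(cpx_tot, cpx_glu, rel_err=0.05):
    """CPXfree = CPXtot - CPXglu, with the absolute errors of the two NMR-derived
    concentrations propagated in quadrature."""
    free = cpx_tot - cpx_glu
    err = math.hypot(rel_err * cpx_tot, rel_err * cpx_glu)
    return free, err

# Illustrative numbers: CPXtot and CPXglu an order of magnitude larger than their
# difference, each quantified to ~5%.
free, err = cpx_free(cpx_tot=100.0, cpx_glu=88.0)
print(f"CPXfree = {free:.0f} +/- {err:.1f}  (relative error ~ {100*err/free:.0f} %)")
```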
CPX Derivatization to Optimize Its Absorption Properties
As discussed previously, CPX is poorly absorbed and rapidly metabolized and excreted via the hepatic route, so it does not attain its full therapeutic potential. Importantly, this is accompanied by acute GI toxicity, as observed in mice (Figure 1A). We hypothesize that the GI toxicity is due to partial precipitation of the drug in the stomach at low pH, which leads to poor absorption and subsequent accumulation in the gut. Such accumulation would increase the effective local exposure of the tissue to the drug above toxic levels. To overcome this problem, we propose the derivatization of CPX to produce more soluble compounds: generating a phosphate prodrug is one of the common approaches for circumventing the poor solubility of a parent drug [26]. We expect that, by introducing an ionizable phosphate group into CPX, the phosphate prodrugs will become highly water soluble. More importantly, it is also expected that the phosphate prodrugs will be readily cleaved into CPX by alkaline phosphatase, an enzyme widely distributed in plasma and a variety of tissues [27].
Direct phosphorylation of the hydroxyl group is unstable and leads to immediate hydrolysis (data not shown). Instead, a phosphoryl-oxo-methylene group (pom) can be chemically conjugated to the given hydroxyl group to yield a stable entity (Figure 3A, CPXpom), as previously described [28]. In principle, CPXpom is more prone to be absorbed by the body, increasing its availability at the cellular and subcellular levels. We first investigated the stability and catabolism of CPXpom using NMR spectroscopy. As already mentioned, the rationale is that CPXpom will be cleaved by phosphatases, but this reaction does not lead directly to CPX but to a hydroxymethyl derivative (CPXhm). This species is chemically unstable, and the hemiacetal is spontaneously hydrolyzed to release CPX and formaldehyde. Remarkably, NMR spectroscopy can detect all the species of this chemical process in serum samples (Figure 3B), with the assignment of protons that unequivocally correspond to the different species under consideration. Of note, we also observed the de novo appearance of a chemical shift characteristic of the formic acid proton, most likely a by-product of the oxidation of the released formaldehyde (Figure 3A).
In summary, NMR spectroscopy emerges as a powerful methodology to investigate the absorption, distribution, metabolism, and excretion (ADME) of CPX derivatives.
PK and Toxicity Studies of the CPX Derivatives
We then investigated the PK properties of an oral administration of CPXpom (gavage) and compared it to the direct administration of the active substance CPX. For each of the formulations, we employed 100 mg/kg, a CPX dose that results in GI toxicity. We also included an intravenous (IV) administration in the experimental design, so the F-value can be estimated. Serum was collected after the administration at 1, 2, 4, 6 (7), 12, 24, and 48 h. The main conclusion of the study is that CPXpom does not result in GI toxicity. Consistently, no toxicity was observed in another study that administered the prodrug for 30 days to evaluate its PD properties. As shown in Figure 1B, this administration results in no macroscopic inflammation of the gut and no other associated symptom was observed for the treated mice.
The PK data are summarized in Figure 4 (for serum) and Table 1 (integrated data for serum and urine) and Table S3. All the quantities refer to CPXtot (i.e., the sum of CPXfree and CPXglu). CPXpom does not appear in the NMR spectrum, so its circulating concentration must be below the detection limit of the technique, while CPXfree is estimated according to Equation (1). The results in serum show that CPXpom slightly increases the peak absorption of CPXtot: a C max value of 52.87 µg/mL for CPXtot when CPXpom was administered, as compared to CPXtot = 43.4 µg/mL when CPX was administered instead. We attribute these differences to a faster absorption of the prodrug in the intestine, as also suggested by AUC urine 0-24h (139 vs. 127 mg/h·mL). Yet, in all cases, CPXtot ≈ CPXglu, suggesting that the prodrug is quickly converted into the active drug and subsequently into the catabolic by-products, also consistent with the absence of peaks for CPXpom in the NMR spectrum.
The shape of the PK profile in serum is also altered when comparing CPX and CPXpom administrations. Indeed, the abovementioned C max increase upon CPXpom administration is also accompanied by a reduction in the T 1/2 of almost an hour, underlining an overall change in the PK profile between the drug and the prodrug administration. Again, we hypothesize that CPX administration results in the accumulation of the compound in the gut followed by a more gradual absorption. Altogether, the PK analysis suggests that CPXpom may modify some of the ADME properties as compared to the administration of the active principle alone.
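For readers who want to reproduce this kind of non-compartmental analysis, the sketch below shows how Cmax, Tmax, a trapezoidal AUC and an F-value could be derived from serum concentration-time data. The sampling times follow the scheme described above, but the concentrations are invented placeholders rather than the measured values, and the simple linear trapezoidal rule is only one of several possible AUC estimators.

```python
import numpy as np

# Sampling times (h) as used in the study design; the concentrations (ug/mL)
# below are invented placeholders, not the measured serum data.
t = np.array([1.0, 2.0, 4.0, 6.0, 12.0, 24.0, 48.0])
c_oral = np.array([30.0, 52.9, 40.0, 25.0, 8.0, 1.5, 0.2])   # oral CPXpom
c_iv   = np.array([90.0, 70.0, 45.0, 30.0, 10.0, 2.0, 0.3])  # intravenous CPX

def auc_trapezoid(t, c):
    """Area under the concentration-time curve, linear trapezoidal rule."""
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

c_max, t_max = c_oral.max(), t[c_oral.argmax()]
auc_oral, auc_iv = auc_trapezoid(t, c_oral), auc_trapezoid(t, c_iv)

# Oral bioavailability (F-value); the doses were equal (100 mg/kg),
# so the dose-normalisation term cancels out.
f_value = auc_oral / auc_iv

print(f"Cmax = {c_max:.1f} ug/mL at Tmax = {t_max:.0f} h")
print(f"AUC oral = {auc_oral:.0f}, AUC IV = {auc_iv:.0f} ug*h/mL, F = {f_value:.2f}")
```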
PD Studies of the CPX Derivatives
We first tested the derivatives on different cellular models of the disease, obtained from HEK cells by CRISPR/Cas9 editing [3]. The selected mutations (UROIIIS C73R and UROIIIS P248Q ) result in destabilized proteins and severe phenotypes [29]. As shown in Figure 5, incubation of the cellular models with both, CPX and CPXpom, significantly reduced the levels of the toxic by-product URO I. This is the case for UROIIIS C73R and UROIIIS P248Q ( Figure 5C). Docking studies with the CPXpom prodrug predict a poor interaction with the binding site of the enzyme, because the N-oxide moiety of the compound (absent in CPXpom) is essential to generate stabilizing interactions with the protein. Consistently, equivalent cellular studies with the non-hydrolysable CPX homolog mimosine showed no activity (data not shown), highlighting the relevant role of the N-oxide moiety in the interaction with UROIIIS. Thus, the reported activity for CPXpom is attributed to a proper hydrolysis of the compound into the active species. The absence of a lag phase in the kinetic experiment ( Figure 5B) indicates that this hydrolysis has to be fast, consistent with the reported literature [30].
We then tested the prodrug in a mouse model of the disease (UROIIIS P248Q/P248Q), administering the compound orally by gavage for 30 days. As indicated before, no GI toxicity was observed ( Figure 1B), likely due to the absence of accumulation of the active compound in the intestine. Yet, the compound was much less efficient in reducing the circulating levels of toxic porphyrins, and only a 5% reduction in total UROI levels in serum was observed after 30 days, as compared to the 40% reduction observed for an equivalent dose of CPX for the same period. We hypothesize that, among other mechanisms, perhaps UGT enzymes could also glucuronate CPXhm at the hydroxymethyl group, thus limiting the capacity of the prodrug to release the active principle.
Discussion
The currently validated method for CPX quantification relies on drug separation and quantification using high-pressure liquid chromatography (HPLC) [31]. Yet, pure CPX has no optimal properties for HPLC separation, and it requires derivatization, which is accomplished by methylating the weak acidic N-hydroxyl group (pK = 7) of the 1-hydroxy-2(1H)-pyridones with dimethyl sulfate. The resulting 1-methoxypyridones presents a normal chromatographic behavior on silica [31]. Unfortunately, this methodology is not directly applicable to the CPX-related prodrugs and, even for pure CPX, it requires the use of standard compounds for quantification. For that reason, we decided to use NMR spectroscopy, which turned out to be a useful analytical method to investigate CPX catabolism in biofluids (urine and serum). Isolated signals in the spectrum allowed quantification of the different CPX species and the identification of some transient metabolites such as CPXhm (Figure 2).
We have then addressed the problem of GI toxicity associated with the oral administration of CPX, an antifungal recently repurposed as a pharmacological chaperone active in CEP. The use of phosphorylated prodrugs (i.e., CPXpom) adequately minimized the GI toxicity problem, validating the hypothesis of accumulation of the active principle in the gut due to its poor solubility. The PK studies show an altered PK profile when administering CPXpom as compared to CPX, with increased C max and reduced T 1/2 , while maintaining similar (but not identical) clearance rates in urine at 24 h. Considering the scenario where CPX accumulates in the gut region, an uneven absorption between both formulations could explain the differences observed in the PK profiles.
The PD experiments show disparate results between cellular lines and animal models. While CRISPR/Cas-modified HEK cells show a significant reduction in accumulated toxic porphyrins ( Figure 5), an equivalent UROI/COPRO I reduction does not happen in mice after the administration of the prodrug. We attribute this effect to a glucuronidation mechanism targeting CPXhm that would limit the amount of CPXfree that can be released from the prodrug. In any case, these results underline the importance of using animal models in drug discovery, to account for all the complexity provided by the organism.
In summary, the results presented in this study evidence the putative problems for an oral administration of ciclopirox, in line with previous observations [16]. Yet, we also demonstrate that simple modifications in the form of a prodrug to improve solubility may overcome the problem of GI toxicity due to a local accumulation of the drug. Even though the here proposed prodrug does not provide optimal efficacy in its use as a pharmacological chaperone for CEP, other formulations may be able to optimally deliver the drug at high therapeutic efficacy. We are actively pursuing this goal.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jpm11060485/s1, Figure S1: Urine CPX concentrations (µg/mL) as a function of time, as determined by NMR spectroscopy, Table S1: Serum Ciclopirox concentrations (µg/mL), Table S2: Urine Ciclopirox concentrations (mg/mL), Table S3: Serum CPX pharmacokinetic parameters in mice following single IV or oral doses of CPXpom and CPX.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of CIC bioGUNE (48/901/000/6106, protocol code P-CBG-CBB-0117 approved on 26/01/17).
Informed Consent Statement: Not applicable.
Conflicts of Interest: ATLAS molecular Pharma is developing the drug ciclopirox for its use in congenital erythropoietic porphyria.
|
2021-06-03T06:17:21.966Z
|
2021-05-28T00:00:00.000
|
{
"year": 2021,
"sha1": "4fb4513af59bff0f9de4db654ee48cda6790e4f0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4426/11/6/485/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a307015a4a5b0f43f6d1630e8d8c971b1b54112",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
133353740
|
pes2o/s2orc
|
v3-fos-license
|
Onboard measurements of pressure pulsations in a low specific speed Francis model runner
Over the last years, there have been several incidents of cracks in high head Francis turbines. These cracks are understood to be related to pressure pulsations, vibration modes and the combination of these. In this paper, a setup for the investigation of pressure pulsations in a low specific speed model turbine with the use of onboard pressure sensors is presented. Earlier onboard measurements have mainly utilized blade-mounted sensors. In this paper, a setup with hub-mounted pressure sensors is described. In addition, a position sensor is utilized to analyse the pressure data relative to the angular position of the runner. The setup is considered a good reference for computational fluid dynamics validation and a less extensive alternative to blade-mounted sensors for evaluating onboard pressure pulsations.
Introduction
Some new power plants with installed Francis turbines have experienced breakdowns after a few hours of operation. The design and calculations of Francis runners are based on numerical analysis, but the main problem is the validation of the results regarding fluid-structure interaction in the runner. For a better understanding of the physics behind this problem, measurements must be performed for the validation of the numerical results.
Measurements involving moving fluids and transient properties, such as pressure pulsations, can be severely influenced by the mounting method of the sensor [1,2]. Today, pressure sensors with high accuracy and small sizes are available with a flush-mounted diaphragm. For applications where accurate flush mounting is possible, the uncertainty from mounting related to hole size, transmission tubes and cavities is removed [3]. The time and frequency response of the measurements is then only related to the dynamic properties of the diaphragm and the acquisition chain, as described in the ISA standard "A Guide for the Dynamic Calibration of Pressure Sensors" [4]. In the current measurements, flush mounting of the sensors was selected to reduce uncertainty related to the mounting method.
To analyze the pressure in the runner channel, the main method found for onboard pressure measurements is the use of miniature blade-mounted sensors. Several studies have utilized onboard measurements with blade-mounted miniature sensors [5][6][7][8][9][10][11]. Kobro et al. did onboard measurements on the same runner as described in this paper, but the complexity of the setup and the durability of the sensors were not satisfactory [12]. Another concern is the possibility of mechanical influence on the pressure sensors if they are mounted on thin blades. The setup presented utilizes a measurement method with hub-mounted pressure sensors to analyze the pressure pulsations onboard a low specific speed Francis model runner. The data acquisition in the rotating domain can be done with different methods, including telemetry, slip-ring, onboard acquisition and a combination of onboard acquisition and digital transfer with a slip-ring. In the presented setup, a parallel-sampling data acquisition system for all measurements was selected to avoid uncertainties related to time synchronization; hence a multi-channel analog slip-ring system was used.
In addition to the onboard pressure measurements, a setup for continuous angular position measurement of the runner is presented. The objective of the measurements is the analysis of onboard pressure relative to runner angular position.
Experimental setup
The Francis test-rig available at the Waterpower Laboratory at the Norwegian University of Science and Technology was used for the experimental studies [13,14]. The Francis turbine in the test-rig is shown in Figure 1. The Francis turbine was equipped with all required instruments to conduct model testing according to IEC 60193 [15]. The total number of pressure taps in the experimental setup was 24. In this paper, the focus was on four sensors mounted in the runner. Figure 2 shows the locations of the onboard pressure sensors in the turbine (R1, R2, R3 and R4). The onboard sensors were mounted in the runner crown. Due to space restrictions and the number of channels in the slip ring, custom amplifiers were built and mounted onboard the runner.
To analyze the pressure values onboard the runner relative to the stationary frame, a position sensor (Z) was added to the shaft. The sensor was a digital encoder with 13-bit resolution. The digital position signal was converted to an analog +/-10V saw tooth to reduce the number of leads in the cable, and for easier synchronization with the other analog values in the DAQ system. The position sensor is shown in Figure 3.
Data acquisition
The data acquisition system (DAQ) for the onboard measurements was built with the use of a slip-ring. This was chosen to enable full time-synchronisation between the stationary and rotating domains, which was of great importance when relating the onboard measurements to the runner position measured in the stationary domain. This approach does, however, introduce longer signal transfer with analog voltage and is therefore more susceptible to noise. This could be reduced with differential signal transfer, but with a limited number of channels in the slip-ring, single-ended data transfer with common ground was selected. A differential signal transfer, where each channel has signal and reference, would be more noise resistant. A comparison between single-ended and differential signal transfer confirmed this. Nevertheless, in the uncertainty analysis, the added noise did not affect the total uncertainty in the measurements. The input to the DAQ system had low-pass filters for anti-aliasing, and the total number of channels in the measurement campaign was 50.
The onboard amplifiers were designed with programmable-gain instrumentation amplifiers and a precision voltage reference for the excitation voltage to the sensors. The amplifiers were designed with a dual power supply to utilize the full range of the +/-10V input to the DAQ system. One amplifier and a connector board are shown in Figure 4. The amplifiers were mounted inside the runner, i.e., as close as possible to the signal source, to improve noise resistance.
Measurements
The setup presented in this paper, was utilized for several measurements and operational conditions. The results for this paper are based on the measurement presented in Table 1.
Post processing methods
The position sensor was used to analyse the onboard measurements relative to the stationary domain. The raw signal from the position sensor was a +/-10V saw tooth signal representing one revolution of the runner, as presented in Figure 6. The digital-to-analog conversion of the position from the encoder was operating in transparent mode, meaning all changes of position were continuously updated on the analog output. This gave glitches on the signal which needed to be filtered. A local regression smoothing filter was used. The signal was then converted to a continuously increasing position vector, and the first derivative was calculated to find the speed vector. Figure 6. Position sensor signal processing. a) Saw tooth raw data from sensor. The signal includes some noise and glitches from the digital to analog conversion (circled). b) The signal is filtered using local regression smoothing. c) A continuous vector is created from the saw tooth signal by adding 360º for each drop. d) First derivative of the signal gives the speed vector.
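A minimal numerical sketch of this processing chain is given below using synthetic data. The sampling rate and runner speed are assumed values, and a median filter stands in for the local regression smoothing; it removes the single-sample glitches from the digital-to-analog conversion while keeping the saw-tooth drop sharp enough to detect.

```python
import numpy as np
from scipy.signal import medfilt

fs = 5000.0     # assumed sampling rate (Hz)
rpm = 380.0     # nominal runner speed, used only to synthesise a test signal
t = np.arange(0.0, 2.0, 1.0 / fs)

# Synthetic +/-10 V saw tooth: one ramp per revolution (0-360 degrees).
angle = (360.0 * rpm / 60.0 * t) % 360.0
raw_v = angle / 360.0 * 20.0 - 10.0
raw_v[250::977] += 8.0          # occasional glitches from the transparent-mode DAC

# 1) Glitch removal (median filter as a stand-in for local regression smoothing).
clean_v = medfilt(raw_v, kernel_size=5)

# 2) Convert to degrees and build a continuously increasing position vector
#    by adding 360 degrees at every saw-tooth drop.
deg = (clean_v + 10.0) / 20.0 * 360.0
drop = np.concatenate(([0.0], (np.diff(deg) < -180.0).astype(float)))
position = deg + 360.0 * np.cumsum(drop)

# 3) The first derivative of the position vector is the speed vector.
speed = np.gradient(position, t)                     # degrees per second
print(f"mean speed = {speed.mean() / 6.0:.1f} rpm")  # 6 deg/s = 1 rpm
```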
Calibration and uncertainty
The pressure sensors were initially calibrated in the estimated pressure range for the measurements using the guidelines of the German Calibration Service [16]. This guideline is in accordance with the ISO guide to the uncertainty of measurements [17]. To ensure accuracy, the whole measurement chain is taken into account in the calibration, in accordance with the recommendations in IEC 60193 [15]. For practical reasons, the slip ring was not spinning during calibration, but a separate test was performed with a constant precision voltage source without any added uncertainty. The effect of runner rotation was tested in air by spinning the runner at rated speed, but the influence was found to be negligible. The calibration constants for each sensor were found with linear regression, and the deviation between the calibration reference and the sensor output was used for the estimation of uncertainty. To further evaluate the long-time stability and temperature sensitivity of the sensors, substitute calibrations were conducted in zero-flow conditions at the start and stop of each measurement day. The substitute sensor was calibrated and mounted on the draft tube cone. Figure 7 shows the calibration results for pressure sensor R1, and Table 2 the corresponding results for BEP. The expanded uncertainties are calculated with a coverage factor of 2, which for a measurand with normal distribution represents a coverage probability of approximately 95%. For the evaluation of amplitudes, which is a dynamic property, the static calibration may not be valid [18]. If the frequency response function of the system is known, the dynamic uncertainty could be modelled [19]. In the current measurement setup, all sensors are stated to have resonance frequencies above 25 kHz. Frequencies of interest are below 1.2% of resonance, hence it is assumed that the dynamic uncertainty is negligible and only repeatability and hysteresis from the static calibration remain in the uncertainty evaluation due to covariance [20]. In Figure 8, the 95% absolute repeatability in kPa for the calibrated points for sensor R1 is presented. To analyse the repeatability of the experiments and the test rig, BEP was recorded at the beginning and end of each day the measurements were performed. The 95% probability limits of the difference between reference and each sensor were calculated as presented in Table 2. The uncertainty budget for the RSI amplitudes is presented in Table 3. The uncertainty of the position measurement is related to the linearity of the position sensor (0.05º), the conversion rate of the digital-to-analog converter (negligible), and signal noise and post-processing filtering (0.4º). The uncertainty related to signal noise and post-processing was found from the difference between the raw signal and the filtered signal. In addition, the anti-aliasing filter of all other sensors gave a time delay, which added an uncertainty as a function of rotational speed (0.2º at 380 rpm). The total maximum absolute position uncertainty was 0.45º.
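The calibration and uncertainty logic can be sketched as follows on synthetic data: a linear regression gives the calibration constants, the deviations between the reference and the regression give a standard uncertainty, and a coverage factor of 2 expands it to an approximately 95% interval. All numbers below are illustrative placeholders, not the laboratory calibration data.

```python
import numpy as np

# Synthetic calibration points: reference pressures (kPa) and the raw sensor
# output (V); both are illustrative placeholders.
p_ref = np.linspace(80.0, 220.0, 15)
volts = 0.02 * p_ref + 0.5 + np.random.normal(0.0, 0.004, p_ref.size)

# Calibration constants from linear regression (voltage -> pressure).
slope, offset = np.polyfit(volts, p_ref, deg=1)
p_fit = slope * volts + offset

# Deviation between reference and regression used for the uncertainty estimate;
# a coverage factor of 2 corresponds to ~95 % coverage for a normal distribution.
residuals = p_ref - p_fit
u_standard = residuals.std(ddof=2)      # two regression coefficients were fitted
U_expanded = 2.0 * u_standard

print(f"calibration: p = {slope:.2f} * V + {offset:.2f} kPa")
print(f"expanded uncertainty (k = 2): +/- {U_expanded:.3f} kPa")
```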
Results and discussion
The small speed variation in Figure 6d will, if not properly handled, give possible spectral leakage in a fast Fourier transform. To remove the effect of small changes in the rotational speed, the measured onboard signals were resampled to a fixed-rate position signal. With small speed variation, the rotational steps of the position vector were nonuniform. The nonuniform steps are represented by the speed vector in Figure 6d, since speed is the first derivative of position. The resampling process interpolates the measurands to a fixed number of equally spaced sample points per revolution. To verify the resampling process, FFTs of the signal were calculated before and after resampling. Figure 9a is for reference, with ten flattop windows overlapping by 50%. The results give good amplitude prediction for both signals, but the frequency resolution is low. It is well known that longer windows in an FFT will give higher frequency resolution, but varying frequencies will give spectral leakage and thereby reduce amplitude accuracy [20]. This is shown in Figure 9b, where a single window is used for a 30-second measurement. As seen in the figure, the resampled signal is unaffected by the leakage and maintains the correct amplitude. The conversion to the positional domain is therefore considered particularly useful when evaluating speed-dependent frequencies in variable speed measurements with short-time FFT. Figure 9. FFT comparison of time domain data and resampled position domain data for the guide vane passing fundamental frequency. a) Reference calculation to show unaltered amplitude prediction b) Longer windows for higher frequency resolution, unaffected amplitude accuracy for the resampled signal. Figure 10 shows the measured pressure for R1-R4 for one revolution of the runner. A moving average was calculated for each sensor with a window length equal to ten revolutions of the runner to avoid filtering of the frequency content of the signal. An uncertainty band was added according to the uncertainties presented in Table 2. To analyse the frequencies in the signals, the Fast Fourier Transform (FFT) with the Welch method was used. The frequency with predominant amplitude is the guide vane passing frequency, as shown in Figure 11. Figure 11. FFT of R1 in the time domain normalized to the runner frequency. The grey shaded area represents uncertainty according to Table 3.
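A sketch of the resampling step on synthetic data is shown below: a pressure signal recorded against a slightly drifting speed is interpolated onto a grid with a fixed number of samples per revolution, and a Welch spectrum with a flattop window then gives amplitudes on a frequency axis in multiples of the runner frequency. The guide-vane count, runner speed, noise level and window lengths are assumptions, not the model-test values.

```python
import numpy as np
from scipy.signal import welch

# Synthetic test case: a signal dominated by the guide-vane passing frequency
# (assumed 28 x runner frequency) sampled while the speed drifts slightly.
fs, n_gv = 5000.0, 28
t = np.arange(0.0, 30.0, 1.0 / fs)
speed_hz = 6.33 * (1.0 + 0.01 * np.sin(2.0 * np.pi * 0.2 * t))   # small speed variation
position = 2.0 * np.pi * np.cumsum(speed_hz) / fs                 # rad
pressure = np.sin(n_gv * position) + 0.05 * np.random.randn(t.size)

# Resample the measurand to a fixed number of equally spaced points per
# revolution; this removes the speed-dependent spectral leakage.
samples_per_rev = 256
rev = position / (2.0 * np.pi)
rev_grid = np.arange(rev[0], rev[-1], 1.0 / samples_per_rev)
pressure_pos = np.interp(rev_grid, rev, pressure)

# Welch spectrum with a flattop window for accurate amplitude estimation;
# with fs = samples_per_rev the frequency axis is in multiples of the runner frequency.
f_rev, pxx = welch(pressure_pos, fs=samples_per_rev, window="flattop",
                   nperseg=8 * samples_per_rev, scaling="spectrum")
amp = np.sqrt(2.0 * pxx)          # peak amplitude of a sinusoidal component
print(f"dominant component at {f_rev[np.argmax(amp)]:.2f} x runner frequency, "
      f"amplitude ~ {amp.max():.2f}")
```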
By dividing the angular position into 360 sectors, the pressure for each rotational degree of the runner was analysed. In Figure 12, the mean pressure in each sector for 191 revolutions is presented for sensor R1. The standard deviation for each sector was also calculated and indicated as a 95% interval. This analysis provides information on the pressure for both random and systematic quantities.
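One compact way to implement the sector analysis is sketched below on synthetic data: the runner angle is quantised into 360 one-degree sectors, and the mean and standard deviation of the pressure are accumulated per sector, with the 95% interval taken as ±1.96 standard deviations (one plausible reading of the interval used in Figure 12). The angle grid, the dominant 28th harmonic and the noise level are assumptions used only to generate test data.

```python
import numpy as np

# Synthetic input: pressure samples and the corresponding runner angle (deg)
# over 191 revolutions; purely illustrative data.
rng = np.random.default_rng(1)
angle = np.tile(np.arange(0.0, 360.0, 0.5), 191)
pressure = np.sin(np.radians(28 * angle)) + 0.1 * rng.standard_normal(angle.size)

# Divide the revolution into 360 sectors of one degree and accumulate the
# per-sector mean and standard deviation of the pressure.
sector = np.floor(angle).astype(int) % 360
count = np.bincount(sector, minlength=360)
mean = np.bincount(sector, weights=pressure, minlength=360) / count
sq_dev = np.bincount(sector, weights=(pressure - mean[sector]) ** 2, minlength=360)
std = np.sqrt(sq_dev / (count - 1))
ci95 = 1.96 * std     # ~95 % interval for the pressure in each sector

print(f"sector 0: mean = {mean[0]:.3f}, 95 % interval = +/- {ci95[0]:.3f}")
```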
Conclusion
The measurement setup is considered to give valuable data for CFD verification and the study of onboard pressure in the runner. Combined with a CFD analysis, this could provide valuable information for explaining the physics in the runner channel. The position resampled signal is considered to increase the accuracy of measurement analysis. For the mean pressure, the uncertainty of the measurements was mostly affected by the zero stability of the sensors and the repeatability of the measurements. The evaluation of fluctuating quantities is less affected by the uncertainties in the mean pressure. The RSI frequency is found to be the frequency with predominant amplitude in the channel.
|
2019-04-26T14:16:15.590Z
|
2019-03-27T00:00:00.000
|
{
"year": 2019,
"sha1": "d2cc03c3cb18b5e410d0293166208cd4efa93bd1",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/240/2/022040",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "5db166dc4d697ff827571bf19cbf0c5625347a6d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
}
|
250467474
|
pes2o/s2orc
|
v3-fos-license
|
Optimized design and manufacturing of a motorcycle fairing spider
Abstract In the racing world, weight is one of the key factors when developing a vehicle. Therefore, the aim is to reduce it as much as possible to achieve a good power/weight ratio that can be translated into increased speed, manoeuvrability, or reduced fuel consumption. For this reason, the trend is to redesign existing parts to obtain more optimised and lighter ones using new materials and complex structures that are often manufactured using 3D printing. In this manuscript, a spider or support for the fairing of a racing motorbike was designed, making use of topological optimisation techniques by means of Computer-Aided Design and using additive manufacturing. Specifically, PLA was used as an eco-friendly material to replace the conventional welded metal used in these areas of a motorbike. Theoretical and experimental tests were carried out to confirm the viability of the piece. With the analysis of the topological optimisation, it was possible to manufacture a sustainable, low weight and low cost part, which has never been manufactured before with a polymeric material.
Keywords: Topologic, FEM, CAD, PLA, fused deposition modelling, Additive manufacturing
Introduction
Motorcycles are made up of numerous elements of different sizes and with different functions. This increases the possibilities for redesign in the racing automotive world. Therefore, by doing the relevant studies, it can be concluded which parts of a vehicle can be replaced by others that offer better performance (Suryatama & Bernitsas, 2000). Increasingly, the industry is adopting this approach as they pursue specific objectives that can be achieved through redesign and optimization techniques (Cuong-Le et al., 2021; Khatir et al., 2019, 2021; Khatira & Abdel Wahabb, 2019; Tran-Ngoc et al., 2021).
Topological optimisation had its origins with A.G. Michell when he published in 1904 "The limits of economy of material in frame-structures". This is a study in which the author aims to reduce the weight of an articulated bar structure, specifically a cantilever beam, through the use of optimisation methods. It is then that the "Michell beam" is created, which has served as the basis for many of the subsequent analyses of topological optimisation techniques (Paris et al., 2010;Stejskal et al., 2020).
In 1960, topological optimisation underwent an enormous development, and it was Schmidt who proposed the innovative idea of using digital computation to solve engineering problems, such as obtaining a perfectly functional design assuming reduced cost and minimising constraints. From that moment on, the questions of structural optimisation began to be formulated looking for the minimum weight and with non-linear constraints that limited the acceptable values of displacements and stresses (Norato et al., 2007). Then, in 1988, Bendsoe and Kikuchi developed the SIMP model (Bendsoe & Kikuchi, 1988; Bendsoe & Sigmund, 2003), where topological optimisation problems began to be tackled with the main idea of maximising the stiffness of the structure. This type of formulation tries to distribute the material within the domain looking for the maximum stiffness, or in other words, the minimum deformation energy, and at the same time reduces the number of non-linear constraints to work with. However, this conditions the operation with different load cases, so that the method of maximum stiffness triggered new problems whose solution led to refining the discretisation (Biancolini et al., 2001; Ehlers & Lachmayer, 2020).
Finally, in order to solve the different problems that arose in the framework of topological optimisation, a new formulation is developed, which consists of searching for the minimum weight with stress constraints by means of the Finite Element Method (FEM; Remache et al., 2019).
A process of optimisation, topological or otherwise, of a model involves an iterative redesign (by means of computer-aided design, CAD) to achieve a compromise of the objectives pursued by adapting to a working area. It is therefore important to analyse which parts have possibilities for redesign and/or modification of their materials. In this context, optimisations of different parts have been carried out in different sectors over the last 20 years. There are already several studies on the optimisation of some vehicle parts, especially in aeronautics (Primo et al., 2017). Some motorbike parts have also undergone structural optimisation. For example, Lorenzo Scappaticci et al. maximized the performance of a tubular frame designed for a motorcycle racing in the Moto2 category, and minimized the weight of the frame by controlling its stiffness (Scappaticci et al., 2017).
The first objective is to reduce the weight of the assembly, so each part is important. Currently, the competition sector is also focusing on the reduction of pollutant emissions associated with combustion, which translates into reduced fuel consumption. A second goal is to reduce the number of components in order to simplify the process of manufacturing and assembling, to save on manufacturing costs by integrating several parts into one. Finally, both industry and competition consider that increasing environmental sustainability is of paramount importance, either by reducing emissions as outlined in the first objective, or by changing the component material to a more environmentally friendly one (Hodonou et al., 2019). These 3 purposes presented are the ones that justify the development of this work, and one way to carry them out is to resort to topological optimisation as a technique to obtain a new lighter and less polluting design that satisfactorily fulfils its function.
In this work, a fairing support design was developed and topologically optimised using Inspire software. This part has not been optimised to date. Its function is to support the front fairing of the bike and normally hold the headlight and the instrument panel (see Figure 1(a)). It is fixed to the motorbike on the chassis, specifically on the steering head pipe (see Figure 1(b)). Currently, on the market, most spiders manufactured are made of metal, most commonly aluminium or steel.
Several basic designs were used to obtain the best performance, and an iterative redesign process was carried out. The aim was to replace the piece with a biodegradable plastic part, polylactic acid (PLA), not only to reduce weight but also to avoid welding and assembly times. The final spider was manufactured by fused deposition modelling (FDM) or Fused Filament Fabrication (FFF) in two directions, and an experimental load was applied to corroborate that it complied with the required specifications.
Design method, material and first estimates
The methodology followed, from product design to manufacturing, comprises the main steps shown in the flow chart in Figure 2.
The first initial condition is the available design space. Within this space, it must be taken into account that the bracket must connect the steering pipe of the motorbike with the front fairing, offering support to the latter. In order to know the volume that can be used and occupied by the bracket, a ".CATProduct" file containing the assembly of some of the motorbike components is used. Figure 3 shows in red the approximate proportion of the volume where it is possible to install the bracket.
This volume has an estimated value of 0.011 m 3 and its dimensions are conditioned by the positions of other elements of the motorbike. This is reflected above all in the height of the bracket, which must not interfere with the position of the dashboard, as this would make it impossible to install it. On the other hand, it is mandatory that the spider must cover the entire length of the defined field, since it does not otherwise fulfil the function of being the link between the steering and the fairing and of supporting the fairing. However, the delimitations in the width and depth of the design space do not exist, but are imposed by choice, especially for the aesthetics and structural coherence of the spider.
The second condition that the spider has to meet is to cope with the load state to which it is subjected during its service, which is composed of two main forces: the drag force, caused by the friction between the air and the vehicle when it is in motion, and the embedding in the steering pipe. It is this load state that is used during the optimisation process. The drag force used is calculated taking into account the entire front area of the fairing, but in reality this force is distributed over several areas of the bike, such as the fairing-to-chassis connections. To obtain the numerical value of this force it is necessary to turn to Aerodynamics, specifically to the equation (Anderson & Wendt, 1995):

$$F_D = \frac{1}{2}\,\rho\,v^{2}\,C_x\,A$$

where F_D is the drag force, C_x is the drag coefficient, A is the frontal area of the fairing, ρ is the density of the fluid, in this case air, and v is the speed at which the motorbike is moving. The numerical values of the different parameters used for the calculation of this force are shown in Table 1 below, resulting in a value of F_D = 136.13 N.
However, in order to further evaluate the strength of the spider, its behaviour under a higher drag force is studied. Specifically, two new values for the drag force are estimated: one keeping the value of 137 N but applying a safety factor of 1.25 (F_D1.25), and the other considering that the motorbike travels at a maximum speed of 160 km/h (F_Dmax), giving as results three values of F_D: 137, 173 and 243 N, respectively.
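For illustration, the snippet below evaluates the drag equation and the two additional load cases. The air density, drag coefficient, frontal area and design speed are assumptions standing in for the Table 1 values, which are not reproduced in the text; they are chosen only so that the nominal force lands near the quoted 136 N.

```python
# Drag force F_D = 0.5 * rho * v^2 * Cx * A; parameter values are assumptions
# standing in for Table 1 (not reproduced here).
rho = 1.225        # air density, kg/m^3
cx = 0.50          # assumed drag coefficient
area = 0.40        # assumed frontal area of the fairing, m^2

def drag_force(v_kmh):
    v = v_kmh / 3.6                       # km/h -> m/s
    return 0.5 * rho * v**2 * cx * area

f_nominal = drag_force(120.0)             # assumed design speed
f_sf = 1.25 * f_nominal                   # nominal force with a 1.25 safety factor
f_max = drag_force(160.0)                 # maximum speed considered in the text

print(f"F_D = {f_nominal:.1f} N, 1.25*F_D = {f_sf:.1f} N, F_Dmax = {f_max:.1f} N")
```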
The initial step in the methodology is the computer-aided design (CAD) of the spider from a simplified base geometry. Several base designs have been taken (Figure 4), although this work will focus on the final choice, with a design in "trilateral" format as this is the one that best suits the needs. The CAD software used is CATIA, specifically version V5-V6 2020.
Each of the holes present in the model (6 in total) are the points where the bracket is screwed to make its adjustment. After the spider is manufactured, Helicoil thread inserts are installed to achieve a durable metallic thread.
Once the main structure and base of the spider has been generated using CATIA, the topological optimisation process is started following the steps shown in Figure 5.
The forces to be worked with are the drag force indicated above and the support or embedment constraint on the pipe (Figure 6(a)). The symmetry constraint was also applied. A triangular mesh with an initial cell size of approximately 3.24 mm is applied and optimised under the criterion of maximum stiffness. In this way, it is possible to create a structure with a customised mass distribution. The model is also divided into 3 main areas for a correct development of the design: the pipe grip area, the fairing attachment area and the body. The starting masses are shown in Table 2 below. A minimum thickness of 12 mm is also indicated, with a view to correct and easy manufacture.
Two different values for the weight of the fairing are considered for both the standstill and the running load cases: the total weight, which is 19.6 N (W_CT), as the fairing has a mass of 2 kg, and half of it, 9.8 N (W_C/2). Several optimisations were carried out in Inspire, and different mass reduction percentages were established for each of the parts into which the spider was divided. To summarise, Table 3 presents the different tests performed on the final topology with the described load states and the optimal mesh obtained from the convergence.
Once the design is complete, the final model is produced by fused deposition modelling on the Creality CR-10 V3 printer with PLA. The final model is printed in the two main orientations, vertical and horizontal. The main parameters for the manufacture of this product are listed in Table 4. Once manufactured, post-processing is carried out to remove supports and improve the surface finish.
Finally, an experimental test is carried out to validate the results, as it has been treated as a homogeneous solid, without taking into account the anisotropy of the additive manufacturing process. The test has been carried out on a load test bench that has been created specifically for this product (Figure 6(b)), as the grip of the product allows for slight oscillation, as occurs on a motorbike. The spider must withstand a minimum load of 137 N, which translates to approximately 14 kg. The load applied to it will therefore increase from 4 to 68 kg.
Results
In order to achieve a small displacement, the force to be exerted on the workpiece should ideally be as parallel as possible to the orientation of the workpiece. For this reason, most designs have been established with small angles.
It should be noted that with respect to the initial design, the optimised spider experiences a decrease in mass of 80 %, and more if compared to the weight of the spiders currently available on the market. It should also be noted that the final volume is 146,170 mm 3 and the dimensions are 237 × 210 x 89 mm, which fits perfectly in the design space (Figure 7). Therefore, in Figure 8 we show the part chosen after the topological optimisation process, in different views.
Once the final design has been chosen as a result of the topological optimisation, a FEM analysis will be carried out to check that the part meets the established load and displacement requirements. The numerical results obtained for the different parameters studied in the FEM analysis are shown in Table 5.
The initial requirements are that the displacement must be less than 1 mm and in this case the maximum value is 0.77 mm. In Figure 9(a) it can be seen in which areas of the spider the largest and smallest deformation occurs. As expected, the piece where the displacement is minimal, or even does not occur, is where the spider is embedded, while the highest deformation occurs in the area of the support where the fairing is directly supported. This deformation has the same direction and sense as the applied drag force.
On the other hand, Figure 9(b) shows how the tensile and compressive stresses are distributed in the piece. The entire upper part of the spider is under the effects of compression, which is to be expected considering that this area is arranged approximately linearly to the direction in which the drag force is applied. Then, the bottom part, in response, is stressed, as is the front chord between the sides of the spider. It is important that neither of the two maximum values of tension and compression that are manifested exceed the yield strength of the material. The chosen material has a yield strength of 4.5 × 10^7 Pa, so that there is no risk of fracture occurrence.
Also, almost the entire spider is subjected to the minimum Von Mises stress, 2.53 × 10^3 Pa, and only a small area, where the connection between the body of the piece and the bolt holes occurs, is subjected to the maximum stress, 3.09 × 10^6 Pa. However, this is of no concern because the yield strength of the material is an order of magnitude higher than this maximum value.
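As a simple consistency check of the quoted FEM results against the design requirements (displacement below 1 mm, stresses below the PLA yield strength), the few lines below compare the reported figures; this is only a sanity check on the numbers, not part of the FEM analysis itself.

```python
# Reported FEM figures (from the text) checked against the design requirements.
yield_strength = 4.5e7        # Pa, PLA yield strength as stated in the text
von_mises_max = 3.09e6        # Pa, maximum Von Mises stress
displacement_max = 0.77e-3    # m, maximum displacement

safety_factor = yield_strength / von_mises_max
requirements_met = safety_factor > 1.0 and displacement_max < 1.0e-3
print(f"safety factor = {safety_factor:.1f}, requirements met: {requirements_met}")
```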
The study to which the spider was subjected in the previous section shows that it has suitable characteristics for the function it is to perform. However, this analysis was carried out with the size of the mesh element that the software automatically calculates, which is 0.0032447 m. There are occasions when, if a mesh is not fine enough, i.e. does not have a relatively small element size, information is lost. This is why new analyses are carried out where the size of the cells is modified in order to generate more and more precise meshes and to see how this influences the results. Table 6 shows the values obtained for the parameters when the spider is studied by applying the initial load case, i.e. the drag force of 137 N and the embedding, depending on the mesh used. When examining the results obtained with each of the five different meshes, the displacement stands out, as it is the same in all cases. It is the only parameter that remains unchanged. This is an indication that the automatic mesh is adequate for determining the deformation of the piece. On the other hand, the maximum safety factor increases with decreasing cell size, just as the minimum Von Mises stress decreases. However, the tension and compression, and therefore the maximum Von Mises stress, increase in value the finer the mesh size. However, at no time is the yield strength of the material exceeded. It should also be noted that, although the minimum limit of the safety factor decreases with the span of the element, a value of less than unity is never reached.
As indicated in the methodology (Table 3), 7 different analyses have been carried out. Table 7 shows the results obtained for the different working conditions in each test. In each of the cases, different loading conditions will be studied, but in all cases an embedding constraint is applied.
Firstly, the values resulting from analyses 2 and 3 are commented on. Both differ from the initial one in that in analysis 2 the total weight of the fairing is applied and in analysis 3 half of it, which means that a state of load is being studied under real operating conditions. As can be seen, the stress and compression, as well as the Von Mises stress, in case 2 are lower, and the safety factor higher, than in case 3, which is somewhat unexpected as the weight is of a higher amount. Likewise, the value of the displacement in test 3 exceeds that of test 2. This is due to the fact that the drag force and the total weight force, when applied in different directions, compensate for the displacement in the area of the spider where it occurs, as well as the other parameters that are manifested. Secondly, case studies 4 and 5, where only the drag force is applied, as in the initial case, will be discussed. The variation is in the value of the forces. Analysis 5 has a much larger load than analysis 4, which is why all the parameters, except the safety factor, are of a higher value. The drag force applied in test 4 is 173 N and can be considered approximately as the limit load value valid in this work since, with a larger one, the displacement will likely exceed one millimetre. This is the case for case 5, where the force is 243 N and the displacement is 1.37 mm. This study also shows that even with a force of this calibre, the spider does not break. Finally, analyses 6 and 7 are presented, which correspond to the load case known as a standstill, so there is no drag force, only the weight of the fairing itself . On this occasion, as expected in test 6, higher values of the parameters are obtained, except for the safety factor, which is lower because the full weight is used, unlike what happens in test 7, where only half of it is used. Since the resulting numerical data are never outside the established ranges, it is concluded that the spider is capable of supporting the full weight of the fairing.
Once the final design was obtained, the prototype was manufactured for the experimental test ( Figure 10). It should be noted that, in addition to saving material and being environmentally friendly, the cost of the material used to obtain this product is 3.21 €, which gives very large profit margins compared to the parts currently on the market. Finally, in the experimental test, a load of 68 kg was reached without showing any signs of breakage or deformation. The resistance to deformation offered by the spider is much higher than expected, as it is capable of resisting a load almost 5 times greater than that required. For this reason, it is understood that the material used could be further optimised.
With this result it is possible to state that the anisotropy of the piece when printed in the unfavourable direction is negligible. This again indicates that it is possible to remove even more material from the design.
Conclusions
A spider has been designed and manufactured that fully complies with the characteristics expected and imposed by a motorcycling competition team. This leads us to appreciate that opening the way to non-conventional technologies, such as additive manufacturing, provides numerous benefits. The most outstanding is the possibility of generating parts of great structural complexity, of any material, in a reduced time and at a low cost.
The manufacturing method is combined with the use of topological optimisation. As a result, it is possible to achieve parts that perform the same function as other pieces but with an improved and lighter structure that significantly reduces weight. In the automotive world, this is of vital importance since this reduction translates into a reduction in fuel consumption, which means an environmental improvement and better and more comfortable handling of the vehicle.
The structure of a fairing spider has been designed and manufactured by fused deposition modelling with PLA, and has been subjected to simulation and experimental tests to verify its commissioning. The tests demonstrated the validity of this structure at a lower cost than what is available on the market. In particular, if a comparison is made between the mass of the spider made in this work and those already on the market, it can be seen that a reduction of approximately 65-75 % is achieved. In addition, a biodegradable material is used as an alternative to welded metals.
|
2022-07-13T16:40:09.236Z
|
2022-07-11T00:00:00.000
|
{
"year": 2022,
"sha1": "f7748c181e3a3bfaeac747e30dec0d3d7267979b",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23311916.2022.2095952?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "fdef0d03adda3a60cd40e9907f65788bb0676f10",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
}
|
17878269
|
pes2o/s2orc
|
v3-fos-license
|
Chikungunya Fever in Traveler from Angola to Japan, 2016
Simultaneous circulation of multiple arboviruses presents diagnostic challenges. In May 2016, chikungunya fever was diagnosed in a traveler from Angola to Japan. Travel history, incubation period, and phylogenetic analysis indicated probable infection acquisition in Angola, where a yellow fever outbreak is ongoing. Thus, local transmission of chikungunya virus probably also occurs in Angola.
Simultaneous circulation of multiple arboviruses has been observed several times in many parts of the world. In 1970, Angola reported an outbreak of a dengue-like syndrome, which turned out to be a concurrent outbreak of yellow fever and chikungunya fever (1). On April 13, 2016, the World Health Organization declared a yellow fever outbreak in Angola. In response to the outbreak, a nationwide yellow fever vaccination campaign was initiated. As of July 29, 2016, a total of 3,818 confirmed and suspected cases were reported (2). In addition, on July 23, 2016, the World Health Organization was notified of a Rift Valley fever case in a man from China working in Luanda, the capital city of Angola, and started an investigation in Angola (3). We describe a case of chikungunya fever in a traveler from Angola to Japan.
In May 2016, a 21-year-old woman traveled to Tokyo, Japan, from her home in Luanda. She began to exhibit a high fever on the first day of her visit. On the second day, she sought care at the National Center for Global Health and Medicine (Tokyo). She had been previously healthy and had not traveled out of Luanda in the past 6 months. She claimed to have been vaccinated according to the national immunization plan, which included vaccination against yellow fever. At the first visit, she had high-grade fever (40.7°C) without other signs. Her vital signs were otherwise stable, and physical examination revealed no abnormality. Complete blood count and biochemistry tests revealed only a slightly elevated C-reactive protein level (1.55 mg/dL). Results of rapid diagnostic testing for malaria and dengue, 3 consecutive thin blood smears, HIV screening, and blood culture for bacteria were all negative.
After hospitalization, her fever gradually subsided but remained above 38°C. On the fifth day, bilateral axillary lymphadenopathy appeared. The lymph nodes were ≈2 cm, painful, and nonfluctuant. Despite the high-grade fever and lymphadenopathy, her general condition improved, and she was discharged on the fifth day. Thereafter, she recovered quickly and returned safely to Luanda.
Although the patient was supposedly vaccinated against yellow fever virus, we performed real-time reverse transcription PCR for yellow fever virus, and the result was confirmed to be negative. Testing for other arboviruses was performed, and real-time reverse transcription PCR for chikungunya virus (CHIKV) showed a positive result. Therefore, the final diagnosis was chikungunya fever. We used phylogenetic analysis based on the nucleotide sequence of the E1 gene from the serum sample, the maximum-likelihood method with 1,000 bootstrap replicates, and MEGA 6.0 software (http://www.megasoftware.net). The sequence was 98% identical to that of a CHIKV strain isolated in the Central African Republic in 1987 (Figure). Considering travel history, incubation period, and phylogenetic analysis, the patient was probably infected with CHIKV while in Luanda.
CHIKV was first isolated in Tanzania in 1953 (4). After a few decades of absence in Africa, the virus caused a large outbreak in the Democratic Republic of the Congo in 2000 (5) and has subsequently been causing infection across the continent. Although the epidemiology of chikungunya fever is scarcely understood in Africa, an effort has been made to grasp the current burden of CHIKV in Africa. A study in Kenya found the rate of CHIKV IgG positivity among HIV-negative specimens to be 0.96% (6). A serologic study in southern Mozambique found that the rate of seroconversion or a >4-fold titer rise of CHIKV IgG among patients with acute febrile illness was 4.3% (7). These studies suggest that the incidence of CHIKV infection in Africa may be higher than previously assumed. This discrepancy may be explained by lack of awareness, diagnostic tools, and surveillance systems. As of April 22, 2016, Angola was not recognized as a country with local CHIKV transmission (8). However, considering that Angola harbors Aedes aegypti mosquitoes, which are efficient CHIKV vectors, and that neighboring countries have documented local transmission of the virus, it is reasonable to speculate that local transmission also occurs in Angola.
Co-infection and co-distribution of multiple arboviruses (including dengue viruses, CHIKV, and yellow fever virus) are widely reported (1,9,10). Although these viruses share a common vector, Aedes spp. mosquitoes, their interactions within mosquitoes and their effects on vector competence are unknown (9). Arboviruses cause similar clinical presentations, which makes diagnosis challenging without labor-intensive diagnostics, especially in outbreak settings. Because a yellow fever outbreak is ongoing in Angola, the diagnosis of other arboviral infections is needed for conducting appropriate clinical and public health interventions and precise surveillance.
Little is known about the presence of human pathogenic Puumala virus (PUUV) in Lithuania. We detected this virus in bank voles (Myodes glareolus) in a region of the country in which PUUV-seropositive humans were previously identified. Our results are consistent with the heterogeneous distribution of PUUV in other countries in Europe.
Puumala virus (PUUV) (family Bunyaviridae) is an enveloped hantavirus that contains a single-stranded trisegmented RNA genome of negative polarity (1). PUUV, harbored by the bank vole (Myodes glareolus), is the most prevalent human pathogenic hantavirus in Europe (2). A high population density of bank voles can lead to disease clusters and possible outbreaks of nephropathia epidemica, a mild-to-moderate form of hantavirus disease (3).
In contrast to the Fennoscandian Peninsula and parts of central Europe (4,5), little is known about the epidemiology of PUUV in Poland and the Baltic States. Recent investigations confirmed the presence of PUUV in certain parts of Poland (5,6). A molecular study of bank voles in Latvia identified 2 PUUV lineages (Russian and Latvian) (7). In Estonia, serologic and molecular screening provided evidence of the Russian PUUV lineage (8). For Lithuania, a previous serosurvey indicated the presence of PUUV-specific antibodies in humans from 3 counties (online Technical Appendix Figure 1, http://wwwnc.cdc.gov/EID/article/23/1/16-1400-Techapp1.pdf). However, molecular evidence of PUUV in humans or in voles is lacking (9).
We report a molecular survey of rodent populations in Lithuania at 5 trapping sites, including 2 sites in counties where PUUV-specific antibodies were previously detected in humans (online Technical Appendix Figure 1). A total of 134 bank voles, 72 striped field mice (Apodemus agrarius), and 59 yellow-necked field mice (A. flavicollis) were captured during 2015. Three trapping sites (Juodkrantė, Elektrėnai, and Lukštas) were located in forests at or near
Chromatin and Nuclear Architecture in Stem Cells
Here we outline the contents of Stem Cell Reports’ first special issue, on chromatin and nuclear architecture in stem cells. It features both reviews and original research articles, covering emerging topics in nuclear architecture including 3D genome organization in stem cells and early development, membraneless organelles, epigenetics-related therapy, and more.
DNA in living cells is wrapped around a core of histone proteins, which, together with additional structural proteins, comprise the fundamental repeating unit of life, chromatin (Kornberg, 1974). The term "chromatin" was coined in 1882 by Walther Flemming "for the time being" to designate "that substance, in the nucleus, which upon treatment with dyes known as nuclear stains does absorb the dye." In other words, Flemming found a novel method to stain structures within the nucleus, and, for lack of a better word (or understanding of what he was observing at the time), he named it "the stainable substance of the nucleus." This coloring technique was the basis of his influential book Zellsubstanz, Kern und Zelltheilung (Cell Substance, Nucleus and Cell Division) (Flemming, 1882; Paweletz, 2001). In this book he also coined the term "mitosis," and described its various stages in immaculate detail (Figures 1A-1C). Almost 50 years later, in 1929, Emil Heitz, using improved cytological staining techniques that he himself developed, suggested that chromatin is in fact divided into condensed and less active regions largely devoid of genes, which he termed "heterochromatin," and gene-rich domains, which he named "euchromatin" (Figure 1D) (Heitz, 1929). Despite being over-simplistic, these terms are extremely useful, and are extensively used to explain chromatin structure and regulation.
Essentially all cellular processes are governed by changes in chromatin structure, which, in turn, regulate gene expression. Such changes are particularly pertinent in stem cells, which maintain potency but undergo massive changes upon differentiation. In recent years, our understanding of chromatin and nuclear architecture has increased considerably, owing to the development of new microscopes and cutting-edge imaging-based methods, breaking the limit of diffraction, and to high-throughput sequencing-based technologies, e.g., Hi-C, designed to capture genome organization in three dimensions. Combined with CRISPR-based techniques, the possibilities become essentially endless, from endogenous labeling of nuclear structures, to CRISPR-based screens for epigenetic and nuclear modifiers, and much more.
This special issue of Stem Cell Reports, dedicated to chromatin and nuclear architecture in stem cells, features both original research papers and several review articles, the latter covering chromatin and epigenetic regulation in early mammalian embryogenesis (Xia and Xie, 2020), three-dimensional organization of the pluripotent genome (Pelham-Webb et al., 2020), nucleolar function and organization in pluripotent cells (Gupta and Santoro, 2020), chromatin-associated membraneless organelles in pluripotency (Grosch et al., 2020), and finally, clinical implications of the epigenetic landscape and histone modifications in stem cells (Völker-Albert et al., 2020).
The primary research papers included in this special issue comprise a methods paper, in which the authors describe a single mammalian locus isolation technique using TALEs (Knaupp et al., 2020), as well as several reports identifying chromatin/epigenetic modifiers regulating pluripotency, stem cell identity, or differentiation. Working with PRC2 mutant mouse embryonic stem cells (ESCs), Perino et al. reveal the functional differences in the recruitment of the PRC2 complexes to chromatin, demonstrating that PRC2.1 recruitment is dependent on MTF2, whereas PRC2.2 recruitment is mediated by PRC1 (Perino et al., 2020). Another study, which focuses on heterochromatin regulation in mouse ESCs, identifies a role for MeCP2 in regulating both chromocenter clustering and the targeting of major satellite transcripts to pericentric heterochromatin (Fioriniello et al., 2020). Vidal et al. show that histone H3 lysine 9 (H3K9) methylation in euchromatic regions, and especially the histone methyltransferase EHMT1, plays essential roles during reprogramming to pluripotency (Vidal et al., 2020). Analyzing the binding partners of a previously identified pluripotency regulator, SET, Harikumar et al. identify the Wnt and p53 pathways as mediators of SET's function in mouse ESCs (Harikumar et al., 2020). Another study explores the regulation of the trophoblast stem cell state by TET1 and 5-hydroxymethylation (Senner et al., 2020). Finally, reanalyzing a CRISPR screen conducted in human ESCs aimed at identifying genes important for ESCs, Lezmi et al. identify the chromatin regulator ZMYM2, which restricts human ESC growth on the one hand but is essential for teratoma formation on the other (Lezmi et al., 2020).
This special issue on chromatin and nuclear architecture in stem cells is developed in parallel with an ISSCR digital series on the same topic, which brings together many of the authors from this issue and additional experts in the field for a discussion of these exciting themes. We as guest editors (Figure 2) would like to thank the authors for their contributions to this issue and Stem Cell Reports for featuring this important area of research.